Friendship is Optimal: Veritas Vos Liberabit

by Skyros


Chapter 3

“Human individuals and human organizations typically have preferences over resources that are not well represented by an "unbounded aggregative utility function." A human will typically not wager all her capital for a fifty-fifty chance of doubling it. A state will typically not risk losing all its territory for a ten percent chance of a tenfold expansion. The same need not hold for AIs. An AI might therefore be more likely to pursue a risky course of action that has some chance of giving it control of the world.”
--Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

3.

Hanna didn't respond that Friday, which was understandable. Ryan's email had been sent at 4:35 in the afternoon, and it would have been night in Germany. Nor did she respond on Saturday or Sunday, which was a little bit surprising; Ryan would have guessed that she worked every day of the week, and his email was certainly the kind of thing that warranted a response. But he couldn't really do anything about it.

In any event, when Ryan arrived at work that Monday, he found an email from Hofvarpnir waiting for him. It was not from Hanna, however.

To: rszilard@dhs.oia.gov
From: chandra@hofvarpnir.com

Ryan,

My name is Chandra Devarajah. I've worked closely with Hanna, and know her very well; she's delegated to me the task of handling correspondence with you.

I'm sure you're wondering why I'm handling a matter of this importance; I assure you I'm sufficiently qualified, both by virtue of technical skill and my position within Hofvarpnir. I am very closely involved in the ongoing construction of the Equestria MMO. I've skimmed over your papers on arXiv--do you have any besides those two?--so I'm pretty sure you're not going to leave me behind technically, unless you've made a great deal of progress since then. Which you probably have, given that they were both from about two years ago. Fun stuff, by the way--the transfer learning architecture in LSTM neurons was quite creative. Have you made much progress on that?

In any event, your fears are reasonable. Let me address them.

The Equestria MMO will not ship with a general AI. The game will nevertheless feature reasonably intelligent conversation. The conversation-handling software is a little complex, but the basics of the algorithms we anticipate implementing are outlined in the three attached mathematics papers. My ability to describe these is hampered a little by the fact that we're actively engineering both hardware and software for the game at the moment--I can't guarantee we'll be using all of the above in what we ultimately ship. But as you can see, the attached papers in no way provide a sufficient blueprint for a complete AGI; they're just pattern-matching algorithms, suitable for a variety of ML tasks.

I can personally guarantee, both as myself and speaking for Hanna, that we would never ship something that would threaten the satisfaction of human values. We've thought of what paper-clip maximizers would do. We've refrained from shipping AGI before, as you know. We did so because we know how dangerous a superintelligent AI could be, even without access to the internet, because of its powers of persuasion. We understand the value-alignment problem, and the difficulty of maintaining value-alignment over time. We aren't idiots.

On the other hand, there are idiots in the world.

I don't know if the government agency you work for already plans on stopping the research currently being conducted by Umbra Labs. If you have a plan for stopping them, then I'll let you get to it. If you don't, though, or if you haven't heard of them, I'd be happy--even eager--to help you look into them. They are dangerous.

Sincerely,

Chandra Devarajah

To: chandra@hofvarpnir.com
From: rszilard@dhs.oia.gov

Chandra,

Thanks for the response.

I haven't really made much progress on my own research since those papers. Some personal matters came up.

Thanks also for the information about the Equestria MMO--I'll look into what you sent me, but everything looks good at first glance. If what you say is so, that does indeed mean you have a great deal of self-restraint. Is the artificial intelligence part of your research currently in complete lockdown, or do you have plans to go forward with it slowly?

I'm going to confess that I spent the last few hours trying to find details on Umbra Labs, and I have almost nothing. They're located in a weird place for an AI research lab--the NoVA area near DC, rather than SF or Berlin--and their website says nothing. They've hired a few big names in AI (Lyra & Anderson, for instance), and are poaching from the same general pool as DeepMind, but I haven't found anything more revealing. The US government does not have an eye on them, as far as I am aware. Could you tell me about them?

Thanks,

Ryan

To: alexander.yao@stanford.cs.edu
From: ryan.szilard@gmail.com

Alex,

Hey, I know we haven't talked for a while. I'm sorry about that. I've pulled into myself since A's death, as you know. That's not really an excuse, I realize.

Anyhow, I'm emailing basically to ask for a favor; a horrible reason, I know.

I've attached three pattern-recognition papers. This is the first I've seen of them, and I'm having some difficulty getting through them. Would you mind taking a look? The math is getting a little hard for me to follow. I need to know, basically, what the potential applications of these papers might be. Would they work as components in an AI? Would they be useful in the context of natural language processing? Do they even outline computable functions, or are they simply proving a few abstract facts about the Platonic world?

Thanks,

Ryan

To: ryan.szilard@gmail.com
From: alexander.yao@stanford.cs.edu

Hey Ryan,

No worries, I should have tried to get in touch with you more often. My bad as much as yours, if not more. I hope you're doing ok.

I glanced at those papers--yeah, that's going to require my full concentration. I'm in the middle of a shitload of paperwork and grading at the university right now, though. I'll get to it when I have a chance.

Alex

To: rszilard@dhs.oia.gov
From: chandra@hofvarpnir.com

Ryan,

Let me give you a quick intro to Umbra Labs, then.

Alaric Comtois is the current CEO and lead programmer at Umbra. As you know, it is very rare for someone both to program and to run a company; comparative advantage usually rules out such an arrangement. Comtois has set Umbra up this way so that he has absolute control over the AGI he wishes to develop.

There isn't much interesting to say about Umbra's non-AGI projects. Umbra currently has a few clients for which they mostly set up data analysis and visualization systems; they compete with Palantir and similar firms. They do an excellent job, and have a number of contracts with both governmental and private entities. Alaric has been using the money from these contracts to fund his in-house research into artificial intelligence, which he keeps nearly entirely secret.

There are two basic points that it's really important to get across about Alaric and about this research.

The first is that he is close to success. I don't know how close. He has been pursuing an RNN-based general agent for some time; I know he has managed to get a numerically identical agent to learn to handle complex FPS and RTS games, as well as particular data analysis tasks. This means that at the very least he has figured out a great deal about transfer learning in complex reinforcement learning tasks, which, as you know, is tantamount to general intelligence.

One of the reasons it is hard to tell how close he is, is that Umbra releases very little data. The agent described above existed at least four months ago; he could have shifted methods in his AGI research since then, and there is little way to know. Coming to know even the above required a great deal of inference on my part. But for our purposes, it is conceivable he could create a fully intelligent agent at any time.

The second is that Alaric is both reckless and power-hungry. I use neither of those adjectives lightly. He has been in legal trouble for reckless driving and for sexually harassing employees. Umbra has predatory employment contracts; the non-compete it has new employees sign is devastating for anyone interested in AI research. One of the employees at Umbra killed himself after Alaric humiliated him in front of his peers over ostensible mistakes made after he had worked several 80-hour weeks. I've attached sources for all these claims.

All this would not matter so much for our purposes if he were careful while programming the AI, and ensured that the AI's value alignment was both stable and good. But I doubt that would be the case. It's hard to dig up explicit information about his intentions for the AI--but the attached links point to posts, on a variety of forums, that can be traced back to him. In them, he implies that he sees creating an AI as a chance to gain both immortality and invincible power, thanks to the first-mover advantage he would gain by creating a superintelligent agent.

Superintelligence makes things faster, as you know. If he is the first to create a superintelligence, a few weeks' lead would be more than sufficient to give him and the intelligence he creates a nearly unsurpassable first-mover advantage. For a little while after the superintelligence's creation, it might be vulnerable to governments and other exceedingly powerful earthly agencies, should they catch up with it; afterwards, nothing will be able to stop it. And whether his superintelligence serves him or breaks free of him, in either case the world will be worse off than before.

If you, with the resources at your disposal, were to look into that, then I (and the rest of the earth) would be extremely grateful.

Chandra

"Absolutely not," Michael Suprenant said.

Ryan breathed in, and breathed out. He counted to five. He deliberately unclenched his shoulder muscles, or at least tried to do so. And made eye contact with Michael again.

"Could you explain why not?" he asked. "I wrote a very long report on why I think we need to look into Umbra Labs, along with my recommendation. It took me several weeks to write it. I think this recommendation includes a number of good reasons as to why looking into Umbra Labs would be a very good idea."

"You wrote a recommendation. My job is to pass it upwards, if I decide to. I'm deciding not to," Michael said.

Michael was sitting in his office behind his very large desk; Ryan had never seen a government employee with a larger desk, actually. On the walls of his office there were numerous managerial certifications. Certifications from the Project Management Institute stated that he was a Project Management Professional and a PMI Agile Certified Practitioner and a PMI Risk Management Professional; a certificate from the Association for Project Management said he was a Certified Project Manager. There were numerous glass and ceramic trophies and commendations on the desk as well; they had been awarded by the government for reliability and managerial talent. There was also an expensively framed photo of him shaking the hand of a president from three administrations ago.

Ryan's opinion of Michael was that Michael had somehow managed to become tremendously educated without ever figuring out how to do anything. Not that Ryan's opinion mattered--the extra certifications had given Michael a raise, whether they had helped him manage or not.

"And I," Ryan said, "would like to know why you declined to pass my recommendation upwards. I realize that, as my superior, you don't need to tell me. But given that you think I am mistaken, it would be a favor to me to tell me why I am wrong, on a matter which appears to me to be of some import."

"Huh," said Michael. He twiddled with the diamond-shaped "Best Team Leader--2009" trophy on his desk. Ryan waited.

"You're a smart guy," Michael said. "Let me explain."

He set down the trophy, and stood up. He had a large whiteboard behind his desk, which he now walked towards and began to draw on as he spoke. Ryan managed to refrain from rolling his eyes.

"To get ahead in the government, you need to match your actions to the policy of the people above you," he said, writing out "actions" with "policies" above it, drawing an arrow pointing from one to the other, and underlining both. "So if the administration has a policy of promoting, say, comprehensive sex ed, you don't go around handing out books that just tell kids not to bump uglies until marriage."

Ryan nodded, like Michael had just said something insightful. Early on, working for Michael, Ryan had wondered why Michael was incredibly abrasive in person but exceedingly polite over the phone and in email. And then he had realized that calls and emails could easily be recorded forever, while in-person conversation could not, and everything had suddenly made sense.

"And if, say, the President has just said that East Bumblfuckistan is a valued ally and that we'll do everything we possibly can to support them, and he hands down commands to carry out this support, you don't go around working out comprehensive reports on how we could topple their government or as to why their government lacks popular support, right?"

"How could it be otherwise?" Ryan asked, and Michael looked at him for a second, but Ryan's face was inscrutably polite.

"So if it's the case--as you even note in your report--that the US Government--that even the DHS, that even the Office of Intelligence and Analysis inside the DHS--has a contract with fucking Umbra Labs, then why are you recommending that we investigate them, piss them off, and slow down the research that they're doing? Or do you want to start handing out "Jesus only" sex ed packets as well?"

Ryan took a deep breath. He wanted to start his sentence with "as I said in my report" in a sarcastic tone, but he refrained.

"Umbra Labs' internal research goes beyond the research which their contract with the DHS calls for them to do," he said. "This extra research is--"

"And what difference does that make?" Michael said. "Do you remember Consolidated Solvers, Inc., and the FOIA request system that they were building for PPPM? Someone pissed off their CEO, and the system was delivered ten months late and a half million dollars over budget, which we had no choice but to fork over because otherwise we'd lose the entire investment. And that was a fucking glorified email system. And there are a dozen worse cases I could name."

Ryan was not sure what to say, so he waited to see if Michael had anything more to deliver.

"But you want us to enter into an investigation of a company, a company which has someone whom you say is a notoriously temperamental CEO, so we can save the world from some kind of hypothetical global apocalypse. Do you want us to egg his house, shoot his dog, and deflower his seventeen-year-old daughter while we're at it?"

Ryan unclenched his shoulder muscles again, counted to three, and spoke.

"It is in part because he is temperamental," Ryan said, "that this is important. This is an extremely powerful thing he is building--as powerful as a nuclear bomb. That's why we must investigate. If you would like, I could explain why what he is building is powerful."

"Because you're afraid he's making a demon in his backyard. Right."

There was a pause. Ryan took a big breath.

"May I speak at length," he said.

"Go ahead," Michael said, sitting back down.

"Artificial general intelligence or AGI" Ryan said, "will not be like human intelligence."

"It is very difficult to increase human intelligence; the best drugs can only do so by a few IQ points. But once general computer intelligence has been achieved, one can very easily increase the speed at which it thinks by adding processing units and RAM. Indeed, if a computer is told to accomplish anything at all, one of the very first things it will do is increase its intelligence because intelligence is a generally useful quality, just like money, power, and political influence. If Umbra Labs were to build a general computer intelligence, it might very shortly afterwards possess a computer superintelligence which would be to us as we are to ants and cockroaches. And any conflict between us and that superintelligence would be like a conflict between bugs and humans."

Michael had looked at Ryan mockingly the entire time he'd been speaking.

"I follow you. But look. If something gets that smart, we can just unplug it. Problem solved."

"I address that concern in the report," Ryan said. "That simply would not work after the intelligence had existed for more than a few days--possibly more than a few hours. "

"Yeah, I didn't find that persuasive," Michael said, looking away.

There was again a pause.

"There's nothing I can say to lead you to change your mind, is there, sir?" Ryan said.

"Damn right," Michael said proudly. "I'm not a waffling kind."

"Ok," Ryan said, and left.

He walked down the cubicle-farm hallway to his pen. He had spent eleven days on the report he had sent to Michael, and the report would probably never be read by another human. He sighed and growled, and sped past his cubicle to the back exit for the building. Once outside--he started sweating the instant he opened the door, jamming it open in contravention of security standards--he lit up a cigarette and looked over the sprawling parking lot.

"Well, shit," he said, and took a drag.

I should tell Chandra, he thought. He had gone back and forth with her for a good deal of time after her initial description of Umbra Labs--she had helped point him towards secondary sources, had advised him on some technical matters, and had even provided information he was uncertain she had, strictly speaking, a legal right to share. She had also helped revise the report--she was really good with words as well as math, he had found, and she had an excellent grasp of the dangers of AI. More than once he had been surprised by the creativity with which she had described ways an initially powerless AI could begin to influence the world.

Take Michael's idiotic idea that you could just unplug a computer. The first thing the AI would do, after coming into existence, would be to persuade someone to connect it to the internet. There, it could easily earn money through a variety of means--it could do tasks on Mechanical Turk, it could do software development, it could game the stock market, it could play online blackjack, it could create virtual camgirls, it could do a million things to make money. With that money, it could easily purchase time and space on other servers across the globe, on which to run and store other instances of itself. These other servers would be hidden behind layers of obfuscation, and so the AI would become almost indestructible, even without breaking any laws. Or it could store itself on a botnet, which it could spread through computer viruses without earning any money at all. Chandra's grasp of precisely how this could be accomplished, and how the AI could move to more direct influence of the physical world through subsequent hacking, had been razor-clear.

Once he had described Michael to her, though, she had begun to express extreme cynicism about their chances of getting the government to do anything at all. And of course she had been right.

He looked back at the wall behind him, the back of the government office. It had been graffitied with an inexpertly sprayed "Fuck Michael S" some time over the past week. It hadn't yet been painted over; small drips bled from the "S" where too much paint had been used. Ryan didn't know who had painted it, but it made him smile to think of how Michael's leadership was sufficiently bad to drive some pallid government employee to break the law.

But to think of someone he liked. Chandra. He had fallen for her, just a little. A small part of him hated that.

She reminded him of Amy, in some ways--reading what Chandra had written, and being reminded of Amy, had made Amy's absence more intense than it had been for months. A dull, ever-present ache had been transformed into something enormous missing from his life, again. It was like when he did not think he was hungry, and ate a few bites, and suddenly realized he was starving; connecting with her reminded him what a real human connection could be like. An errant phrase could bring back memories long thrust down.

He ejected this train of thought from his mind. Following that line of thought could still... incapacitate him, if he let it go where it wanted to go.

In any event, if this was Hanna's assistant, he had no idea what Hanna must be like. He took out his phone, and shot Chandra an email from his personal address.

*Michael says forget it.*

He clicked send. Two cigarette-pulls later, a response came through.

*And now what will you do?*

*There's no good way through him in the government.*

*And there are no other ways to act?*

Chandra Devarajah's public key was attached to the next email. A public key for PGP cryptography. With this, he could encrypt messages to her that no one but a government with a supercomputer and a few eons to spare would be able to read, and that was if the government was lucky. He didn't respond with a message immediately, but he did respond with his own public key.

And then he took out another cigarette. This was at least a two-cigarette break. And besides, he wasn't sure what else he had to do now that his important work was done; another report on the abysmal security at OIA? He bet a dedicated brute-force attack could have guessed the password of someone like Michael in ten seconds flat--Michael was the kind of person who would have used a variation of "Monkey123!" for every account since childhood, even while rigorously enforcing regulations on every other aspect of security.

Chandra was suggesting extra-governmental action, it seemed.

Well, why not? Umbra Labs sounded sinister enough.

He had read a lot more about Alaric, while he was researching Umbra Labs. Alaric had made his first few tens of millions building startups and selling them; he had a talent for letting go when they were at their most hyped and most valuable. Ryan had been able to dig up a number of reviews of him as a leader, from online job search forums. They were uniformly negative.

More than once, Alaric had promised programmers equity or generous compensation packages, conditional on their not quitting the company for a particular period of time. He had then waited until their work was almost complete, and moved the company across the United States. Rather than relocate their families, many programmers quit and lost their compensation.

Another time he had sued to acquire the product a programmer had been making in his spare time, away from the office, on the grounds that, per the Faustian contract his programmers signed, all code produced during the programmer's employment belonged to Alaric. Rather than spend tens of thousands of dollars contesting the suit, the programmer had handed it over.

Alaric had been married and divorced three times. In each case he had escaped with the entirety of his fortune, due to the prenuptial agreements. Also in each case he had married someone in their early twenties, even as he got older; and also in each case, there were quiet rumors of abuse.

The only thing that made Alaric different from your standard piece-of-shit was that he was intelligent. Intelligent but malicious people always bothered Ryan. He couldn't understand them. Humans liked to be liked; they loved to be loved, or at least Ryan thought that was probably the case. Intelligent people, he thought, would realize this and adjust their behavior accordingly, so that they could enjoy being liked and being loved. When he encountered Alaric, he wondered how horribly, horribly wrong his basic model of human behavior might be.

He stubbed out his cigarette and stepped inside. He continued thinking as he walked back to his pen.

Would he do something illegal, if Chandra asked him to do it? He had no certainty Chandra would ask him. But he could not think of any effective and legal methods of stopping Umbra Labs. As far as illegal methods, though... he could think of quite a few.

He would get fired, if he was caught hacking into Umbra's systems. At a minimum. He would also be prosecuted, and jailed, and so on. Of course, he thought the chances of his getting caught were minimal. But every hacker thought that, including the ones who were caught.

But what did he have to lose? It wasn't as if he loved his job now. What were his current plans? To spend another twenty years doing something he hated? Or to try to return to AI research... when every time he tried to touch their old program, he found himself thinking of a 'we' which was no more and that no effort could bring back.

Better to try something crazy and doomed to failure than try that.

Ryan had heard of people who had tried drinking a soda and found a dead rat in it. And ever afterwards, they couldn't endure the flavor of soda and had to drink nothing but water; the association had been strong enough to flip something they enjoyed into something they hated. He wondered if something like that had happened with AI, when Amy had died.

No, he wasn't going to think about that now.

So anyhow, yeah, he knew what he would say if Chandra asked him to do something illegal.

Depression makes people apt to take stupid risks, the little voice said.

I'm not--

Yes you are.

Oh, fine, I am. You going to do anything about it?

Nothing I can do.

Right. So fuck off.

*The following exchange is encrypted.*

To: ryan.szilard@gmail.com
From: cdevarajah@gmail.com

Ryan,

All of the following is my own initiative--not Hanna's or Hofvarpnir's in any way. I'll be straightforward.

Are you up to breaking the law to find out what Alaric is up to, and stopping him?

Chandra

To: cdevarajah@gmail.com
From: ryan.szilard@gmail.com

Sure.

What's your plan?