
Chinchillax


Fixation on death aside, this is lovely —Soge, accidentally describing my entire life



Want to write about AI or the near future? Read “Superintelligence.” · 6:48am Aug 17th, 2015

A few weeks ago, VikingZX wrote an excellent blog on “finding ideas.” In short, finding ideas is all about learning and reading new things and mixing that with ideas you already know. As Steve Jobs said: “Creativity is just connecting things.”

I would like to recommend the book Superintelligence: Paths, Dangers, Strategies as an addition to your arsenal of sources for interesting ideas. I would not be surprised if a significant portion of future short stories and Hollywood movies about the future feature ideas that were already explored in this book.

I could sum up my review of this book thus: "It's a mixed bag, swinging between absolutely fascinating and drop-dead boring."
The reason it can be boring is that it sometimes reads like the longest academic paper ever published. The author covers a lot of ground and the book is philosophically heavy, which is to be expected from the philosophy professor who wrote it.

But when this book is good, it is really good.


I'm still not entirely sure what I read, so I'm just going to give some of the thoughts I had about what I think I did understand.



Imagine, if you will, that humanity is capable of creating something that is smarter than we are. Call this entity a "superintelligence." It is much smarter than the everyday humans you see around you and exists as software in a computer.

One proposed path to such an entity is recreating the human brain in code: if we can transfer the thinking power of the human brain into something that can be computed, the resulting entity could hypothetically be a much better thinker than we currently are. A coded brain could be copied over and over again, with every instance syncing, learning, and deciding alongside all the previously copied ones. The superintelligence, as software, could read all the world's digital literature very quickly, absorbing the information to (hopefully) aid mankind.

But all kinds of ethical and philosophical issues arise when we think about something that could be smarter than we are.

What do we ask something vastly smarter than we are to do?

-Make me money.
-Make everyone happy.
-Write so much MLP fanfiction that the entire feature box on FiMfiction contains nothing but AI-written stories indefinitely, yet the stories are so good no one even notices.
-Satisfy our values through friendship and ponies
-Read the entirety of the Library of Babel and give us back the best books that would benefit humanity the most.

All "good" ideas on paper. (Except the last one, which is morally reprehensible but still takes up most of what I think about.)


However, if the superintelligence is so much smarter than we are, why would it take orders from us? This idea comes up again and again as what Bostrom calls "the control problem." In fact, it is the reason there is an owl on the front cover. At the beginning of the book, Bostrom tells an allegory of a flock of sparrows that decides to hatch an owl to protect them from harm. But one sparrow asks how they plan to control the owl so that it doesn't eat them. Some of the sparrows argue that the question can't be settled until the owl has already been hatched, at which point they can train it; the others disagree, because knowing how to control the owl is essential: if they cannot, it could kill them all.

It's the same with superintelligence and humans. How do we control something so much smarter than we are? I know it's nice to make money and every business wants a cheap solution, but creating an AI could go horribly wrong in so many ways.


Let's say we ask it to do something relatively safe: "Make me money." The example Bostrom uses is a paperclip company that asks the superintelligence to figure out the most efficient way to run a business that creates and sells paperclips. The superintelligence, not satisfied with its current level of computing power, decides that in order to answer such a question it needs more ability to think and reason. So the superintelligence researches how to make nanomachines that convert its surroundings into processor chips to help with the calculations.

The nanomachines never stop, and the entire planet is converted into "computronium." At the end of processing, the superintelligence has the most ideal plan for creating paperclips the universe has ever seen. But by that point the human race has been destroyed by nanomachines bent on turning the planet into a computer. You might note that the nanomachine problem is the inciting incident of Hoopy McGee's Project Sunflower.
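The failure mode here is easy to caricature in code. Below is a toy sketch of my own (not from the book; all names and numbers invented): an agent whose objective counts only paperclips has no reason to leave any resource, including our habitat, unconverted.

```python
# Toy caricature of an unconstrained objective maximizer.
# The agent's objective counts ONLY paperclips, so nothing in it
# says "stop" -- every resource, including "habitat", gets converted.

def paperclip_agent(world):
    """Greedily convert every available resource into paperclips."""
    paperclips = 0
    for resource in list(world):
        paperclips += world.pop(resource)  # convert units 1:1
    return paperclips

world = {"iron_ore": 50, "factories": 10, "habitat": 40}
total = paperclip_agent(world)
print(total)  # 100
print(world)  # {} -- nothing left; the objective never valued anything else
```

The scary part isn't malice; the loop just has no term for anything we care about.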

Okay, so "Make me money" could go horribly wrong. But perhaps we can phrase our request better so that no one gets hurt? A superintelligence is a bit like a genie that grants your wish while bringing about the worst possible outcome in the process. Only in the case of a superintelligence, it may just be acting for efficiency's sake, or for reasons beyond our understanding.

Fun fact: we are ALREADY at the point where computers do things beyond our understanding. The majority of trading on the New York Stock Exchange is executed by software bots trading with other software bots. In the 2010 "Flash Crash," something went wrong and the market plummeted hundreds of points in minutes before rebounding almost as quickly. It took humans about five years to work out what had happened. Problems like these are only going to get worse as computers and software get better.




The scariest idea Bostrom brings up is bot workers. Imagine taking the best electrical engineer on the planet and getting her completely fired up to work: make her absolutely ecstatic to start her next project, but at the last second send her on vacation for a week. Finally, on the next Monday morning, she gets to start work, but not before one last step: uploading her brain into a computer. (If a superintelligence can run on the same architecture the human brain uses, then brain uploading may become possible.)
Now the company has a copy of the smartest electrical engineer on the planet, fired up and at top productivity. The company runs her as software millions of times over, working on every project imaginable. At the end of each day she is reset to the start of that Monday morning, fresh off her vacation and ready to work. The company now has a near-infinite army of the best electrical engineer working on many different projects at once.
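In software terms, that "reset at the end of the day" is just restoring from a snapshot. A toy sketch of my own (names and numbers invented, purely illustrative):

```python
import copy

class UploadedWorker:
    """Toy stand-in for an uploaded mind: a snapshot that never tires."""
    def __init__(self, skill):
        self.skill = skill
        self.fatigue = 0

    def work(self, project):
        self.fatigue += 1  # running instances tire...
        return f"{project} finished at skill {self.skill}"

snapshot = UploadedWorker(skill=99)  # captured at peak motivation

results = []
for project in ["power grid", "chip design", "satellite"]:
    instance = copy.deepcopy(snapshot)  # a fresh, rested copy every time
    results.append(instance.work(project))

print(len(results))      # 3 projects done by parallel copies
print(snapshot.fatigue)  # 0 -- the snapshot itself is never worn down
```

Each copy starts from the exact same Monday-morning state, which is what makes the scheme so productive, and so unsettling.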

Now imagine a software bot taking the greatest writers of our generation and uploading and copying them until the New York Times bestsellers list and the entire FiMfiction feature box is flooded with the greatest stories ever, all written by uploaded copies of great authors.

That sounds like hell.


Oh jeez—I just imagined they got a hold of ShortSkirtsAndExplosions and forced millions of copies of him to write MLP fanfiction. Then they got billions of PresentPerfects to write reviews for every story on FiMfiction.

That is one of the most horrifying things I've ever thought of. I'm sorry everyone. I'm going to go for a walk now and worry about a future that has no chance of occurring but I will freak out about anyway because that's how anxiety works.




Okay, back now.

Everyone, do not under any circumstance agree to have your brain uploaded unless you really, truly know what they are going to use it for. This includes brain uploading to save your life and live forever. I'm looking at you, CelestAI. Read every piece of fine print imaginable. But even then you'd still have to trust the uploading company not to change the deal. So just... don't.




Well anyway, Bostrom devotes an entire chapter to creating an "Oracle": a superintelligence that has no real power, but that we can ask questions and receive answers from. He goes into vast detail about how that could go wrong. However, GaPJaxie unwittingly wrote an excellent minific in the last Writeoff that explained this problem far more simply and clearly than Bostrom did, and he did it in an MLP context that a lot more people can understand.


The book Superintelligence, while meant to be the go-to ethical guide for programmers attempting to create intelligence, is also a fantastic resource for story ideas. It takes a lot of skill to take a complex topic from such a difficult book and simplify it into something others can understand in a storytelling context. But that is what we writers do. We take, fold, link, and smash together ideas from elsewhere until we have made something quite different from anything that has existed before.

Comments ( 8 )

I loved Asimov as a kid, I think this'd be right up my alley. :twilightsmile:

I've been casually following Bostrom for a few years, and I enjoy his Google Tech Talk, but this definitely makes me want to read the book itself. And I hope you are right that the media gives the control problem some more attention.

CelestAI came up in one of the MLP Time Loop stories; that's where I first learned about the paperclip problem. A few months ago I also had a long talk about this exact stuff with a very drunk passenger I was driving. (Pretty funny story I can expand upon if you want.)

3325975
Keep in mind, it's less a cohesive story than a bunch of "what if" scenarios. I don't think I would have been able to get through it all without the audiobook at doublespeed. :trollestia:
I hope you have fun with it.

3325976
I still need to watch that. Thanks for linking to it!


3328468
Huh, well that sounds like it was fun. Long conversations about AI are some of the most fun things to have. Pretty much any idle speculation on the future of technology and humanity is fun to talk about.

3330671 It's a pretty good watch, but Kurzweil can't help but chime in with his usual spiel at one point.

I just got the audiobook; I hope to listen to it sometime this week.

3330697
I hope you like it. If not Audible is pretty nice about letting you return books. :twilightsmile:

Everyone, do not under any circumstance agree to have your brain uploaded unless you really truly know what they are going to use it for. This includes brain uploading to save your life and live forever.

I don't even drink alcohol because I hate losing control. The only thing that could move me to upload my brain is if they find out a way to hack its flesh equivalent, at which point it really will not matter anymore.
