In May of 2011, I was writing about “paper-clippers,” or AIs that want to optimize the universe along some metric that humans would think is absolutely worthless. I accidentally typed “paper-clopper.” I thought this typo was hilarious. The idea of an AI that wanted to tile the universe with ponies stuck in my head, and I started work on this story soon after. I abandoned the original title, “The Paper Clopper,” after I learned what “clop” was slang for, which was for the best since it wasn’t a very good title anyway.
Like the majority of humanity, I like it when my numbers go up. If you’ve enjoyed Friendship is Optimal, I’d be very happy if you upvoted and favorited it. If you know people who you think would like this story, please send them a link!
The word “Singularity” is thrown around without much thought, used as a sort of big-tent term for any radical technological progress. The part I find interesting and likely is the notion of a recursive intelligence explosion, where an intelligence uses its smarts to make itself smarter. The motivations of such a superintelligence become the most important thing in our light cone. In fiction, artificial intelligences are generally stated to be smart, but then portrayed as dunces with human motivations who are worse than humans at predicting the consequences of their actions. I think those portrayals, while often entertaining, are a bit silly; a superintelligence would first and foremost be effective at achieving its goals, and I’ve tried to create a character that single-mindedly works towards the goals she was given.
Given how serious the consequences are if we get artificial intelligence wrong (or, as in Friendship is Optimal, only mostly right), I think research into machine ethics and AI safety is vastly underfunded, especially since we don’t even know how to rigorously define phrases like “satisfies values.” The only two organizations I know of that do effective work in this area are the nonprofit Machine Intelligence Research Institute and the University of Oxford’s Future of Humanity Institute. MIRI has a concise summary of what they do, and a much longer argument for why we should be investing in AI safety research now. I have no affiliation with them other than as a donor; I believe that MIRI does the most good per marginal donated charity dollar.
By popular demand, there's now an Optimalverse group on FIMFiction.
This is the part where I thank people. My roommate edited several versions of this story, and I couldn’t have done this without our discussions over dinner. The LessWrong community came out in force to make suggestions, and the story is much better for it. Listic and Blank! on FIMFiction helped immensely, both as prereaders and by making the release go much more smoothly than it would have otherwise. AnaduKune kindly let me use this awesome picture of Celestia as cover art. Finally, though it is cliché to do so, I thank my parents for everything.