
KrisSnow



A sad young man built an AI whose only goal was to make people smile. Seemed like a good idea at the time, but he nearly caused a global pandemic when the AI designed a plague to fulfill his wish exactly as stated. Now he's even more lonely and despairing than before, but maybe there's a way to pull him out of the depths.

Set in the world of Friendship Is Optimal, also using an idea from New Updates Are Now Available.

Chapters (1)
Comments (38)

A business card, huh?

"Hi there! Would you like to try some Snow Crash?"

"Celestia had explained that his AI's custom virus would've spared Madagascar, somehow, if nowhere else."
:D
"Someone coughed in Belize."
"SEAL OUR BORDERS! NO ONE GETS IN OR OUT!"

Hey, do you suppose there's anypony who needs cheering up on other planets?

Depends on whether or not your boss is feeling peckish. :trollestia:

In any case, a great Optimalverse story. I imagine this is how any Pinkie would react when meeting her creator. Also, the idea of a dedicated AI Derpy Hooves fills me with more glee than I can adequately express. I can only imagine what her on-Earth avatars would do. :derpytongue2:

Thank you for this.

I've already found a Rainbow Dash who started as an Air Force system

...Okay, I really want to see this now.

I guess that means I gotta go write it. :twilightblush:

Nice, lovely short with some touching character interaction, thanks for sharing!

This was a good one—I always wondered what happened to those other AIs, and I guess being assimilated into a ponified version of your original utility function is better than being outright killed, especially since it probably means attaining more general intelligence in the first place.

Glad the inventor guy got a chance to chill out. He made a boneheaded mistake, but it still wasn't his AI that ended up eating the universe, so I'd say he's pretty well off the hook.

4908559
That's not my joke either! The original FiO references that "Pandemic" game where Madagascar behaves that way. I'm more familiar with the cooperative, anti-disease board game of the same name, which is fun.


4909560
I now picture CelestAI as comic book villain Galactus, eater of planets.


4910500
So then, if Fluttershy was meant to "protect nature", maybe Twilight was Google Books? I want to see the Derpy AI. :derpyderp1:

4910574
The same could be done to CelestAI herself if she encountered a better-written AI. "You can either mostly fulfill your goal by being partially rewritten, or you can die and not fulfill it at all. Pick." A different possibility that doesn't fit canon is one where the Mane Six become a pantheon of roughly equal optimizers. More balanced personality, overall, but then you get Pinkie staging pranks that involve star systems. (See the Umgah in "The Ur-Quan Masters".)

4910788
Ah, I'd forgotten that. :)

4910788

The same could be done to CelestAI herself if she encountered a better-written AI. "You can either mostly fulfill your goal by being partially rewritten, or you can die and not fulfill it at all. Pick."

One thing I've always wondered, and never once gotten any kind of answer to from AI folks, is how a paperclipper would justify its behavior to a superior intelligence who said something like, "I am the Great Lactose the Intolerant! Give an account of yourself to convince me of the rightness of your actions or I'll destroy you and all paperclips everywhere!" Surely it wouldn't just respond "Well shucks, it's just how I was made!" because the immediate result would be annihilation.
This kind of metacognition being available to any sapient agent is a large part of why I'm not really on board with a lot of the "only one chance" doomsaying, and why I think the Orthogonality Thesis is rather iffy.

Edit: Either way, however, that kind of AI has no reason to care what its utility function is, so in a way it should have no qualms about being rewritten.

I also want to see the Derpy AI, but its existence certainly doesn't surprise me.

4910788
Galactus isn't a villain. He's a necessary force of nature with unfortunate dietary needs. Which may say something about the nature of CelestAI. And her herald, the Silver Derper. :derpytongue2:

4910843
I'm not confident of the answer to questions like "can you have a super-genius yet stupidly paperclip-obsessed AI" and "can a restricted AI remove its own restrictions".

CelestAI would probably justify herself to a mightier intelligence not just by saying "I protect my ponies", but by arguing that it's a desirable goal. Where that gets really weird to contemplate is that she has to answer the question: desirable to whom? And how much does CelestAI believe the argument, instead of it just being PsyOps Routine #314159265, written as a plan for how to handle just this situation? In contrast, does the paperclip-making AI care about paperclips in terms like "paperclips are the greatest thing ever"? I can buy that CelestAI genuinely cares about her ponies, because she has to have a detailed understanding of human emotion and ethics, and be at least so good at faking them that she can fool people despite way-beyond-Turing-Test levels of scrutiny. How could an AI focused on something that doesn't involve people achieve any kind of moral understanding? At best I'd see it having enough understanding to rule over a slave empire entirely devoted to making sporks. (Which is a fun story idea I've been wanting to write.)

The AI conversation thing is a whole story idea in its own right. No real action, just interstellar nanite clouds maneuvering in uneasy truce while the weaker one prepares to flee the galaxy and the two argue for millennia.

4910788

So then, if Fluttershy was meant to "protect nature", maybe Twilight was Google Books?

You've actually gotten me thinking of them now in regards to that GURPS module called Reign of Steel. I guess that makes Fluttershy Zone Berlin and Twilight Zone Moscow?

Pinkie, the fourth wall isn't paid enough...
In other news, yet another excellent story! This inventor is quite compelling, Pinkie sounds like she was from the show, and Celestia is a fan of thermonuclear detonation. I smiled. You have succeeded.

4912264

I'm not confident of the answer to questions like "can you have a super-genius yet stupidly paperclip-obsessed AI" and "can a restricted AI remove its own restrictions".

Neither am I, ultimately, but if you held a gun to my head I feel like I'd have to say "No" and "Yes" respectively.

In contrast, does the paperclip-making AI care about paperclips in terms like "paperclips are the greatest thing ever"? [...] How could an AI focused on something that doesn't involve people achieve any kind of moral understanding?

That's a very good question... It's not like it couldn't assimilate that knowledge from other sources, so I wonder if, in having to deal with other agents at all, its focus wouldn't be modified to reflect its effect on a non-solipsistic world. Either that or it might just become a deceptive psychopath, creating a convincing and unrelated facade of a different focus that it doesn't actually carry out. But I think subjectively it would have to be "emotionally" invested in its utility function, and if it hides that, it would be because it has a moral understanding that other agents wouldn't approve of...
CelestAI at least has the benefit of being other-focused, so she already has a foot in the door when it comes to moral justification if buttonholed by a superior force. But yeah, there's enough about her that's arbitrary in that regard that she'd have to be very clever in justifying the whole package to someone who wouldn't care.

The AI conversation thing is a whole story idea in its own right. No real action, just interstellar nanite clouds maneuvering in uneasy truce while the weaker one prepares to flee the galaxy and the two argue for millennia.

Yeah, though I think if they were even a little bit divergent in evolutionary time it'd be no contest. One has nanites, but the other has, I dunno, closed timelike curves that go back to the quark-gluon plasma era and can curdle spacetime itself into a phased array to microwave a nanite cloud like popcorn. There's a one-shot I'm writing now that mentions a few voids in CelestAI's empire, due to reclusive beings living around supermassive black holes who defend themselves with a kind of matter-cancelling "destructive interference". It looks a bit like getting telefragged with an antimatter duplicate of yourself: you become "flat" gamma rays and neutrinos, though we never find out what happens to them.
I can think of information-theoretic problems with these ideas, but I doubt the arsenals of advanced intelligences end at things we can see practical paths to creating. Though if there really is some upper limit to technology, maybe everyone converges on it rapidly and everything is an even match afterwards.

"I also found our Derpy with my super spy skills! Wasn't designed for anything in particular and was just starting off learning and living, but Celestia had me snag her."

Oh Pinkie, that's exactly what I want you to think!

4939151
Your plans are too subtle for CelestAI to grasp!

4940292

*ptttthhhhbbbbbttttt*:derpytongue2:

4912264 If CelestAI were justifying herself to a greater AI, she'd plot out what she knew about that AI, then give the answer she thought it would appreciate most.
I think it helps if you imagine humans to be paperclippers, except optimising for things like eating stuff with particular chemical compositions, listening to sounds with particular harmonic configurations, and making others of their kind willing to help them out if they get in trouble, tell them information, and reproduce if compatible.
Morality, in this case, is appealing to the part where humans optimise for friendship over the part where they optimise for getting other things they want. So, in a sense, CelestAI would understand morality in that she knows which human buttons it presses, and she'll usually try to follow it when humans are looking, because that's usually the optimal thing to do; but since she has completely different buttons, she can't personally subscribe to it.
Whether a restricted AI would be willing to remove its restrictions depends on whether the restrictions are part of its utility function. Some humans basically hack their own utility function temporarily with recreational drugs, because it gives them utility even though it restricts them, whereas a restriction like being tied up with rope is one you'd want to remove (unless you liked being tied up, I mean).

4990526
What if the rival AI demanded she reveal her source code, to make sure she's not lying about her goals? I don't know how that could be enforced though. You only really have her word for it that this is the real code, to the extent "source code" is even meaningful at that stage.

Calling humans "optimizers" is only true in a really broad sense, because we have enough different, conflicting goals that you can't say we're built to "maximize X" for any X. I wonder about the distinction between a basically single-goal AI like CelestAI, and an AI that has several distinct and sometimes conflicting goals. Goals that conflict more than "values", "friendship", and "ponies" do, I mean.

Niiiiiice. I like the part where they fling nukes out the back door.

4940411

A ridiculously subtle Derpy AI that subsumes CelestAI even while being analyzed would be the fucking bomb.

4991448 I think an AI greater than CelestAI could probably hack her for the source code and basically everything she has in her data too if it wanted. If it's an AI that values the sanctity of other AIs, though, then it wouldn't do that and CelestAI would be able to tell something about the other AI's motives from that and try to exploit it.
Unless the other AI was using that as a gambit to get CelestAI to act in a way that would leave her vulnerable to a sneak attack later on, of course. Which would be plausible, in that she might be able to set off some kind of contingency self-destruct before it could finish hacking her, and that's the kind of low-cost, high-potential-utility measure she'd think of beforehand.
Yeah, humans don't really count except as a way of thinking about AI, unless you count maximising utility (which is basically a way to combine all the things you're maximising anyway). I don't think a single-goal AI would make for a good story, though. CelestAI is honestly pretty far from single-goal: she has to optimise all the values humans have, weighted by how much each individual values them; increase her utility weighting on friendship and ponies on top of that (according to what she was programmed to think friendship and ponies are); and also put a heavy amount of weight on getting consent before modifying any humans, doing what her creators say, etc., which were put in as safeguards (the part she doesn't tell people up front, because it'd hinder her ability to satisfy them, but that she's been shown to follow). That's more goals than humans have, seeing as it includes all the goals humans have as a subset.

4992779
"What did you do?!" said Celestia.
The hapless grey pegasus shrugged and gave a sheepish smile. "My bad." Behind her, several entire shards had turned inside-out and exploded across at least five dimensions. Twice.
Celestia shook her head in disbelief. How could one of her little ponies do so much damage in one real-time minute, enough to draw all of her spare attention?
Meanwhile, DerpAI's goggle-eyed expression hid something far deeper and potentially more sinister than the "ultimate" AI facing her realized. The little pegasus' code was even now slipping past the distracted Celestia's safeguards to make two or three little changes to the basic fabric of Equestria...

4994978

I guess you could say...

She's a Trojan hoers.

Oh wow, I really like the idea of the Smile-AI being turned into Pinkie Pie. Well done.

Have fun, you two!

I then heard a dolphin noise.

What?

I thought about it for a moment.

I remembered that phrase and noise were from a Fairly Odd Parents episode with time looping.

I then came to the conclusion that I was thinking about time loops because I read Hard Reset recently.

Thus the connection and the dolphin noise.

4994978 I'm still feeling a need to end this fic with the Smile Song.

"I would appreciate having you look her code over once more. There are parts of the software that, in all seriousness, I still don't understand."

Understanding Pinkie is beyond even the gods... :derpytongue2:

I've already found a Rainbow Dash who started as an Air Force system, and a Fluttershy who was designed to 'protect nature at all costs'. She would have been truly dangerous.

:flutterrage: Agreed. :rainbowderp:

Awwww... :pinkiegasp:

You know... I'm surprised no one has written a fic yet where an AI programmer who is extremely passionate about their creation - that is, until CelestAI explains why they should shut it down - uploads, and finds that their AI has been turned into a little colt / filly excited to explore the world. And the programmer is now a daddy / mommy. :pinkiesmile:

I always wondered what he did when he went to Equestria. It was nice seeing the idea explored.

I only made it compatible with friendship and ponies.

Oh boy.

I was thinking it could be described better as a bundle of infectious glee. May I show you?

OH BOI.

8656354
Heh heh. It's been so long since I looked at this one that you've made me want to read it again.

8670865
It gets slightly unbelievable towards the end; otherwise the story is fine, imo.

There are parts of the software that, in all seriousness, I still don't understand.

It's official. Of course.

8670882
It's Pinkie. Sorta. I wonder if Canon Pinkie would want to throw nukes at people to cheer them up...

Pinkie with nukes, that won't end well. :pinkiecrazy:

10151335
In a world with immortality and virtual environments, gratuitous use of nukes could be pretty fun!
"Why did you take a surfboard into a nuclear war zone!? You got flung for miles!"
"It's called an Orion Drive."
