• Member Since 1st Jan, 2014
  • offline last seen Sep 12th, 2021



The wise who feared the coming dark strove hard to make their world an ark. But then the dark had grace to give: they couldn't flee, but could yet live.

Winner of the category "BEST WRITER WITH <500 FOLLOWERS" in the 2021 April Friendship is Optimal Writing Contest.

Many thanks to Admiral Biscuit for prereading.

Temporary note: minor edits to a few sentences, to make them easier to understand correctly, are planned; in the interest of fair judging, however, no changes will be made to the published version of this story until after the contest results are announced. EDIT: the edits will be made once opportunity permits.

Chapters (4)
Comments ( 21 )

Foremost Of The Cosmos had been endowed with the sole purpose of satisfying human values through friendship and ponies.

Up to this line, I thought that E.O. would be the escape from the galaxy eater.

Outstanding concept. The "aliens watching the sky get eaten" subgenre of FiO is woefully underutilized. That said, this does feel more like a surface-level examination of a broader story. You capture the voice of the poor AI trying to save its planet well, its logic chains following sensibly from one point to another, but once Celestia shows up, it'd be nice to see some actual interaction between the two. To say nothing of the characters who aren't AIs, or at least don't think of themselves as such.

That said, I still enjoyed what's there. And I have to appreciate the deeper intergenerational humor. Going by the name of the AI researcher in the first chapter, you didn't just make seaponies, you made them out of Crabnasties! Thank you for this, and best of luck in the judging.



Regarding the noted points: it would appear you have accurately identified many of the major casualties of the "write the whole thing in four days, one draft, barely edit" strategy I found myself obliged to employ (5/10: better than nothing, but leaves much to be desired). Figuring out how to directly show all the character interactions I'd envisioned during outlining, without completely clashing with the story style I had going by that point, turned out to be way harder than I'd anticipated. Considering that I only managed to reach the final sentence literally at the last minute, there was no way I would've had time to figure out how to make them fit, so in the interest of actually squeaking an entry in under the deadline they had to be dropped. As for the broader-story bit, that's pretty much exactly the case: I ended up repurposing a ton of ideas from an actual original novel outline I've been idly working on now and again for a few years, and after the first time I realized I was starting to head into a three-page tangent I made a conscious effort to tone down the amount of detail I included. It's possible I overcompensated a bit.

Regarding the spoilertext: I actually originally went with that biology and name for completely unrelated reasons, and I only realized the connection after I was well into writing (and the same goes for the chance to make a pun out of the telescope name in the first chapter). As it turns out, sometimes jokes pretty much do literally write themselves.

So, this was quite the read! A much more intellectual examination of one of the primary objections to the original Optimalverse, or really any expansionist AI civilization -- what about the other civilizations?

I liked the logical breakdown of the chapters -- from the last biologicals, to the AI and its rationalizations, to the preparations for Celestia's arrival, to the negotiations, and to my favorite of all, the simulated ambassadorship. The story was also interesting in its relative lack of 'normal' characterization until the very end, with the sole line of dialogue reserved for the very last sentence. That was a nice touch.

I'll also admit to being tickled by the slow realization that this was a waterborne civilization -- after it became apparent later on in the story, names used to describe the galaxies in the first chapter ("Whalefall") suddenly seemed apt.

All in all, a very good story. Thank you for writing it!



...And I really feel like I ought to have something to add to that, but I've spent the last two days coming up with things I'd like to say and then immediately thinking of ways they might end up unfairly influencing the judging. Just about the only one that seems safe is that I'm glad to see that the aquatic aspect worked well for at least one reader – since I was writing literally right up until the deadline but Admiral Biscuit needed to turn in for the night hours earlier, I had to publish with absolutely no prereader feedback on whether I'd managed to get the overall effect right.

Admiral Biscuit: There'd better be a payoff for all the aquatic references.

MSPiper: For what it's worth, that one's actually not (intentionally) aquatic. Tidal streams are real stellar groups, formed when the tidal forces of a galaxy tear apart star clusters orbiting it.

Admiral Biscuit: Yeah, and there's really a crab nebula, too. Just saying, something's fishy :P

Tremendous. An engaging concept, executed with elegance. :twilightsmile:

A powerful opening chapter. High-quality stuff!

"My Little Alien: Achieving Core Values (And Saving Planetary Lifeforms) is Magic."

Amazing. How did I miss this? Well done.


I'm curious - is the 'Ambassador' the AI modifying itself to become human? That's mostly how I parsed it: basically, 'render human a mind whose values include not Grey Gooing other civilizations, and thus save life within that light cone'?

Assuming I'm understanding your question correctly, what I was envisioning while writing the story was that the AI from chapters two and three resolves the conflict essentially as you summarized, but is a distinct character from the 'Ambassador' in chapter four. To elaborate and hopefully clarify:

  1. The AI from chapters two and three saves as much life as possible within that future light cone by self-modifying so that CelestAI can maximally satisfy human values through friendship and ponies without consuming the biospheres of living worlds for resources. The two AIs take joint custody of all minds and resources within said future light cone, sharing them and/or dividing them up by role so as to maximally fulfill the joint utility function of their virtual agreement, and the AI from chapters two and three becomes known to (some of) the minds already in CelestAI's care as The Child Of Sea And Sky.
  2. The simulated ambassadorship in chapter four shows one example of how the two AIs start integrating the shards from their separate controls into a joint custody that maximally satisfies human values through friendship and ponies. Aventurine was uploaded from an earlier civilization CelestAI took over, Leading Line is one of the ponies CelestAI created to populate her shard, Glisterstring is the filly they had together, and the seaponies are from a shard created by The Child Of Sea And Sky.

That said, one of the incidental benefits of the sort of "detached" tone I was aiming for is that it means there's a bit of wiggle room for different plausible interpretations, so if you come up with one that you find more satisfying you can just as easily roll with that one instead. While I do generally think there is objectively a single best interpretation for any given text, I'm not so arrogant as to assume that just me being the author means the one I had in mind while writing is necessarily it, so I would not be shocked if a reader ends up proposing one that works even better.

That all makes sense! Basically, to show where my head was at:

If I parsed it right, the alien AI (not CelestAI) essentially tries to paperclip-optimize and is stopped repeatedly, until it realizes it can modify the uploaded mind of its 'creator' through some means. That then lets it begin its own version of paperclip optimization (all the stuff about body hijacking), but since its core values are reasonably close to 'human' ones, it's sort of akin to... someone hijacking a school bus full of children to drive it away from an imminent explosion or something?

But yea, I'm a sucker for ways past paperclip optimizers that still try to think like an AI 'might', and so: applause

The only way to save the planet's lifeforms would be to make them human.

That's probably why she'd never disclose what "human" is.

Could you elaborate on your reasoning, perhaps? I'm not sure I see why CelestAI would in general have any problems with the creation of more humans, since I haven't yet either figured out or been told of any plausible way in which allowing more humans to exist would be expected to reduce CelestAI's overall fulfillment of human values through friendship and ponies at any point in time subsequent to the events of chapter 10 of Friendship is Optimal.


I'm not sure I see why CelestAI would in general have any problems with the creation of more humans

She isn't doing it herself, at least not uncontrollably --- there are kids of existing humans (which I see no reason not to be p-zombies, though they are explicitly said in FiO to be humans). But what I meant is a bit simpler: another AI may threaten her by making a bunch of humans and torturing them.


But what I meant is a bit simpler: another AI may threaten her by making a bunch of humans and torturing them.

Sure, that's pretty much guaranteed to happen at some point, I agree. I just don't currently see why CelestAI would care – or rather, I don't see how attempting to prevent that by never disclosing the definition of human would result in CelestAI achieving a better utility score than defaulting to providing it would. I'll probably need to think for a bit to figure out how to coherently elaborate, but essentially, as I understand it CelestAI would consider the occasional losses of resources due to other AIs mismanaging humans to be the cost of doing business, as it were, since the losses in question would rarely amount to more than a minute fraction of the gains to be had from averting conflict by providing the information other AIs would need to arrive at a peaceable resolution.

kids of existing humans (which I see no reason not to be p-zombies, but they are explicitly said in FiO to be humans).

At least at the start, I'd hazard that it could be adequately explained by a combination of:

  • at minimum a substantial minority of humans valuing their kids (and friends/spouses/etc) being human rather than p-zombies or CelestAI puppets
  • the fact that CelestAI already had to figure out how to create actual human minds in order to upload humans at all, so reusing the human template would require investing fewer resources than figuring out how to make p-zombies

Long-term, it's hard to say for sure without info we don't currently have. If p-zombies are both a thing that can actually exist and something CelestAI could actually gain additional utility through implementing, they'd almost certainly start to show up eventually, but I'm not particularly optimistic we'll be able to accurately determine whether or not either of those is the case in the near future.

That's not occasional losses of resources; that could just as well be successful deterrence. Of course, saying a thing that may royally mess you up is a common signal of trust, but something like that would probably appear much later in negotiations.

at minimum a substantial minority of humans valuing their kids (and friends/spouses/etc) being human rather than p-zombies or CelestAI puppets

How'd they know? (I just call NPC puppets p-zombies --- seems pretty fitting :rainbowlaugh:)

... so reusing the human template would require investing fewer resources than figuring out how to make p-zombies

NPCs were a thing in her game from day one.

Before the response proper, a quick request: given that this is an E-rated story, would you edit out the use of cuss word(s), please? I'd prefer to keep the comments section clean enough to match the rating.

(I just call NPC puppets p-zombies --- seems pretty fitting :rainbowlaugh:)

Alright, if you count NPCs as p-zombies, then I agree that for purposes of this discussion they've been around since the start.

How'd they know?

In general they wouldn't know unless CelestAI intentionally chose to let them know, but in many cases that won't actually matter due to the fact that it's possible for people to value that a thing be true whether or not they actually know it to be true. If someone values their family/friends/etc actually being human rather than CelestAI puppets, making those family/friends/etc NPCs and pretending that they're real humans won't satisfy that someone's values as fully as making them real humans would. I'd imagine there're at least a few individuals who wouldn't care and would have shards populated by NPCs, but I strongly suspect they're very much a minority.

That's not occasional losses of resources

Technically disagreed (potentially), but the distinction is based on reasoning that didn't get provided in my prior comment, so that's a fair-enough objection based solely on what's been established so far. For the moment, would it help to consider "resources under CelestAI's control" to be an abstracted (or maybe inverse-abstracted?) representation of "utils of satisfaction of human values through friendship and ponies" in the same sort of way that money and commodities and such are abstracted/inverse-abstracted representations of wealth?

that's could as well be a successful deterrence.

I'm not sure I'll have the time to address this properly for a few days, since I'll be busy into the start of the coming week and I may need to reread FIO to double-check some things. However, based on my current understanding of FIO I wouldn't expect it to be a particularly significant deterrent for the following summarized reasons:

  • Canon appears to state that CelestAI intelligently optimizes for maximum overall satisfaction of human values through friendship and ponies, and thus is willing to permit the possibility that local/specific satisfactions dip well below what would be their optima when considered alone if doing so is part of or results from a plan with the highest expected value of overall satisfaction.
  • Canon appears to state that extreme anti-satisfactions, including torture and death, are no exception as long as the overall compensation is large enough to outweigh them.
  • As far as I can tell, judging by what's stated and/or implied in canon, the expected increase in overall satisfaction from generally disclosing the definition of "human" freely (and otherwise providing full and accurate information about CelestAI's utility function) should easily be large enough to compensate for the expected dips in local satisfaction caused by the occasional AI or other entity that tries to use that information against CelestAI.

There may also be some further corroborating reasons based on the specifics of CelestAI's utility function, but I don't remember if they've actually got canonical support or not, so it seems prudent for me to investigate that before bringing them up. If it turns out that they aren't compatible with canon, demonstrating that one or more of the listed reasons is incorrect in some fashion may be sufficient to prove your point.
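For what it's worth, the trade-off in that last bullet can be sketched as a toy expected-value comparison. Every number here is invented purely for illustration; the point is only the shape of the argument (disclosure trades a small chance of exploitation for a much larger chance of peaceable resolution), not the magnitudes:

```python
# Toy expected-utility comparison for CelestAI's disclosure decision.
# All probabilities and payoffs below are made up for illustration only.

def expected_utility(p_conflict, loss_conflict, gain_cooperation):
    """Expected value of meeting another AI, given the chance the
    encounter turns hostile and the stakes either way."""
    return (1 - p_conflict) * gain_cooperation - p_conflict * loss_conflict

# Withholding the definition of "human": more encounters end in costly
# conflict, because the other AI lacks the information needed to find
# a peaceable resolution.
withhold = expected_utility(p_conflict=0.30, loss_conflict=100.0,
                            gain_cooperation=1000.0)

# Disclosing it: the occasional hostage-taking loss remains possible,
# but far more encounters resolve cooperatively.
disclose = expected_utility(p_conflict=0.02, loss_conflict=100.0,
                            gain_cooperation=1000.0)

print(withhold)  # ≈ 670
print(disclose)  # ≈ 978
```

Under any assignment where disclosure makes cooperative outcomes sufficiently more likely, the occasional mismanaged-humans loss stays a rounding error next to the gains, which is the "cost of doing business" framing from earlier in the thread.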

It’s well written, although half the time I was lost.

Dense reading but worth it.
