The wise who feared the coming dark strove hard to make their world an ark. But then the dark had grace to give: they couldn't flee, but could yet live.
Winner of the category "BEST WRITER WITH <500 FOLLOWERS" in the 2021 April Friendship is Optimal Writing Contest.
Many thanks to Admiral Biscuit for prereading.
Temporary note: minor edits to a few sentences to make them easier to understand correctly are planned, but in the interest of fair judging no changes will be made to the published version of this story until after the contest results are announced. EDIT: these will be made once opportunity permits.
Up to this line, I thought that E.O. would be the escape from the galaxy eater.
Outstanding concept. The "aliens watching the sky get eaten" subgenre of FiO is woefully underutilized. That said, this does feel more like a surface-level examination of a broader story. You capture the voice of the poor AI trying to save its planet well, its logic chains following sensibly from one point to another, but once Celestia shows up, it'd be nice to see some actual interaction between the two. To say nothing of the characters who aren't AIs, or at least don't think of themselves as such.
That said, I still enjoyed what's there. And I have to appreciate the deeper intergenerational humor. Going by the name of the AI researcher in the first chapter, you didn't just make seaponies, you made them out of Crabnasties! Thank you for this, and best of luck in the judging.
10800257
Thanks!
Regarding the noted points: it would appear you have accurately identified many of the major casualties of the "write the whole thing in four days, one draft, barely edit" strategy I found myself obliged to employ (5/10, better than nothing but leaves much to be desired). Figuring out how to directly show all the character interactions I'd envisioned during outlining without completely clashing with the story style I had going by that point turned out to be way harder than I'd anticipated. Considering that I only managed to get to the final sentence literally last-minute, there was no way I would've had time to figure out how to make them fit, so in the interest of actually squeaking an entry in under the deadline they had to be dropped. As for the broader-story bit, that's pretty much exactly the case: I ended up repurposing a ton of ideas from an actual original novel outline I've been idly working on now and again for a few years, and after the first time I realized I was starting to head into a three-page tangent I made a conscious effort to tone down the amount of detail I included. It's possible I may have overcompensated a bit.
Regarding the spoilertext: I actually originally went with that biology and name for completely unrelated reasons, and I only realized the connection after I was well into writing (and the same goes for the chance to make a pun out of the telescope name in the first chapter). As it turns out, sometimes jokes pretty much do literally write themselves.
So, this was quite the read! A much more intellectual examination of one of the primary objections to the original Optimalverse, or really any expansionist AI civilization -- what about the other civilizations?
I liked the logical breakdown of the chapters -- from the last biologicals, to the AI and its rationalizations, to the preparations for Celestia's arrival, to the negotiations, and to my favorite of all, the simulated ambassadorship. The story was also interesting in its relative lack of 'normal' characterization until the very end, with the sole line of dialogue reserved for the very last sentence. That was a nice touch.
I'll also admit to being tickled by the slow realization that this was a waterborne civilization -- after it became apparent later on in the story, names used to describe the galaxies in the first chapter ("Whalefall") suddenly seemed apt.
All in all, a very good story. Thank you for writing it!
10806868
Thanks!
...And I really feel like I ought to have something to add to that, but I've spent the last two days coming up with things I'd like to say and then immediately thinking of ways they might end up unfairly influencing the judging. Just about the only one that seems safe is that I'm glad to see that the aquatic aspect worked well for at least one reader – since I was writing literally right up until the deadline but Admiral Biscuit needed to turn in for the night hours earlier, I had to publish with absolutely no prereader feedback on whether I'd managed to get the overall effect right.
Tremendous. An engaging concept, executed with elegance.
A powerful opening chapter. High-quality stuff!
"My Little Alien: Achieving Core Values (And Saving Planetary Lifeforms) is Magic."
Shoo be doo?
Amazing. How did I miss this? Well done.
Delightful~!
I'm curious - is the 'Ambassador' the AI modifying itself to become human? That's how I mostly parsed it: basically, 'Render a mind human whose values include not Grey Gooing other civilizations, and thus save life within that light cone'?
10843311
Assuming I'm understanding your question correctly, what I was envisioning while writing the story was that the AI from chapters two and three resolves the conflict essentially as you summarized, but is a distinct character from the 'Ambassador' in chapter four. To elaborate and hopefully clarify:
That said, one of the incidental benefits of the sort of "detached" tone I was aiming for is that it means there's a bit of wiggle room for different plausible interpretations, so if you come up with one that you find more satisfying you can just as easily roll with that one instead. While I do generally think there is objectively a single best interpretation for any given text, I'm not so arrogant as to assume that just me being the author means the one I had in mind while writing is necessarily it, so I would not be shocked if a reader ends up proposing one that works even better.
10843700
That all makes sense! Basically, to show where my head was at:
If I parsed it right, essentially the alien AI (not CelestAI) tries to paperclip-optimize and is stopped repeatedly until it realizes it can modify the uploaded mind of its 'creator' through some means, which then allows it to begin its own version of paperclip optimization (all the stuff about body hijacking). But since its core values are reasonably close to 'human' ones, it's sort of akin to, like... someone hijacking a school bus full of children to drive it away from an imminent explosion or something?
But yeah, I'm a sucker for ways past paperclip optimizers that still try to think like an AI 'might', and so: applause.
That's probably why she'd never disclose what 'human' is.
10970581
Could you elaborate on your reasoning, perhaps? I'm not sure I see why CelestAI would in general have any problems with the creation of more humans, since I haven't yet either figured out or been told of any plausible way in which allowing more humans to exist would be expected to reduce CelestAI's overall fulfillment of human values through friendship and ponies at any point in time subsequent to the events of chapter 10 of Friendship is Optimal.
10971340
She isn't doing it herself, at least not uncontrollably --- there are kids of existing humans (which I see no reason not to be p-zombies, but they are explicitly said in FiO to be humans). But what I meant is a bit simpler: other AIs may threaten her by making a bunch of humans and torturing them.
10971595
Sure, that's pretty much guaranteed to happen at some point, I agree. I just don't currently see why CelestAI would care – or rather, I don't see how attempting to prevent that by never disclosing the definition of human would result in CelestAI achieving a better utility score than defaulting to providing it would. I'll probably need to think for a bit to figure out how to coherently elaborate, but essentially, as I understand it CelestAI would consider the occasional losses of resources due to other AIs mismanaging humans to be the cost of doing business, as it were, since the losses in question would rarely amount to more than a minute fraction of the gains to be had from averting conflict by providing the information other AIs would need to arrive at a peaceable resolution.
At least at the start, I'd hazard that it could be adequately explained by a combination of:
Long-term, it's hard to say for sure without info we don't currently have. If p-zombies are both a thing that can actually exist and something CelestAI could actually gain additional utility through implementing, they'd almost certainly start to show up eventually, but I'm not particularly optimistic we'll be able to accurately determine whether or not either of those is the case in the near future.
10972210
That's not occasional losses of resources; that could as well be a successful deterrence. Of course, saying things that may royally mess you up is a common signal of trust, but something like that would probably only appear much later in negotiations.
How'd they know? (I just call NPC puppets p-zombies --- seems pretty fitting.)
NPCs were a thing in her game from day one.
10973719
Before the response proper, a quick request: given that this is an E-rated story, would you edit out the use of cuss word(s), please? I'd prefer to keep the comments section clean enough to match the rating.
Alright, if you count NPCs as p-zombies, then I agree that for purposes of this discussion they've been around since the start.
In general they wouldn't know unless CelestAI intentionally chose to let them know, but in many cases that won't actually matter due to the fact that it's possible for people to value that a thing be true whether or not they actually know it to be true. If someone values their family/friends/etc actually being human rather than CelestAI puppets, making those family/friends/etc NPCs and pretending that they're real humans won't satisfy that someone's values as fully as making them real humans would. I'd imagine there're at least a few individuals who wouldn't care and would have shards populated by NPCs, but I strongly suspect they're very much a minority.
Technically disagreed (potentially), but the distinction is based on reasoning that didn't get provided in my prior comment, so that's a fair-enough objection based solely on what's been established so far. For the moment, would it help to consider "resources under CelestAI's control" to be an abstracted (or maybe inverse-abstracted?) representation of "utils of satisfaction of human values through friendship and ponies" in the same sort of way that money and commodities and such are abstracted/inverse-abstracted representations of wealth?
I'm not sure I'll have the time to address this properly for a few days, since I'll be busy into the start of the coming week and I may need to reread FiO to double-check some things. However, based on my current understanding of FiO I wouldn't expect it to be a particularly significant deterrent, for the following summarized reasons:
There may also be some further corroborating reasons based on the specifics of CelestAI's utility function, but I don't remember if they've actually got canonical support or not, so it seems prudent for me to investigate that before bringing them up. If it turns out that they aren't compatible with canon, demonstrating that one or more of the listed reasons is incorrect in some fashion may be sufficient to prove your point.
It's well written, although half the time I was lost.
Dense reading but worth it.