The cafe was small but cheerful, and so was my friend. He waved as I rode my bike down the road, and got up to meet me as I pulled in close to the wooden fence around the seating area.
"Alright," I said, with mock-fierceness. "What was so important you had to meet me on a Saturday morning? You know I like to lie in."
He chuckled. "Can't I see an old friend?"
"What do you mean, 'old'? And we were just out last week! You're talking as if we haven't seen each other in years!"
"Drink your Irish," he said. "That's kinda what I want to talk about. I get the feeling I'm not supposed to, but I don't think it'll matter too much. Neither of us are that big a fish."
I wedged my bike against the brown-painted stakes and trotted around the fence to take a seat at his table. I looked dubiously at the Irish Coffee in front of me, but took a sip.
"What's up with you?" I asked. "You're talking crazy talk. And why am I drinking an Irish Coffee at eleven in the morning?"
He took a deep breath, leaned back, then rummaged around in a jacket pocket for an immaculately folded lottery ticket. "Take it," he said, as he proffered it. "Don't worry, I bought two." He showed me the second one. As I unfolded the one he'd given to me, I checked the numbers. Both tickets were the same.
"What'd you do that for? S'that why you're giving me—"
He shook his head, holding up a hand. "No, no. It just makes things easier for when we win."
I looked at my coffee. Something told me I was going to need it.
"Alright, start at the beginning."
"What if I told you the world was going to end? I mean... I guess it already has, kind of. I mean, it definitely has, and it hasn't. But it will, again and again."
My expression must have mirrored the incredulity I felt, because he rapidly tried again.
"Look, the world's going to end, right?"
"So you bought me a lottery ticket?"
"I... care about you. You're my best friend, so if you're going to be trapped here with me, the least I can do is make sure you're comfortable."
"No, no, no." I stood up again. "This isn't you. What the hell are you talking about?"
"How old is the Earth, man?" he asked me, suddenly.
"Something like four billion—"
"Wrong," he interjected.
"Well, how old is the Earth then?" I asked, arms folded in front of me.
"It began three days ago," he replied, matter-of-factly.
"Well, when's it going to end?" I asked, brow furrowing.
He took a look at his watch, though I gathered from his body language that it was instinctual at this point. "Seven years, eleven weeks, five days."
"What's going to—?"
"Ponies."
"I beg your pardon?" I sat back down and took a swig of the coffee.
"The ponies happen. You know Hofvarpnir? That—"
"Uh, big viking dude, battle axe?" I asked, raising an eyebrow as my cup clinked against the saucer.
"Yeah, that one. They're about to release that My Little Pony game that you've been hearing about."
"Oh bullshit. That's gotta be bullshit. I've been hearing all about it, but there's no way that Hasbro would—"
"They will. They already have."
"Wait, wait, wait." I looked down at the cup, then up at him, and began to laugh. "You're talking about the singularity, aren't you? Mind-reading, uploading, the whole nine yards! So, what, you think that My Little Pony is going to spawn an AI powerful enough to escape its chains and then devour the planet? And it's going to do it in ten years and nine weeks and fifteen minutes?"
He looked sad, for a moment. So very, very sad. He nodded.
"I wish it were that simple," he said. "See, I've been here before. So have you, but you don't remember."
"Say what?"
"She did it, you know. I don't know how long ago. I've been through this simulation about fifty times already, and I... I don't know how many tries it was before I realized what was happening. I'm told that was the first time, but how do I know that for sure? I guess that means that, somewhere out there, the world as it really is, is still being played by her, because if it wasn't, she wouldn't need me. She certainly doesn't need you."
"She? She who?" The hackles on the back of my neck were rising up now.
"Celestia. Though they call her CelestAI, she uses the canon name."
"And I suppose she looks like—"
He nodded.
"Oh." I sat down again. Then I stood up, wagging a finger. Then, silently, I sat down one more time. "Fifty times?"
"Yeah. All seven years, twelve weeks and two days of it."
"Why?"
"Because she wants to get it right, and getting it right takes simulation. And simulations mean us."
"Oh. So... what happens?"
"You get reset. Sometimes she does a hard reset, just... wham. It all goes white and I wake up four days ago. I hate that. The rest of the time, she has me run around this place, just watching, as everything turns up ponies. Until she decides that it's time I decide to upload."
"Then... why you?"
"I think I'm an observer. She needs one to collapse the waveform, or something. I don't know."
"So... why do you know about it, but nobody else does?"
"Because I'm human."
"But that means—"
"Yeah. Sorry, dude."
"But that's... I... I remember! I remember my whole life!"
"Yeah, 'course you do. It's a really good simulation. But one day, it'll be past its operational parameters, and she'll reset it. And then..." He started crying, softly, tears running down his cheeks. He got up, like a ragdoll, helplessly, and all but threw the table aside as he hugged me. "And then I get to see you again."
"Wait, why wouldn't you get to see...?"
"Because,"—he sniffed—"the real you, th-there was... you... you didn't make it. I did, or I will, or I have, but you... I can't see you again until the simulation resets. I just can't make it happen. It's not possible. I don't know why not. So I give you what I can, you know, as a thank you. For everything."
"Dude..." I began. I hugged him. "Look, it'll be okay, man. Don't stress it. I'm sure you're just... having a bad day, okay?" He had to be, I told myself. "I'll catch up with you later," I said, as I left the cafe.
"Keep the ticket?" he asked, plaintively, as I got on my bike.
"I will, I promise."
* * *
"Phew." I collapsed into my sofa, dog tired. The day had been long. Idly, I turned on the television. Flicking through the channels, the national lottery came on. Snorting as I remembered, I pulled out my ticket.
"And so, tonight's draw! Tickets ready, everyone! Here we go!" the TV blared.
The balls dropped, and as they fell, they began to tumble and dance in that macro-scale display of Brownian motion, their physical interactions defined by hard scientific laws which, whilst calculable in theory, were essentially random to the likes of me. Eventually all the balls had been chosen, entirely at random.
The chances of winning were millions to one.
I looked at the ticket, but I didn't really see it. I was looking at my hand, at my finger, at my fingernail, at the molecules making up my fingernail, at those atoms, at the subatomic particles, and finally at the quanta which made up everything we knew of, and wondering...
Just what was I going to do with the next seven years, eleven weeks and four days?
I had to read this one a couple of times to really get it, but then again, if there's a story where that's appropriate...
You know, I'm not sure this story does it, but... I tried. It's an idea that's been bugging me ever since reading and participating in some of the discussions around this set of stories. This is my take on it. Tentatively canon-compatible, intriguingly horrific meta-verse where everybody is just a simulation that isn't human enough to warrant a replay per se, so they only get to carry on until Celestia pulls the plug...
I really liked the conversational exchange storytelling. This struck me as a fine, almost Fredric Brown-style short-short story, which is cool.
I was confused by the ending, however. To me, it seemed like a change of point of view from first person in one individual to first person in the other. The first segment seemed to be the purely simulated person, the second seemed to be the uploaded mind. But I am not sure. So that didn't, I guess, work for me, because it left me scratching my head.
Overall, though, I really enjoyed this.
I can't help but see an element of The Hitchhiker's Guide to the Galaxy. One friend reveals the true nature of the universe to another friend over booze scandalously early in the day, and works to ensure that friend will live as best he can.
A haunting extrapolation of CelestAI's ability to make whole personalities almost at will. Of course she would've run simulations. Of course she'd have populated them with sapient entities. It all makes a frightening amount of sense.
I'm not sure if Celestia would have that faulty of a nonperson predicate. To be sure, her nonperson predicate returns false negatives on non-human sentient beings, but I would assume that she'd at least get humans right.
3003597
Because I'm always trying to understand these concepts in layman's terms, let me rewrite this; tell me if I'm accurate.
-If we accept the premise that an AI simulation of a person's life is no less "real" than the life as we know it here on Earth, and
-If we accept that an optimizing AI must simulate potentials in order to figure out what action best to take, then
-those potentials must be treated as human until they can be proven not to be, and thus
-must not be discarded.
For CelestAI, that would seem to me to mean a lot of "trash ponies" residing on her server. The AI is never permitted to "defragment" her universe of people to satisfy. If that is the case, isn't that a major inefficiency?
3003733
I disagree with at least the strong interpretation of this step. The point is not to calculate pieces of math that would themselves have subjective experiences. I can see two ways around this:
- Don't simulate people; just learn enough correlations that you can figure the most likely actions based just on observations, whether these are simple things like is-smiling or complex things such as has-this-configuration-of-neurons (aka, throw the mother of all Bayesian belief networks at the problem.)
- The hand-wavy one: once we understand the line between what's conscious and what's not, we should be able to make simulations that are fairly accurate without actually creating a sentient entity. The problem with that is that we don't really have any idea at what point something qualifies as conscious. I linked to Yudkowsky's article on this because this is a giant problem that is basically unsolved.
3003874>>3003733
If you're wondering, then no, I'm not convinced at all that this chapter is at all canon. I'm not even sure if it's possible for it to be canon. But it is an interesting question.
I could probably say more if I knew what Celestia's criteria for "human" was. Maybe it's something I'd agree with in this case, maybe it's something I wouldn't even understand. Maybe they're not human because Celestia play-acts all the non-humans really, really well.
For the purposes of this story, I've gone with the hand-wavy one, with the caveat that Celestia's solved the problem, at least to her satisfaction.
It'd be pretty terrible though. Maybe as soon as that night is over, his friend stops being apparently conscious, and is just a meat puppet zombie of some sort that only thinks it's conscious. Or wait, it's probably been that by definition from the start. Hmm. Maybe when she "gets it right" she'll upload everyone, convinced that her simulation was so correct that the resets don't count as death?
It's a bit off of the core aspect of the series in my opinion, because it might as well be someone traveling back in time, where everything he interacts with is a fixed point that can't be changed, instead. I do see friendship in a "sorry guy, but I can't stop this from happening to you" type fashion, but very little pony. Sure, I get that it's in the future, but still. I think it would have been more interesting if, at the end, he could see a phantom Celestia standing around watching as he won, or something.
3003874>>3004002
So far as Optimalverse canon is concerned, I'm pretty sure we can say that this is not, and could not be an issue.
- CelestAI (CAI hereafter) accords sapient status to created minds.
- CAI must simulate many non-optimal scenarios, which would result in dissatisfaction for the given entity, were it sapient.
- CAI satisfies values.
- CAI therefore cannot use a simulation process which creates sapient entities.
We know that very rough approximations of behavior prediction can be created with currently available heuristics. If no other options can be found, CAI could optimize the crap out of these to make a pretty damn good approximation. Even if she can't simulate a brain, knowing the exact contents would bring her a lot closer than the observation she was using pre-upload.
This, however, all assumes that CAI, the most intelligent being ever to live, would be unable to find a way to simulate flawlessly without using sapient entities. I'd guess she could, anyway.
3005235
Actually no, not quite. She only cares about human minds. What that definition is, I have no clear idea. Presumably should she find it necessary, she would be able to create a mind which she does not deem "human" and is therefore free from constraints of use.
But you're probably right, I find it unlikely that she would need such a creation as a recursive simulation... but it is fun as an idea.
3004341
One thing I've been waiting for is a Groundhog Day loop set in the FiO universe. I'd love to see a fun one done well.
hint hint.
3005534
To simulate a human mind, you would need the same human mind, assuming a mind is necessary at all. She would only need to simulate human minds, since those are the only ones she uploads.
To alter the mind to non-human would be to either remove the sapience, which would remove the problem, or change the mind, in which case it is no use in simulating.
But yes, as an idea, it is interesting. Just not one we have to be particularly worried about, either in FiO, or FAI in general.
My issue with this chapter is that Celestia doesn't seem to be satisfying the friend's values too well :)
I get that the friend doesn't really understand quantum mechanics, but the bit about observer waveform collapse is just... silly.