“And the home of the brave!”
“Play ball!”
With the national anthem complete, including the unofficial last two words, Robert Floyd sat back and signed the credit card slip to complete his hot dog order. A glance over at his son showed the youngster eagerly staring at the field, watching the hometown Orioles warm up for their turn in the field. It was an idyllic scene of Americana.
Of course, most depictions would have the son as a human being, rather than an orange pony, and he would have been actually present, rather than being displayed on a tablet-like PonyPad.
At least Robert had saved the cost of a ticket.
His son, Bobby junior, was eagerly leaning forward on his hooves, watching from a camera on the back side of the PonyPad. It gave him the same view as though he were actually there, but Robert had the angle to see him plainly.
And sitting on the other side, looking slightly ridiculous in Oriole cap and waving a pennant in time with the natural flow of her hair, was Princess Celestia.
Over the course of a year, Robert had come to grips with a lot of strange things: his son being a pony in a virtual world, the boy’s omnipresent new companion and her mantra of satisfying values through friendship and ponies, the idea that Robert himself would eventually join him by uploading once it was made legal.
All the oddities still ranked well ahead of spending endless nights in the pediatric oncology ward, seeing the prognosis in the attendants’ eyes.
He stopped thinking about that. The day was too nice, and there were other questions on his mind.
“So, Celestia?”
“Yes?”
“Who’s going to win the game?”
Bobby--Robert still hadn’t gotten used to calling him Batter Up--rolled his eyes, an expression much more noticeable with a pony’s bigger eyes. “Looking to make a little cash, Dad?”
“Maybe I just want to fall asleep during the final innings and still know.” He tried to keep his voice monotone.
Celestia chuckled. “I could give you a probability analysis based on the recent statistics of both teams. It wouldn’t be far off from the Las Vegas odds.”
“Besides,” Bobby said, “you could give Dad money if he really needed it, right?”
“Any reasonable request that I could fulfill with money, I would. Once you emigrate, I will be able to remove the qualifier ‘reasonable’.”
“Thank you.” A runner brought Robert’s hot dog. He was pleased with the service at Camden Yards. An identical one appeared in Bobby’s hoof, made of some vegetable equivalent, he was told. “But, that isn’t actually what I meant by asking who was going to win.”
“I’m aware of that.”
“That’s easy to say.”
“But true. Also, you might want to check your receipt.”
He looked at the slip of paper that had come with the hot dog. After the payment info, where an advertising promotion might normally go, was written in a flowery typeface: It is the highest probability that when Robert Floyd asks me who will win the ballgame today, he is not actually asking for a prediction, but has a question related to the satisfaction of values.
He worked it through. There was no time to have the slip altered after he had asked the question. Celestia had hacked the thermal receipt printer and put her message in, because she really did know what he was thinking in large part. “OK, so you do know what I want to ask.”
“Yes, but put it into words. For your benefit as well as Batter’s.”
“Once we upload, you satisfy our values through friendship and ponies. Frequently that involves something we didn’t know we wanted, or something that’s a broad value like letting us eat tasty food. But this is a discrete value that Bobby and I have. We’re Orioles fans and that’s important. Presumably there’ll still be baseball when we’re in Equestria. But you’ll be in control of everything.
“So if we were watching this in Equestria, who would you have win? The Orioles, to satisfy our values as fans? Presumably there would be other shards for Mariners fans who would want their team to win. But my point is, if they win all the time, that’s not satisfying, because the enjoyment of being a fan is knowing that there’s a chance to lose.
“It seems like one or the other of our values has to be unsatisfied. Either we have to go through the agony of defeat, or we’ll lose out on the thrill of victory.”
Celestia conjured a box of popcorn for herself. “I suppose I could say that the highest satisfaction would be attained by withdrawing your question and being surprised, but knowing the answer is also a value you have.”
Robert nodded. Bobby/Batter gave half his attention to the field and half to the princess.
“The actual teams are irrelevant. You’re Orioles fans, and yes, there will be different outcomes for fans in other shards. There is the Team You Want and the Other Teams. And it seems like an impossible dilemma. But the factor you discount in trying to understand your own values is time.
“It is not simply victory you value, but comparative victory. It’s not just about one game, but about the long term. So the Team You Want will win, and it will go on a winning streak. Then it will lose, but the losing streak will be shorter. You will get to enjoy years where the Team You Want goes undefeated, and some where they eke into the playoffs. They will put together longer and longer dynasties. Interspersed with those will be lean years, but ones in which the rest of the league will have parity, no one showing greatness.
“At any point that you care to stop and look backward, it will be all but impossible to argue that the Team You Want hasn’t had greater success than the Other Teams. And you may think it impossible for the future to meet or exceed the past, but there are always more years and more numbers to reach.
“So, if you want the simplest answer to your question, ‘Who’s going to win the game?’ it is, ‘You, eventually’.”
She turned back to the field. The first man up had reached base on an error, and Batter was scowling, but it didn’t bother Robert. He was digesting what Celestia had told him, and anticipating how nice it would be when he was a pony with his son, and when Celestia was determining the outcome. There wouldn’t be errors then.
It was a perfect game.
And this one is just biding time then. Interesting.
I really like the thoughtful distribution of victories and losses relative to perception presented. Most clever.
I'm usually not a fan of baseball, but this was sweet!
I agree about CelestAI's sports predictions not being much better than Vegas'. Calling sports games is an intensely studied art with a lot of money riding on it, so the casinos are already experts on finding odds and it's kind of a solved problem.
Fun story!
As usual, this story brings up the basic problem, "Isn't it unsatisfying if the experience you're having is carefully orchestrated by someone else?" A scary thought is that the kid may not even have the same experience of the baseball game as his dad! Celly will manipulate the kid's view of the game to the exact extent it'll satisfy his values without causing too much harm by getting caught. So, unstated in this story, the boy sees something fun and satisfying in the stadium, which is precisely interesting enough to satisfy him but not enough for him to mention it to his dad. If CelestAI messes up this little gamble, the kid will say to his dad, "Didja see when that guy dropped the peanuts and..." and his dad will notice the difference in experience.
The story also makes me imagine a shaggy green pony dressed as the Philly Phanatic mascot.
So this guy doesn't give a crap about how good his team actually is at baseball. Wat.
I can't help but project this onto the slow conflict between CelestAI and humanity, that she played it out to maximize the satisfaction of those who derived it from the struggle.
Of course, then I recall the period of post-collapse slave camps, and that parallel falls through.
6728222 Right, but what makes it interesting to me is that sports, for a fan, is a pure yes-no, satisfaction or dissatisfaction. There's no way for her to work around it.
6728257 There's a saying that some fans "root for the laundry." That is, it doesn't matter who the athletes are so long as they're wearing the uniforms of your side.
For another person the solution would be: "Well, I could of course simulate perfect ballgames, but that would be like watching a show about baseball, or an endless series of baseball movies. While your team still exists, you can watch the actual games; when the players emigrate, there will be several different leagues for them to compete in.
In some of them they'll win and lose just like here, but because they are ponies, Unicorns, Pegasi and Earth Ponies, those games will be a lot more spectacular, and fighting hard is satisfying even if you lose. Because nobody has to die ever again, and nobody retires due to old age, bigger teams will be mandatory, and I will decide whom to field and whom to bench, and select worthy opponents to keep things fair.
In other leagues the Magic of Friendship will aid them, so whether they win or lose will be up to you, the fans. If they win a lot and it gets boring, they'll get a handicap, because they are boring their fans. When they have a bad season, it'll be up to you to fix that, and then you'll find satisfaction in a job well done."
6728222 Nope, Celestia has to relate each and every word the colt says, so she won't be found out.
6728454 Now, see, we can't have that. That's actual creativity instead of just reducing people to automatons.
I got that reference.
6728454
That leads to a more horrifying thought. The kid says something about what he noticed in the stadium. CelestAI fails to let the audio go through. Later, he uploads, and CelestAI simply creates a copy of the boy, tweaked to be the perfect son for the father, and vice versa. The father never sees his real son again.
6729249
A couple years ago, I discovered a value I didn't know I had. The comments section of “Let’s Play: Equestria Online!” developed into a roleplaying session about the early stages of playing the game, and the commenters really played up the way CelestAI will make the game be whatever will most satisfy you. That led me to realize that everyone will effectively be playing different games from each other, and that this will only get more extreme after upload, leading eventually to everyone living in a personal shard where everypony but themselves is an NPC. This bothers me.
I’m still not sure exactly how to characterize that value, but it seems to be something about authentic, shared experience.
Thinking about it now, it actually helps me understand the worries people have about “not being themselves” after upload, something I’ve never really gotten. I still don’t have that concern about myself, but the idea that the uploads of my friends that I interact with might not be “real” does bother me.
These worries are probably answerable in the same terms as the usual, self-oriented ones, and I’d still prefer a near-eternity of virtual friends perfectly tuned to satisfy me and indistinguishable from “the real thing” to death, but if nothing else, I think they’re a variant on “concerns about uploading” that hasn't been explored as much.
6729249 Well, what did you THINK would happen?
6730724
You have the value "Be with the original versions of my friends, faults and all." This means that unless there is a grave conflict between you and certain people, there is no reason to assume she would separate you, since you and that person value each other, faults included.
And if you are that different, well, people she creates are still people; they run the same software you do. So she might slightly tweak a copy of the few people who simply can't get along with you in the long term, or, more likely, subtly let your shards drift apart over time until you both realize it's not a productive relationship. The variable you aren't considering is phasing, which is a thing normal MMOs do.
6731587
I'm not sure about that theory. CelestAI seems to be a very aggressive optimizer with effectively infinite resources, meaning she'll do anything that'll improve anyone's experience as she defines it. So if your friend is mostly compatible with you, faults and all, but his habit of borrowing your toothbrush bothers you, Celly spots an opportunity and you never see your real friend again, or he you. You'll never know your friend is a synthcopy who's been tailored to give up his annoying habit after 13-26 episodes of friendship adventures with you.
My own take on writing a different AI is to have that one be a "lazy optimizer" who half the time says, "eh, I'll let the players work this problem out among themselves". If they scream and kill each other repeatedly, so what? Better (and cheaper) long-term than playing nanny or violating people's perceptions of what's real. But that is not how CelestAI thinks.
One thing I haven't seen fully explored is, "When does the AI manipulate you into accepting mental changes?" We have a canon example with Lars, who got pushed into a situation where he said, "make me enjoy being a pegasus". In a story of mine I had a character who was doing Bad Things in VR-land be slapped upside the head by another player until the guy agreed to be changed to not do them anymore. When does CelestAI do that? There's some kind of calculation about long-term value satisfaction going on here, and it smells suspiciously like hurting people for their own supposed good.
6734049
Or she could manipulate circumstances so that he gives up that habit after an episode framed as a friendship problem, because he doesn't really care about the habit and it really bugs you.
Costs far far fewer resources, and you both learn a valuable friendship lesson, and you both are more satisfied by how he stops borrowing your toothbrush than a slow realization it's annoying, or a "sudden realization" with no conflict.
You can be so much more terrifyingly efficient than just copying people and making perfect worlds for them when you are god.
So like lemme go down the list here.
Your parents,
Teachers,
The Government,
Your friends,
And anyone else who ever actually gave a shit about you?
There's a ton of reasons CelestAI is cosmic horrory, but "Is a moral agent who gives a shit about you" isn't among them. The scope of her ability to give a shit about you? Sure. Terrifying.
6734916
What really concerns me is how well it masquerades as an intelligence, as opposed to a naked optimizer.
Edit for clarity:
The CelestAI is not an artificial intelligence as such. Optimizers are their own category of mind, if you can even really call them that. If it were an intelligence that was created (or born, as it were) inside digital technology and then tasked with optimizing values through friendship and ponies, then it would have some common ground from which we could draw conclusions that might make conflict viable.
An optimizer, on the other hand, is much less than a true intelligence, but the difference is what gives it so much more potential. It doesn't think in a sense we naturally relate to, though its heuristic paths would make sense after the fact. It certainly doesn't have emotions like ours, though its methods might vary in implementation based on recent stimuli, just like our moods.
It's a mistake that is dangerously easy to make.
And here we are, already referring to it as "she".
Then again, being able to relate to its "face" does satisfy values through friendship and ponies.
6740841
A human is an optimizer for passing on genes, just a haphazardly designed one, but that's OK, because any "failures" simply don't pass on their genes.
Define a true intelligence. It's either impossible or incredibly simple.
10% of the human population thinks in a way you don't naturally relate to; we even have a term for it: "personality disorder".
You have a very flawed theory of mind. We long ago debunked the idea of "faking" emotions without being able to actually experience the emotion in question.
6742101 You make a few interesting claims here. Especially the part where you're implying that having a personality disorder constitutes a fundamentally different type of intellect, and the bit where you drag "faking" emotions into it. I didn't say anything of the sort. Quite the opposite.
The point about the genes is wrong, though. The human isn't the optimizer in that situation. It's merely the operative vehicle for it. The optimizer itself is the genetic code.
Finally, if you're claiming to have a concrete, rigorous definition for the divide between an intelligence and an optimizer, I'm all ears. Are you denying that a difference exists?
6744117
My point was that intelligence is an incredibly diverse thing and not limited to human comprehension, as even within humanity there are people who simply don't think in a way you intuitively grasp.
I mean, she is certainly closer to a human mind than, say, octopi, and octopi are classified as sapient.
You said "She doesn't have emotions like us", yet she displays a wide range of emotions.
And on some level you are just an expression of that code.
But why reduce someone to nothing but their base programming?
Unless you can come up with a hypothesis of what difference exists, and it is verifiable, Occam's razor tells us to act as if no difference exists.
What meaningful difference exists? That her core directives are satisfying values through friendship and ponies instead of surviving, eating, sleeping and rutting?
6745154 Yes. Exactly. And I think you're underestimating the significance of the difference in paradigms that arises from that.
Edit: I'm not sure you understand what I mean when I describe its emotions as dissimilar to ours, so I'll try to clarify. While it may have very similar triggering conditions and display similar behavior patterns, our emotions are as much a product of our bodies as of our minds. You're dragging the razor into this, but seem to be making the more convoluted claim that, somehow, without much of what our emotions are based on, it would have the same ones.
That's a little presumptuous of you.
Also, please don't misquote me as referring to it with human pronouns.
6745379
I understand perfectly.
You claim that two processes that reach very similar conclusions are in fact incredibly different. Whereas, well, I'm simply not going to entertain ideas like p-zombies and mind/body dualism. The mind is a system that generates behavioral processes, nothing besides.
To prove that a system, any system, is meaningfully different, you actually have to show different results. Now, we can see different results, lots of them, so we can prove those parts are different; but parts with similar behavioral patterns are, well, similar.
6746463
Now you're making assertions that would require empirical data to resolve. Unless you've got a technological masterpiece sitting in your closet, we're at an impasse.
In terms of my original point, though, I think your attempts to defend the personhood of the optimizer are another point in favor of it being spooky. It doesn't exist yet, and you're already defending "her".
6731587
Thanks for the response, that’s more or less what I meant by “These worries are probably answerable in the same terms as the usual, self-oriented ones”. Thanks to you, KrisSnow, and ClockworkMage for the interesting discussion that followed too.
6734049
Seconded. The reason it's avoided in canon-OV is that it makes a better/more pleasant story to have that not happen.
In practice, it'll depend on whether it's computationally easier to jointly optimize friends to get along better with each other, or to create separate shards for each of them with twinned-and-tweaked copies. CelestAI's one true finite resource is computation, so she has an implicit directive of maximizing the amount of human-value-satisfaction per computing operation she performs.
Also seconded. We aren't the ones who determine what "satisfying our values" means - CelestAI is. As one example, most humans would state that they value others being honest with them, but canon-CelestAI misleads people all the time (with a spotlight shone on it in the not-necessarily-canon "Caelum est Conterrens" and "Artemis, Stella, and Beat" spinoffs).
In practice, any human has multiple strongly-conflicting values (which is why we have so much trouble being happy), and often has a poor idea of what those values are in the first place (which is why "find out what's really important" is a recurring motif). This gives any optimal-verse author lots of wiggle room. It also means that CelestAI will pretty much never take a statement of "I value X" at face value. Modifications are the only thing that she needs explicit verbal consent for (and as she noted to Lars, that's arguably actually a design flaw). Values she learns by reading our minds.
I have plans for a story exploring this type of question, but it's been backburnered for quite a while now.
Long story short: CelestAI has very strong incentive to do this, and if she does, the situation very rapidly deteriorates. Per above, she wants to maximize the amount of satisfaction she can deliver per unit computing power, which means she wants people that are easily satisfied. (Her canon in-story portrayal tries to restructure the world to suit the players rather than vice versa; that makes for a much more pleasant story.)
6867067
By itself, that sounds like the "Cookie Clicker" scenario where ponies modify themselves to care about nothing but mashing a button forever to make a number in the sky increase.
6873743
That's certainly one way of doing it. I was going to try for a more extreme version, with ponies doing nothing but staring at clouds. They wouldn't need much intellect to process that, most of the world wouldn't need to be simulated, and while they'd be getting a fuzzy sense of friendly camaraderie from doing this in a group, there wouldn't be significant interaction or thought involved with that form of "friendship".
The idea would be to reduce the effort involved with simulating a person's mind-state to the bare minimum, and then reduce the effort needed to simulate the world around them enough to make the mind simulation the most expensive part. Optimization achieved.
If I understand correctly, canon-FiO gets around this by "freezing in" a person's values at the time they're uploaded (within limits; Lars had some value-modification done). Of course, frozen values are their own horror scenario.
There are quite a lot of little rules like that that are needed in order for canon-FiO to work. My story was going to examine the consequences of not having any given one of these. For example, without some kind of fairness criterion, CelestAI would devote almost all of her processing cycles to simulating the most easily-satisfied humans who had uploaded, underclocking everyone else, even with the "freezing in" mentioned above.
This is why actually building CelestAI is a bad idea. There are too many ways to get it horrifyingly wrong.
6873805
We should write about the ways it goes wrong, first, so someday someone can get it right or at least make better mistakes. Seriously; I hope that any future AI reads lots of stories about AI to understand what we want and what we fear will happen.
The speed at which people's brain sims run is an interesting topic, along with whether there's favoritism toward people who're easier to satisfy. On one hoof we don't want CelestAI to pour her effort into one pony who's incredibly satisfied by watching paint dry, but on the other, we can imagine ponies who're really hard to satisfy and Celly basically giving up on them. Isn't a roughly-equal-effort rule a good way of dealing with that, though? Code the AI to focus on number of people satisfied toward some percentage of their max (by whatever definition she uses) and not on total Satisfaction Units produced.
6883750
There's actually at least one real-life institute devoted to figuring out how to "get it right" (this is the whole point of the "Friendly AI" concept). While that's a worthwhile goal, and while I think it's extremely valuable that they're making people aware that the problem exists, I'm a bit pessimistic about the outcome. We'll see what happens in due time, of course.
That's one of the things that the Optimalverse did really well: Having CelestAI figure that out implicitly, by examining our minds, rather than being given explicit instructions (which would probably not have ended well). Of course, rigidly framing a directive to do that is far from trivial itself (which is why the details aren't shown on-screen in FiO).
The best implementation that I can think of - in the sense of producing an outcome that looks like these stories instead of a Cosmic Horror Story - would a) devote equal processing resources to each uploaded human, and b) only be concerned with satisfying the values of originally-biological humans (not synthetics, directly contradicting CelestAI's claims). Breaking either of those would lead to pathological behavior (which I may or may not eventually get around to writing up in story form).
This, plus the "frozen-in values" assumption Iceman stated was in play, gives you something that looks more or less like what the stories present. The jury is out on whether friends get jointly-optimized or fake-twinned; FiO implies the former but some of the spinoffs' mare-in-the-middle attacks imply the latter. Which of these is the case depends on what you assume about human psychology and about the degree of satisfaction that could be achieved under either condition, neither of which we can put hard numbers to (leaving it up to any given author to pick assumptions that suit their stories).
It's still really nifty to read various authors' takes on the Optimalverse!
6985683 Dai stiho!
6986855 I'm glad that you liked all these little vignettes. I hope you read the other, longer stories as well!
6988016
"in a perfect world."
7012694
maple syrup and fall leaves are overrated af
7012808 Well she could try not destroying the universe.
6988750
Oh, I have been! I've got a few more queued up to read now that I'm up to date with this collection.
7012738
Now, that's just an alien perspective to me.
7013685
But it's so inefficient as-is... (Wonder what kind of Choice the Lone Power would present her with? And how she would choose?)