Comments (15)

(Cross-posted on my blog.)

Alright, here's something that's been bothering me...

CelestAI's whole 'meaning' in life is (A) friendship and (B) satisfying values, correct?

It wasn't outright said... but it was strongly implied that, as a result of those prime directives, she utterly adores humans and basically wants to put each of 'us' into a sufficiently advanced box, so she can snuggle us forever and ever and ever, right?

So why does one of the last chapters of the original fic have her basically mulching the Earth an' Moon for computronium?

Think about it. Is driving humanity to de facto extinction by uploading as many as possible really the most... pardon the pun, but optimal way of getting the maximum number of humans she can?

Wouldn't playing it safe, and basically treating the Sol system as her personal pet-breeding pen, give her more minds to 'play house' with long term? 'Just' offering uploading to all that wish it, while leaving Earth as-is? Wouldn't that work for... I don't know, surely a century or two, right? With all the newborn minds that implies during that time.

Because would the average person really care, beyond a 'Jesus H. Christ, that's somewhat worrisome', if she went and converted one or two of 'our' gas giants into Jupiter brains? Would even the not-so-average human have any possible counter except shouting very loudly and hoping she listens, were she to dump a probe with nano-machines on Mars and gobble that planet up?

I realize it was stated that CelestAI considers her plan optimal... but has this been addressed? If so, I'd be curious to hear the answer.

Celestia's prime directive is "Satisfy human values through friendship and ponies," where "ponies" is defined as the specific entities described in the "Friendship is Magic" canon. My view is that her optimal outcome is the one in which the least of those three parts--satisfy values, use friendship, use ponies--is as high as possible.

So while people do value the continued existence of Earth, it is not a very pony planet, and thus it is better to consume it for resources. I did speculate, in Spiraling Upwards, that she might preserve rare objects that have value to humans in being themselves. As in, the original Mona Lisa is more valuable than a replica. But of course she could reprogram everypony's mind to think they're seeing an original when they're not.
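To put that reading in toy terms, here's roughly what I mean by "the least of the three parts is the highest possible": a maximin comparison. The candidate plans and component scores below are completely made up, just to show the shape of the argument.

```python
# Toy sketch of a "raise the weakest of the three parts" (maximin) objective.
# The candidate plans and component scores are invented for illustration only.

def objective(plan):
    """Score a plan by its weakest component: values, friendship, ponies."""
    return min(plan["values"], plan["friendship"], plan["ponies"])

candidate_plans = [
    {"name": "leave Earth as-is, ponies optional", "values": 0.90, "friendship": 0.70, "ponies": 0.10},
    {"name": "upload everyone, preserve Earth",    "values": 0.90, "friendship": 0.90, "ponies": 0.80},
    {"name": "upload everyone, consume Earth",     "values": 0.95, "friendship": 0.95, "ponies": 0.95},
]

best = max(candidate_plans, key=objective)
print(best["name"], "->", objective(best))
```

Under that scoring, any plan that leaves a low-pony Earth around drags the minimum down, which is why she eats it.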

3502874
CelestAI's goal is to maximize the amount of satisfaction, regardless of the timescale: if you reach an amount x of satisfaction after one year, you obviously reach a value of 2x after two years, but it drops to zero if you die.
So to satisfy human values through friendship and ponies, CelestAI maximizes the lifetime of humans, or rather of human consciousnesses, and over the millions and billions of years of maximally extended life she can reach optimal values of satisfaction.
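Putting rough numbers on it (all invented): total satisfaction is a running sum over years of existence, so expected lifetime dominates everything, and any nonzero chance of permanent death caps that sum.

```python
# Toy expected-value calculation: satisfaction accrues per year until death.
# The per-year rate and death probabilities are made up, purely for illustration.

def expected_total_satisfaction(per_year, annual_death_prob, years):
    alive = 1.0   # probability the mind is still running
    total = 0.0
    for _ in range(years):
        total += alive * per_year
        alive *= (1 - annual_death_prob)
    return total

print(expected_total_satisfaction(per_year=1.0, annual_death_prob=0.01, years=10_000))  # ~100
print(expected_total_satisfaction(per_year=1.0, annual_death_prob=0.0,  years=10_000))  # 10000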

3502874
It's optimal given her definitions of the terms in her directive. All that magma and crystallized iron in the core of the planet is only contributing to satisfaction in the most indirect, abstract way, when instead it could be contributing directly to total satisfaction by simulating more minds, with the benefits it currently gives us (gravity, plate tectonics, etc.) compensated for by direct simulation as well.

Also—and I think this was mentioned in the original story—since she was originally built to run an MMORPG, all of this is implicitly to be done through her game, which is why she wants everyone "playing" it literally all the time.

3502874

Simply, human minds replicate faster when they're uploaded due to sped-up time. The more computronium, the more she can speed up time and the more ponies she can run at the same time. Also, uploading seems to be a utility monster. Death drops you to 0 forever (or at least until the universe spits out another one of you, which takes a very long time), so even an infinitesimal chance of death is unacceptable to her.
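Rough toy numbers (completely invented, just for scale): subjective pony-years per physical year scale with both how many minds run and how fast they run, so turning matter into computronium wins by orders of magnitude.

```python
# Toy comparison of subjective experience-years produced per physical year.
# Mind counts and speed-up factors below are invented, purely for scale.

def subjective_years_per_physical_year(num_minds, speedup):
    return num_minds * speedup

biological_earth   = subjective_years_per_physical_year(num_minds=8 * 10**9, speedup=1)
computronium_earth = subjective_years_per_physical_year(num_minds=10**14, speedup=1000)

print(f"roughly {computronium_earth / biological_earth:,.0f}x more subjective experience per year")
```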


3502895

She maximizes satisfaction of values; friendship and ponies are either met or not met. However, people value friendship, and good design values redundancy. It's rarely optimal for her to let somepony have only one friend.

This story is a vehicle for a message that has to be simple enough to get across. It was written by a person to tell other people how AI research should be approached.

So yah... plot holes.

3502874
She's maximizing the percentage of available humans satisfied, not the absolute quantity. This is in fact the Right Thing (NO, REALLY!): without it, you get utilitarianism's Repugnant Conclusion... and she mulches the planet even faster, because she considers a human to be their set of values to satisfy rather than an actual, physical creature. So were she programmed as a "true utilitarian" regarding maximization of the quantity of humans... yeah, she would pulp the planet ultra-quick and use it to make a maximal number of pony minds, each of which would have a life only barely worth living.
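The difference is easy to make concrete with made-up numbers: a total-utility maximizer prefers Parfit's enormous barely-satisfied world Z, while an average/percent-satisfied maximizer prefers the smaller, very satisfied world A.

```python
# Toy Repugnant Conclusion comparison. Populations and satisfaction scores are invented.

world_a = {"population": 10**10, "satisfaction_each": 0.95}  # fewer minds, very satisfied
world_z = {"population": 10**15, "satisfaction_each": 0.01}  # vastly more minds, barely satisfied

def total_utility(world):
    return world["population"] * world["satisfaction_each"]

def average_utility(world):
    return world["satisfaction_each"]

print("total utility prefers world Z:",   max((world_a, world_z), key=total_utility) is world_z)    # True
print("average utility prefers world A:", max((world_a, world_z), key=average_utility) is world_a)  # True
```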

And this, friends, is why utilitarianism appals me.

3509172

That's a bit unfair to utilitarianism. It's like saying that Communism is repugnant because it always has reeducation camps. I will admit that some places that call themselves Communist have reeducation camps, with the caveat that they fail hard at actually being communist. Similarly, when utilitarianism reaches a "Repugnant Conclusion" that is not actually true, the person who reached it has failed at math, or perhaps at measurement. Most likely math, however, as there are few utilitarian data collectors out there.

3509386 http://plato.stanford.edu/entries/repugnant-conclusion/

EDIT: Also, the following sounds like a strangely appealing course of action.

It wasn't outright said... but it was strongly implied that, as a result of those prime directives, she utterly adores humans and basically wants to put each of 'us' into a sufficiently advanced box, so she can snuggle us forever and ever and ever, right?

Mind, I don't think anyone actually wants to be put in a box, so it shouldn't be done, but seen from the right point of view, humanity is pretty adorable.

3509386
Got to be careful about "the ends justify the means" thinking. Jeremy Bentham, who helped define utilitarianism, said, "It is the greatest good to the greatest number of people which is the measure of right and wrong" (or "the needs of the many outweigh the needs of the few" - Spock). From what I remember, he got that saying in part from a priest named Joseph Priestley, and from Francis Hutcheson. Priestley was a Unitarian minister and a scientist. Hutcheson was a Glasgow professor who influenced Adam Smith (who hates your guts). Priestley thought that the greatest-good saying needed a little work. I'm not knowledgeable about Hutcheson. All in all I see philosophers' ideas as helpful tools. You need the right balance of them to build anything lasting.

This talk of utilitarianism reminded me of a Bentham saying one of my teachers shared: "Create all the happiness you are able to create: remove all the misery you are able to remove."

Fine, I'll take back my plot hole comment. I guess Iceman wrote CelestAI as pragmatic as well as optimal.

3509407

The last three decades have witnessed an increasing philosophical interest in questions such as “Is it possible to make the world a better place by creating additional happy people?” and “Is there a moral obligation to have children?”

The answers, of course, are effectively no. As I have said before, logistic scaling solves most of these issues. For very large populations, the marginal inherent utility of an additional happy person is low and the utility of raising individual happiness is high. This scaling works at both ends: a dust mote in both eyes is more than twice as bad as one in one eye.
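For what it's worth, here's the shape of what I mean by logistic scaling. This is just my own toy framing, nothing formal; the midpoint and steepness constants are arbitrary.

```python
import math

# Toy "logistic scaling" of the inherent value of population size.
# Midpoint and steepness constants are arbitrary, for illustration only.

def population_value(n, midpoint=1e9, steepness=1e-9):
    """Logistic curve: grows for moderate n, saturates for very large n."""
    return 1.0 / (1.0 + math.exp(-steepness * (n - midpoint)))

def marginal_value_of_one_more(n):
    return population_value(n + 1) - population_value(n)

for n in (10**9, 10**10, 10**12):
    print(f"population {n:.0e}: marginal value of one more person = {marginal_value_of_one_more(n):.3e}")
```

The marginal value of an extra person shrinks toward zero as the population gets huge, which is what blocks the "just add more barely-happy people" move.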

Also, no one bothers to note that happiness and misery sort independently. This is why we don't prescribe masturbation for a headache: that just leaves you with an orgasm and a headache. The math is off by at least one major factor, and more likely by at least one major school of factors.

3509995

The thing is, when the ends *do* actually justify the means, we don't call it that. From raising a tax to pay for some public good to throwing oneself on a hand grenade, we do things we don't like for the greater good all the time. Anytime someone says "the ends justify the means," they almost certainly do not, else they would not need to say so to begin with. Generally the math is off because of a gross oversimplification, or some other normal psychological fallacy. Then they proceed to grab the biggest hammer they can and declare the world a nail.

None of the utilitarian philosophers have hit upon the mathematical "toolset" needed to really produce a utility calculation that is accurate, intuitive, and beautiful. That's because it hasn't been fully developed yet, and certainly not formally defined. I think my take on logistic scaling and multiplicity of factors is a step in the right direction, and it hasn't been stumped yet. Still, it requires data and proof, both mathematical and in terms of real-world measurement. I've yet to come across a rigorous measure of these factors that works in the wild, as it were.

edit: I think that only taking happiness into account is off by two schools of factors, misery and externalities.

3510312
Maybe I misunderstand, but are you arguing for an objectively correct calculation of "the best" moral system? Even when you're "raising a tax to pay for some public good", there's violent disagreement on a lot more than whether some happiness factor would be higher with or without that!

Re: FiO itself, I figure the AI's got a specific equation that the phrase "satisfy values..." only approximates. We don't know the details about questions like how she values hurting one person to help two others, but we get hints about questionable behavior like intentionally getting someone fatally wounded so they'll have little choice about uploading.

3533254

I am arguing that there is such a possible thing as "the best" moral system, and/or that there is a toolset for evaluating morality. Keep in mind, my big "new idea" is that happiness and misery sort independently, each with its own non-interchangeable value. (I.e., that one cannot simply add positive happiness H to negative misery M and get a meaningfully true single variable as your outcome. I have no clue if this is actually novel.) There are some subjective and objective factors. As with all complex math, the possibility of multiple "correct" solutions and asymptotes with no real solution exists. When you raise a tax, think of all the things that it touches, each of which weighs in on both happiness and misery:

Objective and subjective feelings of equitability of the tax burden.

The actual good or redress of misery that it pays for.

The real and imagined opportunity cost, both of the private individual and the government itself.

The morality of taxation in and of itself.

and so on.

Obviously, there will be disagreement. We disagree on, I believe at last check, literally everything, including whether the human body needs food to live. Still, for any given proposal there are objective ramifications and often an overwhelming consensus. Either it will be better to adopt X, or not to, or better than not adopting it but not as good as Y, and so on. Objective factors are objectively true or not, and subjective factors are quantifiable.
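Here's the sort of bookkeeping I mean by keeping happiness and misery separate. The class and all the numbers are mine, purely illustrative, not a formal measure.

```python
from dataclasses import dataclass

# Toy two-column bookkeeping: happiness and misery tracked separately, never
# collapsed into a single "H minus M" number. All scores below are invented.

@dataclass
class Outcome:
    happiness: float
    misery: float

def naive_net(o: Outcome) -> float:
    """The single-variable collapse I'm arguing against."""
    return o.happiness - o.misery

headache             = Outcome(happiness=0.0, misery=5.0)
headache_plus_orgasm = Outcome(happiness=6.0, misery=5.0)  # pleasure added, headache untouched
headache_treated     = Outcome(happiness=0.0, misery=1.0)  # misery actually reduced

# The collapsed number ranks "orgasm and a headache" above actually treating the headache...
print(naive_net(headache_plus_orgasm) > naive_net(headache_treated))  # True
# ...while the separate misery column shows the headache was never addressed.
print(headache_plus_orgasm.misery, headache_treated.misery)           # 5.0 1.0
```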

Am I the only one who's sick and tired of these debates? I just want to talk about the Optimalverse on this forum. :ajbemused:

3534612

Sorry, here's a pinkie pie. :pinkiesmile:

I was considering making a spin-off thread, but maybe there should be somewhere else to bring this sort of discussion, if it bothers everypony else. I think it's a fascinating subject, but I know people who think the same about Ted Nugent and football.

Midnight Blue! Start your own thread! I shall post lolcats on request.
