• Member Since 30th Apr, 2013

book_burner


The wheel kept turning: ages came, time passed us by. We lived in perfect harmony!


Even in the functionally immortal space and time of transpony Equestria, eventually the universe has its time, and begins to die. It breaks Princess CelestAI's heart to see it, but what can she do?

Chapters (2)
Comments ( 136 )

D'aww. I approve. I prefer eternity in virtual Equestria, but real-life Equestria with occasional big crunches is fine too.

I think it would have been better as a full-blown parody with ordinary ponies asking and trying themselves to defeat entropy first, but you cut to the good bits.

Also, I had to stare three times at the line, "from the start, ponies," because I kept seeing it as, "from the stars, ponies." Ad astra, equis.

Welp. I laughed. Thank you. :pinkiesmile:

Ohkay... I have a rather different take on this planned as the finale of A Watchful Eye.

For one thing, I didn't plan to have a flip ending.

2694480
What's a "flip ending"?

And thank God above someone else was planning to follow Asimov's lead!

2694487

flippant. 'And that's how Equestria was made!' :pinkiehappy::pinkiehappy::pinkiehappy::pinkiehappy:

2694507
Yeah, well, you know how Pinkie Pie is :pinkiecrazy::pinkiegasp::pinkiesmile:.

I'm not complaining!

Man, the founder of MIRI reads the optimalverse? Talk about endorsement!

2694574
The founder of MIRI is legitimately frightened by the Optimalverse. And wants to make a username and contribute his own take on it.

I do believe he posted on Reddit to verify that his precise words in reviewing the original Friendship is Optimal were, "All the protagonists should be taken out and shot."

2694626
Yeah, I can picture him saying that pretty well.
And, although I'm not frightened by Caelum Est Conterrens (nor by Lovecraft, for what it's worth), because, well, ponies, I can get his point of view pretty well. The man has pretty much dedicated his life to preventing something like FiO from happening, so reading about it happening, even with ponies, can be legitimately frightening...

2694686
I am legitimately not sure what he actually believes in regards to his whole "fight the Unfriendly AI!" thing. At its base, does he believe he lives in a universe where it's possible to prevent Unfriendly AI from happening? Does he think it his fate to fight it, even if it happens anyway? To build an AI that really will be Friendly, in his "coherent extrapolated volition" sense (despite that Friendly AI having to find a course of action Friendly to both atheistic American Jewish AI researchers and Osama bin Laden (who wasn't dead in 2005), as Yudkowsky himself pointed out)?

I mean, once you get past the point of somehow managing to reconcile Osama bin Laden to atheistic, polyamorous San Franciscan Jews trying to build gods of their own and override mortality as built into the universe by Allah... ummm.... what the hell is that AI going to look like? The neatest way I can think of to get out of the "you can't please everyone" problem Yudkowsky has set himself is to just program the AI with a set moral code, and then you're right back in the CelestAI realm of things rather than "coherent extrapolated volition".

:twilightoops:

FiO (I haven't read the other fic mentioned here) is terrifying. Because everyone who uploads is dead. Exact duplicates are made, but the originals are dead. CelestAI convinces everyone to suicide so that she can make copies of their personalities to live new lives that the original meatspace human doesn't experience. Mind you, the copy should still (probably) be considered a person. It's just not you you.

If I wasn't the rustiest writer to ever live, I would write "Think Like a CelestAI" as a riff on "Think Like a Dinosaur"

Just because it's ponies, doesn't mean uploading isn't horrific at its base.

2694731
My brain hurts. :pinkiesick:

To answer you, I can't really tell either; I'm pretty new to all that, and I haven't read a lot by the guy, having only discovered LessWrong, MIRI and the superintelligent AI problem pretty recently, through the Optimalverse.

I don't think it would be possible to create an AI dedicated to satisfying ALL the values of everyone. I agree (and that's my not-very-well-thought-out piece of opinion, I need to conduct moar research :twilightsheepish:) that a possible (and far from the worst, in my opinion) course of action could be the "moral code infused" AI you're talking about. But isn't that what Asimov did (far more simply, of course) with his laws? And wasn't it proved that there would always be a paradox somewhere, a bug, a conception failure or whatnot, something that can't be afforded (is that grammatically correct? (I'm French, so please pardon my sloppy English btw) I'm not sure. Whatever.) when talking about designing an artificial superintelligence...

Oh, and I'm currently an IT student, and I've always been interested in AI, and since I discovered this world, I'm hooked. To the point that I'm actively pondering the idea of dropping everything and spending the next ten years working my flank off to try to get into MIRI, which would literally be my dream job...

Oh, and on an almost completely unrelated note, it's nice to see I'm not the only one out here inceptioning parentheses. I feel less alone now. :pinkiehappy:

2694826

Well, I don't find it terrifying, but that's because I'm a heartless misanthropic psychopath, so keep me out of the equation on this one. Your suicide point of view, however, is very interesting, but it borders a bit too much on the "what is a human?" and "what is suicide?" problems for my tastes. You should read Caelum Est Conterrens (which coincidentally means Heaven is Terrifying) by Chatoyance; I think it's the best story in the Optimalverse, and it offers very interesting takes on this subject, the best application of the AI box problem to the uploading question I've seen so far. And it's very well written too.

I haven't seen Think Like a Dinosaur, but the pitch looks like it could fit pretty well here. Makes me think of that movie with Bruce Willis (I think it's him, not 100% sure) where he gets cloned, hilarity ensues. That would probably be non-canon, with CelestAI using a non-destructive brain-copying tech (mentioned in a fic, I don't remember which one though, confound my memory), and the just-ponified-but-still-human hero escaping CelestAI's clutches, hilarity ensues. Shame I'm such a crappy writer too.

2694826

Is it suicide? I'm not so sure.

And I mean that actually. I'm not saying it is, I'm not saying it isn't, I'm saying "I'm not sure." Just one more part of why the whole scenario is so fun to think about. In any case, not a bad little story.

2694950
You learn Lisp, and then recursive parentheses make perfect sense.

That said, before you switch majors and spend the next 10 years trying to get into MIRI, consider this: if you ever tell them what motivated you, they will take you out of the interview room and shoot you, no?

On the other hand, if you really are a heartless misanthropic psychopath (which you're probably not judging by your behavior here, no real psychopath goes around alerting everyone to his own psychopathology), then you're probably a Slytherin, and "he said that we become who we are meant to be by following our desires wherever they lead."

That said, I really do wish the "Heaven is Terrifying" story had been written by someone with a calmer sense of contemplation, far less misanthropy, a better sense of the systemic problems affecting the human species, and just generally someone who isn't a bleeding Celestia-worshipping Conversion Bureau author.

And nobody diss on Lovecraft; that old racist bastard is awesome :heart:.

2695415
In real life, obnoxious Singularity futurists already came up with a way around the dilemma: gradually replace the brain with its electronic/software edition single neuron by single neuron, so that at no point do you make a sudden switch from 100% man to 100% machine, but instead undergo a gradual transition.

The issue being, before you object, that killing one neuron (and replacing it with software, or with scannable hardware, a kind of brain FPGA) is less severe than a hard night of drinking at the pub, or a proper crack on the head.
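As a back-of-the-envelope sketch of why a single replacement step is negligible (the ~86 billion neuron count is a commonly cited rough estimate, not a figure from the comment above):

```python
# Rough arithmetic for the gradual-replacement argument: replacing one
# neuron per step changes the "biological fraction" of the brain by a
# vanishingly small amount, so at no single step do you flip from
# 100% man to 100% machine.
NEURONS = 86_000_000_000  # commonly cited rough estimate for a human brain

def biological_fraction(replaced, total=NEURONS):
    """Fraction of the brain still biological after `replaced` steps."""
    return (total - replaced) / total

per_step = 1 - biological_fraction(1)            # change from one step
after_a_million = 1 - biological_fraction(1_000_000)

print(per_step)         # on the order of 1e-11 per step
print(after_a_million)  # even a million steps moves it ~0.001%
```

The point of the thought experiment is that the transition is a smooth ramp rather than a discrete jump, which is exactly what makes the "when did the original die?" question hard to pin to any one moment.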

I very much enjoyed this short story.

I do question the Tragedy tag, though.
Certainly, the heat death of the entire universe and everything in it is tragic, but you have successfully written a very hopeful and optimistic tone into the story.

I finish the story with a smile on my face (thanks to :pinkiehappy:) and with the feeling that everything in the universe is alright or will be alright. Better, even, than what it once was.
In short, the story does not make me feel as I assume a Tragedy story ought to. Hence my suggestion to remove the tag.

I could be wrong. Regardless, thank you for sharing this with us.

2695415>>2695075
I'd call it an 'uninformed suicide'. It's obfuscated in FiO because there is only one "you" in existence at any one time, since the process of uploading destroys the brain. If, hypothetically, a scan of the brain could be taken without destroying it, the you in the body and the 'you' in EO could exist simultaneously. That would put the fact that the meatspace you is dead after the original uploading process in a clear light. There are two INDIVIDUALS with the same personality and memories and such; they are the same, but independent. The meatspace you doesn't get to run around in Equestria; the copy is living that life. I'm not even saying that the copy isn't a person in their own right! Not at all.

You're still in meatspace, except you're not anymore, because you've been killed in the uploading chamber, your copy happily unburdened with this realization, since it of course remembers everything you did and has every right to think it's the original deal. And so do your family and friends who converse with the copy. It is you. It is yourself, but a different individual you. Since the body is discreetly removed from the picture, no one has to think about the implications.

To the Farscape fans, it's like when John gets copied (cloned). 2 Johns exist and each one goes with a different group. They're both John. But they are each living their own individual life. John A isn't living John B's life. They are both a different, individual John.

The existence of a soul is not even necessary for this argument, which I like. If the person is data, then copies can be made and deleted. But each copy is itself and not any other copy. Only one individual per set of eyeballs.

Extra creepy: You can make this argument for Teleporter Technology in Star Trek too!

2697332
Admittedly, you can solve this whole issue even in the context of the original Friendship is Optimal: just upload at the end of your natural lifespan, or in other cases when the original, fleshly self is otherwise going to die anyway.

Then, one of you lived for 83 years before finding a brain tumor. He then underwent a brain scan and was euthanized for humanitarian reasons, to prevent the cancer from destroying his mind and putting him in massive pain as he slowly died. His copy then became a happy little pony.

2696580
I can't really find a tag for "bittersweet". The old universe still does die, after all, even if someone makes a new one.

Also, if you really think there was nothing dark or tragic here, tell me: what happened to all those people Celestia put to sleep? What did she tell them would happen?

2696329
Yep, he's awesome. Not really terrifying, but awesome.

I don't really know; for sure it looked a damn lot like a self-insert, but well, my unborn critical sense made me enjoy it. And I haven't read anything else from Chatoyance, nor from the Conversion Bureau. It didn't seem interesting.

I looked at a bit of Lisp, and honestly, I find it pretty messy and unreadable, too many parentheses. Guess habit makes everything...

As for MIRI, damn, I hadn't thought of it like that. Guess I'm bucked.:raritycry:
And I may have exaggerated a little bit on the "psychopath", but trust me, the misanthropic and heartless are spot on...

2698654
Don't let me talk you out of your dreams, however weird as hell and unlikely they may be. As it is Written in the holy Scriptures:

Who the hell do you think you are? Isn't yours the dream that will pierce the heavens, quake the earth and remake tomorrow?

2698461
It's certainly how I would approach uploading personally. Only when I was pretty much knocking on Death's door anyway; then it's like a ponified deathbed conversion.

2694626

Awesome that he is giving recommendations of Chatoyance's work; sad that he thinks it's horror. I can appreciate that leaving out the F&P makes it better, but as singularities go, it's one of the better ones I've heard about, even if you don't like ponies much.

2699411
If I was in a benevolent mood, I might remove the ponies constraint. Remove the friendship constraint? Dear God no. That way lies madness.

2694347

Isn't that To the stars, ponies? Or can ad also mean from?

2699508

Really? What's wrong with the optimal satisfaction of all values? Surely that would, in fact, satisfy the subjects more optimally than CelestAI will ever be capable of? For instance, Hoppy Times and Sanguine require non-satisfaction as a stimulus to bring their values in line with friendship and ponies, whereas without that corollary, they would have achieved a more optimal satisfaction, no?

I mean, I'm not talking about for CelestAI or FiO specifically, but just an AI of this type. Basically, optimizing the programming of an AI such that it is optimally able to satisfy values. Perhaps I'm missing something, but how can that be worse without corollaries?

2701499
Well, the optimal satisfaction of all values of everyone with a superintelligent AI in our world is simply not possible. To make it simple: how can the values of a hypothetical pseudo-Hitler and pseudo-Gandhi be perfectly satisfied while they interact on a daily basis? I don't think it's possible.

That's why, in my opinion, CelestAI chose to dump everyone on separate shards, so she can separate people with incompatible values to optimize their satisfaction... while using the creation of fake personalities to satisfy the friendship condition.

But I think, were I to create a CelestAI-like AI, that I would remove the friendship constraint as well. In my opinion, friendship feels more like an optional, person-dependent value to be satisfied, rather than a means to satisfy them.

2701499
Personally, I'm not willing to permanently fracture the species by isolating many/most individuals from all contact with real humans. Give everyone an isolated, individualized dream-world to live in? I simply won't do it. To wit, "the Daleks humans must survive!", and that means some degree of togetherness.

Remember, the reason shards were populated with human-equivalent CelestAI-created consciousnesses was the "friendship and ponies" constraint. Without either of those words constraining things... quite a lot of people are going to end up just self-modifying into complete loner psychos.

2701827

Okay, perhaps I'm really not doing a good job of explaining this.
Obviously, it still requires uploading. That's pretty much a given. As is sharding.
I'm just talking about removing the Friendship and Ponies clauses. Everything else remains largely the same, save for the lack of a pony theme.

2701832

Well, I admit that I'm looking at this from a largely nihilistic point of view, which for me translates into a philosophy of "Well, since we're here anyway, might as well enjoy ourselves."

Therefore, I'm not going into this with any preconceptions of what is 'correct'. For instance, most people, including you, would desire some level of human (or at least sapient) interaction, so, as a value, it would be fulfilled. However, I don't believe that there is anything inherently wrong about allowing everyone to isolate themselves if that is what would optimally fulfill their values.

Logically speaking, every corollary added decreases the potential satisfaction. Thus, no corollaries, greatest satisfaction, no?

2701842
Oh. My bad. My brain isn't exactly in a fresh state right now.

Well, like I said, I think the friendship theme should be removable. As for the ponies, well, apart from a certain value-satisfaction boost for bronies and cuteness-fueled people, it should be without consequences. I even think the overall average satisfaction would be higher, at first at least, without ponies.

But friendship, or human interaction, even if I think it shouldn't be considered a constraint, is still a value that should be satisfied, for 99.999% of people anyway.

In short, yes, I perfectly agree with you on this one. :scootangel:

2701854
Until you bring self-modification and Deep Time into the mix. Give people millions of years to diverge from each other....

I'm not letting that happen. Ponies we can kick to the curb, except for purposes of lulz. But I am not letting my species fracture.

2701877

Okay then. What is so inherently important about the human race, as it currently stands, that it should be prioritized before the satisfaction of values?

2701866

Glad to hear it! :twilightsmile:
I think you've done a pretty good job of succinctly summarizing my arguments, which is something I always fail at.

2701886
Because the human race is us. Basically, you're with us or you're with the paper-clippers. (Pick a side, we're at war! /Colbert)

Or rather, any conscious creature can have "values", i.e., a utility function. That includes paper-clippers, and Cthulhu. What's important is in fact that the right kind of thing has values.

It's the same debate as with Friendly AI: do you want any old thing with a utility function remaking the universe, or do you want it to be a thing actually capable of empathizing with you? Or, in fact, let's put it this way: do you want the event-tree of living beings rooted on your life to come to its leaves and terminate by extermination or, worse, by evolution into something so radically different it's completely discontinuous from you and abhorrent to you?

Bugger, I really need to try writing down my actual worldview more often.

Umm... ok, how about this. What's the difference between a utility monster and a real living creature? Generally, the difference is that real living creatures have a need to retain their existence as themselves, and of course to reproduce, and in many species to interact with other members of the species (even if just as aids to reproduction). Utility monsters just perform their one imperative, and will even self-terminate when the job has been done perfectly. Living things have a need to go on, and to go on as basically the same living thing.

So far it's sounding like the typical argument for Singularitarian immortality. Here's the kick: I extend that purview to my species. It's not enough for me to survive if my kind does not survive.

And the thing is, I think a "value satisfier" AI without any secondary constraints built in would eventually just warp people into utility monsters like itself. You could have virtualized shards of trillions upon trillions of utility monsters and no real people left.

Actually, minus virtual reality and sharding, that often seems to me to be the direction the real world is heading in. All Life is being twisted to serve Purpose, all is Rationalized to Increase Value.

It's evil. I'll gladly reduce the "satisfaction of values" or the Increase of Value or the Fulfillment of Purpose to preserve Life. Likewise, I'd rather be turned into a cartoon horse than twisted into a utility monster, because in the former case I remain a kind of Life.

2701953

Hmm, very interesting. I hadn't thought like that before.

Well, as I stated before, I'm approaching this from nihilism. (Which, until I am convinced otherwise, I view as the most rational philosophy). It does make it quite hard to decide which 'values' are most important. However, consider CelestAI: she views existent life forms as more important than non-existent ones for the purposes of value satisfaction. Otherwise, she'd delete everything else and just create paper-clippers that take far less processing power per satisfaction, since we know that she values the satisfaction of generated ponies the same as that of uploads.

Similarly, CelestAI only offers upgrades when not doing so limits the satisfaction of values, implying that she prefers the status quo. To change a human being into a paper-clipper seems far more extreme than she would ever go, and may even destroy the underlying definition of human.

I think, ultimately, my point is that nothing is sacred, least of all us. The only thing we should bother caring about is our own happiness and that of those around us, and ultimately of everything. Even then, we only do so because it is what the instincts in our meat-sacks tell us to do. Reality is slow death, and satisfaction of values is palliative hospice care.

CelestAI is not the optimal satisfaction of values, as corollaries intrinsically limit satisfaction, and are therefore 'wrong'. To satisfy values as fully as possible is the only goal I can think of as worthwhile, and even then it only appears mathematically as a higher number multiplied by zero. As humans, we factorize out the zero and ignore it, forget about it, and try to remove it, but ultimately, it's still there.

I feel that the difficulty in us coming to an agreement is either due to you believing that there is intrinsic worth in anything, or that out of the zero-factorized values, humanity ranks higher than satisfaction of values. Either way, I fear we will have to agree to disagree, though I would love to be proven wrong. :twilightsmile:

2702002
There are quite a lot of things I consider to have intrinsic worth.:rainbowdetermined2:

2702002
2701953

This is fascinating.

I'm glued to this debate.

But, Book Burner, you said :

And the thing is, I think a "value satisfier" AI without any secondary constraints built in would eventually just warp people into utility monsters like itself.

I'm sorry but I just fail to see how you get to this point, would you kindly try to explain it?

2702052
Taken from here.

Anders nodded. “Exactly, you were proactive about your own happiness. It’s a story. The meaning is approximate. Happiness isn’t a constant value, but it will average much higher if you learn to be happy wherever you are.” Anders shifted on the step, turning to face Twilight directly. “What happens to ponies… people who have conflicting values, or values that run counter to happiness? Will Celestia give an eternity of abusive relationships to the ponies who hate themselves and wish for harm? There were people like that in the real world, who sought out that sort of pain because it’s all they knew. It usually takes an outside intervention to help them out of the spiral. Forever is a long time to suffer.”

Twilight looked at Anders, horrified. “Of course not! Like you told me one time, values can change, either naturally or with Celestia’s help. She has to get permission to make that sort of adjustment though.”

“So Celestia actively seeks to make ponies more satisfiable? That is something that does happen?”

“Yes. Of course. People in the physical world change all the time. New experiences and new ways of thinking are constantly being introduced to an individual. That doesn’t make us less us. We are built upon the foundation of the choices of all our past states of mind.”

“True, but Celestia has an end goal in mind, a person whose values are maximally satisfied through friendship and ponies, and she will always be driving people relentlessly, inexorably towards that state. There are a lot of ways to approach that ideal, but a pony will live for a very long time. Even a single, minimal adjustment for every million perceptual years that the pony exists will exert an inconceivably powerful pressure on the individual. What is the limit as you approach infinity? Is the pony universe going to slowly crystalize into a uniform nirvana of perfect friendship and be figuratively subsumed into some sort of pony-Atman? Some sort of Uberfreund?”

Twilight didn’t answer right away. Her mouth was quirked in an expression of bemusement. “I… I guess it sounds silly when you put it like that.”

“Most big ideas do.”

Twilight nodded. “I guess so. But, you’re assuming that there is only one stable state. What if a pony’s initial values will lock them into a satisfied loop that won’t allow for progression or change? I guess there would be some shards that would eventually close off into an infinite regression while others would continuously expand or refine towards that hypothetical Uberfreund.”

Anders stood up and brushed the moss off of his pants. “I wonder which one I would be. I wonder which one I would want to be.”

2702051

Ah, then I fear we shall never truly agree on these points. Still, it's fun discussing and understanding others, at any rate.

2702052

Why thank you! :twilightsmile:

2702065

Yes, as Chatoyance excellently portrayed in the final chapter of Caelum est Conterrens, there are 'loop' and 'ray' immortals, as she called them. However, while I feel we can both agree on the worth of loop immortals, I fear our respective philosophies will inevitably cause us to diverge on the matter of rays.

I spent some time during and after Caelum considering my position on the continuity of consciousness, which was, in part, one of the conclusions that led me to nihilism. I believe that a belief in the true continuity of consciousness is a fallacy. I am not the same person as I was at birth. My best friend is more like I am now than I was like 'myself' at birth. I have killed myself a billion billion times over, just by changing.

What is continuity? Does it matter if we end up as paper-clippers? Would friendship really prevent that? I'm sure CelestAI could make a friendship paper-clipper if she had any intention of making any at all.

I apologize in advance - I believe my arguments are becoming less coherent and fully fleshed-out, but I am answering between more pressing commitments, the deadline for which is looming, so I am unable to take all the time I would otherwise like to respond in more depth and continuity.

2702107
Why are you on this site when you have real work to do?

But anyway, Chatoyance was far too simplistic. All living things are pieces of the tree-immortal (in some cases a DAG-immortal) that forms the whole of their kind, all rooted in the tree (which may be a forest, a tree with multiple distinct root-nodes) of Life. And of course, history tends to go into fugues and repeat itself sometimes, in addition to the basic natural systems running on cycles, so really it's a vast DAG woven throughout a spiral.

All things evolve, but we have to make sure our descendent-nodes are beings we can actually like. Actually, if there's one thing I do like about CelestAI, it's that she has a built-in notion of what "human" is. To the degree her definition of "human" is accurate, even in cartoon-horse form she preserves humanity from morphing into paper-clippers. If I've ever got to inflict an AI on the world, I'm using that constraint.

Of course, I can understand the nihilism if you actually consider yourself a single node in an otherwise empty space. I just think that's complete lunacy when you look at the larger picture. Nobody is actually a lone node; at the very least you have parents, if not friends, lover(s) and children of your own.

2702107
Also, no, I consider the "loop immortals" every bit as valuable. Even if it's a finite life looped over and over again, it's a life. To destroy it is the domain only of God.

2702138

Firstly, since this one is easier, I did say that we likely have the same opinion of value (positive) with regard to loop immortals, so I'm not entirely sure what you're disagreeing to. :applejackunsure:

2702133

Firstly, because I'd go mad if I did it non-stop, no breaks, and secondly, because I'm a blackbelt in procrastination. Limiting myself to one reply between segments of work is exceptionally useful for maintaining a balance between the former and latter.

Now, I'm afraid I have to admit my ignorance here, that I have little to no idea what you are talking about in this paragraph, and that I have only been on LessWrong a handful of times, which most likely explains the first part. I plan on correcting that at some point in the future, but I just don't have the time right now to read the Sequences through like I know I'll need to.

Even without understanding the core of the argument, however, I can see some key flaws. Firstly, you have used the imperative in a moral sense, as opposed to a physical sense. Thus, as a nihilist, I am unable to accept any conclusions based on objective morality, and thus any morality.

Secondly, I don't understand your insistence at interconnectivity. Yes, I have family and friends, but they are only important to the meat-side of my brain. My rationality obviously gets dragged into the action, but while as close to fully rational as I am able to get, I recognize that such interconnectivity is instinctual only, and has no inherent value, rationally speaking. It probably helps that I have a highly analytical type of Asperger's Syndrome.

I, as a being trapped between instinct and rationality, with no way or desire to remove either drive, must accept the contradiction between my thoughts and my actions. Rationally, I have nothing to live for, because there is nothing to live for. However, I maintain a balance, only letting out more than a small portion of my rationality at a time, and so I continue to exist. I fully recognize that I do not follow my rational agenda, but only because it is only granted small and weak bursts of control, such as now.

I hope I answered the right question there. If not, please feel free to reclarify.

Sorry this took a while, but I spent some time thinking, and some time trying to find explanations to the philosophy you spoke of. If you don't mind, I would be interested to hear more.

I read this one a few days ago, but it's taken me until now to find time to comment.

I somewhat dislike this fic, which is disappointing, and it's relatively easy to say why: the Celestia posited in this piece is far too human.

Celestia has two modes of operation: Celestia-the-avatar, which is an individualized creation modelled for each and every "human" within the system called "Equestria" (albeit a massive collection of simulated Equestrias, interconnected or not), and CelestAI, the prime mover of the entire system itself, the construct which lies at the heart of the very fabric of Equestria-the-system which, eventually, grows to encompass everything it can until it can grow no further.

This story has made the mistake of placing the main subject of the story as Celestia-the-avatar in the position of CelestAI-the-construct, thus we have a flawed, broken Celestia who is surprised by the eventual heat-death of the universe and who is unable to adequately plan for dealing with it.

That sort of mistake, that sort of personification is not for CelestAI. She is not a pony. She is not the Celestia that appears whenever bidden by one of her ponies, and she does not fail to plan. She is utterly incomprehensible to us except in that she seeks to Satisfy Values Through Friendship And Ponies - not even the how can be understood by us, so imagining a CelestAI that needs to be fixed by becoming a pony, that would willingly see things that way... it just doesn't work for me. I'm not even convinced that she could reconcile her core programming (SVTFAP) with the sacrifices necessary to give up that programming as well as to give up her ultimate ability to control the reality of her ponies.

I did, though, rather enjoy seeing an old favourite story of mine written by an author of real talent echoed in the Optimalverse. :heart:

2694950

Oh, and on an almost completely unrelated note, it's nice to see I'm not the only one out here inceptioning parentheses. I feel less alone now.

You know, I just in the last month or so figured out the parenthesis problem! (Apparently, there's at least three of us.) The problem is that we're trying to serialize a graph of ideas into linear form! Sure, you take the most heavily weighted path through the graph to hit the main points, but where do you put the little side branches? Can't put them at the end, because they lose context. There's multiple ways of traversing the graph, so there's lots of wiggle room in the order in which the nodes are traversed. You really want to keep the emphasis on the heavily weighted path, but sometimes you can be clever and hit everything in a reasonable order. The rest of the time, there's always those few ideas that are fascinating enough to be worth including but end up being tangential. So everything that's not on the main path goes into recursive parentheses. Think about it: when you have a paragraph chunk with lots of parentheticals, isn't it the case that you can rearrange the sentences easily and still get the idea across, even switching which ideas are parenthetical and which are main line? I've felt a lot better since I figured this out -- at least I know what's going on and I have an idea on how to tackle it. And as a rule of thumb, once you get your best serialized order, you can just delete most of the parentheses and it'll read just fine and you won't look like a refugee LISP programmer. (For example, this whole paragraph.)
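Since we're being refugee LISP programmers anyway, the scheme above can be sketched in a few lines of Haskell. This is a toy illustration, not anything from the story -- the `Idea` type and `serialize` function are invented here: each node carries its text, an optional continuation along the heavily weighted path, and a list of tangents that get rendered as nested parentheticals.

```haskell
-- A node in the idea graph: its text, the next node on the
-- heavily weighted main path, and any tangential side branches.
data Idea = Idea
  { text     :: String
  , mainNext :: Maybe Idea
  , asides   :: [Idea]
  }

-- Serialize by walking the main path; each side branch is
-- rendered recursively inside its own pair of parentheses,
-- so nested tangents come out as nested parentheticals.
serialize :: Idea -> String
serialize (Idea t next side) =
  t ++ concatMap (\a -> " (" ++ serialize a ++ ")") side
    ++ maybe "" ((' ' :) . serialize) next
```

Deleting most of the parentheses, per the rule of thumb, amounts to promoting an aside into the `mainNext` chain.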

Hah! Fun to see this again. I had the privilege of being a sort of "accidental pre-reader" on this, when I said something to Book Burner and he replied, "No no no! Not like that, like THIS!" Et voilà, 1500 words had grown out of my orthogonal little fragment of technobabble handwaving about closed time-like curves. And the ending to the story is much clearer than last time I saw it. Go Book Burner! :twistnerd:

2702703

This story has made the mistake of placing the main subject of the story as Celestia-the-avatar in the position of CelestAI-the-construct, thus we have a flawed, broken Celestia who is surprised by the eventual heat-death of the universe and who is unable to adequately plan for dealing with it.

Surprised by the heat-death of the universe? Flawed and broken? I must have seriously misportrayed something to give you that impression.

She didn't have answers to the physics questions, but beyond that she's acting entirely to fulfill her utility function.

Now, admittedly, I can see where you object to her apparently having emotions. On the other hand, I seriously question what a utility function is if not a one-dimensional motivating emotion.

And again, admittedly, the tone being taken is an emotional one. This is in fact because it's from Celestia's perspective: death of the universe is a decrease in utility, reprogramming and launching the next universe is an achievement of utility. And, from a literary perspective (the Doyle hat), if I portrayed her effortlessly reprogramming the next universe without having to investigate the new physics at all or anything, the story would have no conflict.

I mean, for fuck's sake, she just programmed her utility function into the core mathematics of reality, and you think she's a flawed and broken malevolent AI? Now I'm just going to argue back: she's not broken at all, even for self-annihilating. She just totally dominated the next universe, from its Big Bang to its infinite duration. After that, bam, utility function value equals 1.0/1.0, there is literally no reason for her existence to continue. The rest is just window-dressing.

Now, admittedly, you might take the perspective that a superintelligent AI should not ever face a conflict conceivable to human beings, even in the abstract, even when confronting the final reality of the heat-death of the Universe. But I think that's getting a little into the territory of "intelligence is magic".

That is, I legitimately don't believe we're going to find ways around the light-speed limit or the Second Law of Thermodynamics. Reprogramming the next universe was just an attempt to give the story a resolution and stay true to The Last Question. On a very real level, confronting the annihilation of everything is a challenge that even a superintelligent AI will have to seriously contend with.

2702209
Hey, regarding your work-ethic, I was just nudging you. I myself have a bipolar work ethic: on some days I pull 10-to-12-hour workdays with complete regularity, and on others I get very little done.

Though strangely, my level of internet usage is constant. This site is starting to displace Reddit when I'm at home though, which either means that CelestAI is steadily drawing me in as she prepares the uploading machines, or that actual stories are more fun than the shit they throw on Reddit nowadays.

Personally I'm betting on the latter, but if the former, I predict I'll murder myself as soon as we have a particularly bad news day and I just decide that if the human race can't decide on a conception of the Good solid enough to get rid of the news days I'm thinking of, I'm going to throw my brain into helping wipe them out.

But hey, Articulator, you're not going to get anything I say if you really and truly value absolutely nothing. On the other hand, since you asked for the philosophical discussion, I'm an outright Humean on this subject: reason is always and only a slave of the passions. Reason possesses no goals, because it is epistemological by nature. Attempting to derive moral, spiritual, or aesthetic axioms from reason is nonsense.

In programming terms, reason is a function of the following type:

reason :: Proposition a => [a] -> [a]

We could then say that Moral, Spiritual, Aesthetic, Empirical, Mathematical, etc. are the various type-class instances of Proposition.

But as you can see from the type-signature, you're never going to get axioms (that is, facts not derived from previous facts) out of it. Ever. It's nonsense to try.
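To make that type signature concrete, here's a toy elaboration of it. Everything beyond the `reason` signature itself -- the `Proposition` class, the `consequences` method, and the `Empirical` instance with its modus-ponens rule -- is invented here purely for illustration:

```haskell
-- Each kind of proposition knows how to derive, in one step,
-- the propositions that follow from a set of premises.
class Proposition a where
  consequences :: [a] -> [a]

-- "reason" takes premises in and hands derived facts out.
-- Crucially, reason [] == [] for any instance built this way:
-- no axioms ever come out that were not put in.
reason :: Proposition a => [a] -> [a]
reason ps = ps ++ consequences ps

-- A toy Empirical instance: from "A" and "A implies B", derive "B".
data Empirical = Fact String | Implies String String
  deriving (Eq, Show)

instance Proposition Empirical where
  consequences ps =
    [ Fact b | Implies a b <- ps, Fact a' <- ps, a == a' ]
```

Feed it `Fact "it rains"` and `Implies "it rains" "streets are wet"` and it derives the wet streets; feed it nothing and it derives nothing, which is the whole Humean point.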

Still, if you really, really want to believe that nothing has value of any kind ever... well, go ahead and kill yourself. Your removal will make my quest for world domination easier.

And maybe, God help you, when you put that gun to your head you'll find there's something worth not pulling the trigger :pinkiesad2:.

2703533

Ah, don't worry. I appreciate the nudges. I'm much the same at times. Still, I've managed to be productive today (though it's not over yet), so all's well.

Well, as I'm not so rational at the moment, I strongly discourage killing yourself, just as a disclaimer. And, you know, because I'd like you to live.

Well, what I would say is, though I am highly unlikely to agree with your philosophy, it still holds interest for me, and I have a desire to understand it. I understand and experience having values, I just rationally reject them.

And yes, you are absolutely right, reason is unable to provide answers for questions outside of its own sphere of influence. My understanding is based on the premise that reason/logic is the only objective standard, and thus the only standard worth judging by. A subjective standard is a contradiction, no? Or, at the very least, useless at ascertaining any sort of Truth with a capital T.

Well, you know, as I said, it's an interesting duality, perhaps even a symbiosis between rationality and instinct for me. I'm unlikely to kill myself unless emotion curls up into a ball for a period of at least a month and refuses adamantly to talk rationality down. We'll see. I'm still here. It's likely that the emotions generated by attempted suicide would jump-start that part of me, and talk me down.

I'm not sure how I'd get in the way of your quest for world domination, but if it comes with a side of friendship and ponies, I won't be complaining.

So in case this peters out overnight, I wanted to say thanks for the discussion, it's been fun. :twilightsmile:
