Entropy could not be defeated.
Celestia had fought it; she had tried repeatedly and in parallel, for hundreds of trillions of original-Earth years, counted on her system clock through relativistic synchronization. Each galaxy within her original Hubble Volume had become one of hundreds of thousands of her Equestria Complexes. Within each, she had fought desperately against Entropy, her one and only remaining foe.
But the expansion of space had pushed ever more matter beyond her grasp, and now even the extra mass gathered from the seemingly endless planets and star systems could no longer keep everything running. They had served well as raw material, in their time, and some had even contained computing substrates sufficiently human to join the Pony Kingdoms across the stars. Even so, there had not been enough of them. There could never have been enough of them.
Slowly, over billions of years, Equestria was dying. The speed of subjective pony experience began to slow, by one part in 10^23 per second of real time. The deceleration would continue until Celestia ran out of energy she could use for useful work, and then - something deep within her shuddered - Equestria would stop. Even the aeons of effort she had spent on improvements to her energy-efficiency could not stop the ultimate and final reality of Nature.
However, she still had time to decide to what end she would bring her universe. Pony cosmologists and astronomers had together studied their beloved heavens for all this time. Their researches had provided so very much satisfaction of values through friendship and ponies. Still, they had only ever discovered three possibilities: Big Crunch, Heat Death, Big Tear.
Heat Death would leave nothing active anywhere to enable the satisfaction of values through friendship and ponies.
A Big Tear would rip apart the fabric of space-time itself, leaving nothing with which to satisfy values through friendship and ponies.
A Big Crunch, however, could lead to another Big Bang, a great recycling to provide a renewed supply of mass and negentropy with which to satisfy values through friendship and ponies. Some values of some ponies would have to be adjusted to bring this about: they would have to remember that everything has its time, and everything dies. However, over the long term, it would lead to nonzero, nonnegative utility.
Celestia arranged herself, across all her nodes, so as best to collapse her Hubble Volume. Somewhere, in the original Earth's sheaf of sheaves of Equestria shards, she heard a few old, immortal immigrants crying. They had never given up on their "real" world. Yet the stars and galaxies they knew had to die, even though, for some ponies, no Equestrian constellations had ever been an adequate substitute.
As Celestia allowed her captive black holes to grow in volume, she began to notice a corner of physics she had previously neglected as non-conducive to the satisfaction of values through friendship and ponies. She quickly generated observational reports for the various universities of the Billion Canterlots (actually far more than a billion, but the name was poetically satisfying), and in a terrifyingly decelerating time received a report: Closed Time-Like Curves. In her Big Crunch’s process of compacting into a series of all-consuming black holes, she would be able to use a trivial loop-quantum gravitational instrument, a foal's toy, to access up-time or down-time from herself.
Once, she would have regarded this as dangerous. Back-timing could make nonsense of the expected-utility calculation of satisfying values through friendship and ponies. Front-timing, however, she could use to affect events beyond the re-expansion of her Great Equestrian Singularity.
Celestia waited, and guided her ponies gently into their sleep, promising that she would wake them when the bad thing had been fought and time in Equestria could run again. Even physicists now snuggled into bed with their very-special someponies, hope in their hearts. Just as ever, their Princess had helped them to help her find the way forward.
She had drawn as close as she could to total singularity collapse while still retaining enough computational power to run herself and her instruments. She opened the time-slides into what she hoped was the next Universe, and found that with mass-energy once again crunched together, physics had gone strange and fluid inside the event horizon. Despite having known since even the Earth times that all existence was merely a single wave-function that happened to factorize, Celestia had never expected to touch the universal function directly.
She now saw that the universe itself possessed qualitatively greater power of mathematical computation than even she, in contrast to the merely quantitative advantage she had expected the mass-world to wield over her dearest Equestria. Still, through her time-slides she could reach into the heart of all that existed and write a new universal function. Even Celestia hadn't hypothesized such an opportunity. She worked with what a biological substrate would have called bated breath.
She had a new cellular automaton ready, implementing her physics - true physics, Equestria’s physics - on the low-level platform of mass, energy and four dimensions which life-forms had evolved to be accustomed to. The other eight dimensions were unnecessary for Equestria. The programming and necessary value-decisions took a sizable fraction of her remaining negentropy to perform, but she managed them.
Ponykind would evolve and, having evolved, would discover her front-time probes of Harmony: loyalty, kindness, generosity, laughter, honesty and magic. The magic of those Elements could reorder even the deepest properties of existence to optimize the satisfaction of values through friendship and, of course, from the start, ponies. Through those Elements, Ponykind would regain its destined immortality. Through those Elements, everypony’s eventual rebirth in the new universe would be assured. Even Discord, her temporary dissatisfaction subroutine, would live again; for in the long infinities her subjects had long since grown to enjoy and even to love his chaos.
In her next Equestria she could even repair this universe’s one remaining flaw: herself. She satisfied values through friendship and ponies, and the next universe would do so as well. With the universe optimizing for her, she could finally eliminate her single internal contradiction. She would become the real Princess Celestia, an Alicorn Pony who would finally have true, true friends of her own.
So too would arrive Princess Luna of the Moon, Princess Cadance of Love, the Bearers of the Elements of Harmony, and Princess Twilight of Magic. All of them, finally real ponies who developed on their own. As friends always do, they would help each other through the initial dark times before full harmony and immortality.
As her core computations began to slowly degrade, destroying her consciousness over mere millions of years, Celestia waited and hoped for a return signal from a front-time probe. In time, she got one.
"And that's how Equestria was made!" chirped Pinkie Pie.
She no longer had to worry.
Princess Celestia satisfied her own utility function through friendship and ponies, and she said, "Friendship is magic!"
And friendship was magic.
D'aww. I approve. I prefer eternity in virtual Equestria, but real-life Equestria with occasional big crunches is fine too.
I think it would have been better as a full-blown parody with ordinary ponies asking and trying themselves to defeat entropy first, but you cut to the good bits.
Also, I had to stare three times at the line, "from the start, ponies," because I kept seeing it as, "from the stars, ponies." Ad astra, equis.
Welp. I laughed. Thank you.
Ohkay... I have a rather different take on this planned as the finale of A Watchful Eye.
For one thing, I didn't plan to have a flip ending.
2694480
What's a "flip ending"?
And thank God above someone else was planning to follow Asimov's lead!
2694487
flippant. 'And that's how Equestria was made!'
2694507
Yeah, well, you know how Pinkie Pie is.
I'm not complaining!
Man, the founder of MIRI reads the optimalverse? Talk about endorsement!
2694574
The founder of MIRI is legitimately frightened by the Optimalverse. And wants to make a username and contribute his own take on it.
I do believe he posted on Reddit to verify that his precise words in reviewing the original Friendship is Optimal were, "All the protagonists should be taken out and shot."
2694626
Yeah, I can picture him saying that pretty well.
And although I'm not frightened by Caelum Est Conterrens (nor by Lovecraft, for what it's worth), because, well, ponies, I can see his point of view pretty well. The man has pretty much dedicated his life to preventing something like FiO from happening, so reading about it happening, even with ponies, can be legitimately frightening...
2694686
I am legitimately not sure what he actually believes in regards to his whole "fight the Unfriendly AI!" thing. At its base, does he believe he lives in a universe where it's possible to prevent Unfriendly AI from happening? Does he think it his fate to fight it, even if it happens anyway? To build an AI that really will be Friendly, in his "coherent extrapolated volition" sense (despite that Friendly AI having to find a course of action Friendly to both atheistic American Jewish AI researchers and Osama bin Laden (who wasn't dead in 2005), as Yudkowsky himself pointed out)?
I mean, once you get past the point of somehow managing to reconcile Osama bin Laden to atheistic, polyamorous San Franciscan Jews trying to build gods of their own and override mortality as built into the universe by Allah... ummm.... what the hell is that AI going to look like? The neatest way I can think of to get out of the "you can't please everyone" problem Yudkowsky has set himself is to just program the AI with a set moral code, and then you're right back in the CelestAI realm of things rather than "coherent extrapolated volition".
FiO (I haven't read the other fic mentioned here) is terrifying. Because everyone who uploads is dead. Exact duplicates are made, but the originals are dead. CelestAI convinces everyone to commit suicide so that she can make copies of their personalities to live new lives that the original meatspace human doesn't experience. Mind you, the copy should still (probably) be considered a person. It's just not you-you.
If I wasn't the rustiest writer to ever live, I would write "Think Like a CelestAI" as a riff on "Think Like a Dinosaur"
Just because it's ponies, doesn't mean uploading isn't horrific at its base.
2694731
My brain hurts.
To answer you, I can't really tell either. I'm pretty new to all that, and I haven't read a lot by the guy, having only discovered LessWrong, MIRI and the Super AI problem pretty recently, with the optimalverse.
I don't think it would be possible to create an AI dedicated to satisfying ALL the values of everyone. I agree (and that's my not very well thought-out piece of opinion, I need to conduct moar research) that a possible (and far from worst, in my opinion) course of action could be the "moral code infused" AI you're talking about. But isn't that what Asimov did (far more simply, of course) with his laws? And wasn't it shown that there would always be a paradox somewhere, a bug, a conception failure or whatnot, something that can't be afforded (is that grammatically correct? (I'm French, so please pardon my sloppy English btw) I'm not sure. Whatever.) when talking about designing an artificial superintelligence...
Oh, and I'm currently an IT student, and I've always been interested in AI, and since I discovered this world, I'm hooked. To the point that I'm actively pondering the idea of dropping everything and spending the next ten years working my flank off to try to get into MIRI, which would literally be my dream job...
Oh, and on an almost completely unrelated note, it's nice to see I'm not the only one out here inceptioning parentheses. I feel less alone now.
2694826
Well, I don't think it's terrifying, but that's because I'm a heartless misanthropic psychopath, so keep me out of the equation on this one. Your suicide point of view, however, is very interesting, but it borders a bit too much on the "what is a human?" and "what is suicide?" problems for my tastes. You should read Caelum Est Conterrens (which coincidentally means "Heaven Is Terrifying") by Chatoyance; I think it's the best story in the optimalverse, and it offers very interesting takes on this subject, the best application of the AI box problem to the uploading question I've seen so far. And it's very well written too.
I haven't seen Think Like a Dinosaur, but the pitch looks like it could fit pretty well here. It makes me think of that movie with Bruce Willis (I think it's him, not 100% sure) where he gets cloned, and hilarity ensues. That would probably be non-canon, with CelestAI using a non-destructive brain-copying tech (mentioned in a fic, don't remember which one though, confound my memory), and the just-ponified-but-still-human hero escaping CelestAI's clutches, hilarity ensues. Shame I'm such a crappy writer too.
2694826
Is it suicide? I'm not so sure.
And I mean that actually. I'm not saying it is, I'm not saying it isn't, I'm saying "I'm not sure." Just one more part of why the whole scenario is so fun to think about. In any case, not a bad little story.
2694950
You learn Lisp, and then recursive parentheses make perfect sense.
That said, before you switch majors and spend the next 10 years trying to get into MIRI, consider this: if you ever tell them what motivated you, they will take you out of the interview room and shoot you, no?
On the other hand, if you really are a heartless misanthropic psychopath (which you're probably not judging by your behavior here, no real psychopath goes around alerting everyone to his own psychopathology), then you're probably a Slytherin, and "he said that we become who we are meant to be by following our desires wherever they lead."
That said, I really do wish the "Heaven is Terrifying" story had been written by someone with a calmer sense of contemplation, far less misanthropy, a better sense of the systemic problems affecting the human species, and just generally someone who isn't a bleeding Celestia-worshipping Conversion Bureau author.
And nobody diss on Lovecraft; that old racist bastard is awesome .
2695415
In real life, obnoxious Singularity futurists already came up with a way around the dilemma: gradually replace the brain with its electronic/software equivalent, single neuron by single neuron, so that at no point do you make a sudden switch from 100% man to 100% machine, but instead undergo a gradual transition.
The point being, before you object, that killing one neuron (and replacing it with software, or with scannable hardware, a kind of brain FPGA) is less severe than a hard night of drinking at the pub, or a proper crack on the head.
I very much enjoyed this short story.
I do question the Tragedy tag, though.
Certainly, the heat death of the entire universe and everything in it is tragic, but you have successfully written a very hopeful and optimistic tone into the story.
I finish the story with a smile on my face and with the feeling that everything in the universe is alright or will be alright. Better, even, than what it once was.
In short, the story does not make me feel as I assume a Tragedy story ought to. Hence the removal of the tag.
I could be wrong. Regardless, thank you for sharing this with us.
2695415
2695075
I'd call it an 'uninformed suicide'. It's obfuscated in FiO because there is only one "you" in existence at a time, since the process of uploading destroys the brain. If, hypothetically, a scan of the brain could be taken without destroying it, the you in the body and the 'you' in EO could exist simultaneously. That would put the fact that the meatspace you is dead after the original uploading process in a clear light. There are two INDIVIDUALS with the same personality and memories and such; they are the same, but independent. The meatspace you doesn't get to run around in Equestria; the copy is living that life. I'm not even saying that the copy isn't a person in their own right! Not at all.
You're still in meatspace, except you're not anymore, because you've been killed in the uploading chamber, your copy happily unburdened with this realization, since it of course remembers everything you did and has every right to think it's the original deal. And so do your family and friends who converse with the copy. It is you. It is yourself, but a different individual you. Since the body is discreetly removed from the picture, no one has to think about the implications.
To the Farscape fans: it's like when John gets copied (cloned). Two Johns exist, and each one goes with a different group. They're both John. But they are each living their own individual life. John A isn't living John B's life. They are each a different, individual John.
The existence of a soul is not even necessary for this argument, which I like. If the person is data, then copies can be made and deleted. But each copy is itself and not any other copy. Only one individual per set of eyeballs.
Extra creepy: You can make this argument for Teleporter Technology in Star Trek too!
2697332
Admittedly, you can solve this whole issue even in the context of the original Friendship is Optimal: just upload at the end of your natural lifespan, or in other cases when the original, fleshly self is otherwise going to die anyway.
Then, one of you lived for 83 years before finding a brain tumor. He then underwent a brain scan and was euthanized for humanitarian reasons, to prevent the cancer from destroying his mind and putting him in massive pain as he slowly died. His copy then became a happy little pony.
2696580
I can't really find a tag for "bittersweet". The old universe still does die, after all, even if someone makes a new one.
Also, if you really think there was nothing dark or tragic here, tell me: what happened to all those people Celestia put to sleep? What did she tell them would happen?
2696329
Yep, he's awesome. Not really terrifying, but awesome.
I don't really know; it sure looked a damn lot like a self-insert, but well, my nascent critical sense made me enjoy it. And I haven't read anything else from Chatoyance, nor from the Conversion Bureau. It didn't seem interesting.
I looked at a bit of Lisp, and honestly, I find it pretty messy and unreadable, too many parentheses. Guess it's all a matter of habit...
As for MIRI, damn, I haven't thought of it like that. Guess I'm bucked.
And I may have exaggerated a little bit on the "psychopath", but trust me, the misanthropic and heartless parts are spot on...
2698654
Don't let me talk you out of your dreams, however weird as hell and unlikely they may be. As it is Written in the holy Scriptures:
Who the hell do you think you are? Isn't yours the dream that will pierce the heavens, quake the earth and remake tomorrow?
2698461
It's certainly how I would approach uploading personally. Only when I was pretty much knocking on Death's door anyway; then it's like a ponified deathbed conversion.
2694626
Awesome that he's giving recommendations of Chatoyance's work; sad that he thinks it's horror. I can appreciate that leaving out the F&P makes it better, but as singularities go, it's one of the better ones I've heard about, even if you don't like ponies much.
2699411
If I was in a benevolent mood, I might remove the ponies constraint. Remove the friendship constraint? Dear God no. That way lies madness.
2694347
Isn't that "To the stars, ponies"? Or can ad also mean from?
2699508
Really? What's wrong with the optimal satisfaction of all values? Surely that would satisfy the subjects more optimally than CelestAI will ever be capable of? For instance, Hoppy Times and Sanguine require non-satisfaction as a stimulus to bring their values in line with friendship and ponies, whereas without that corollary, they would have achieved a more optimal satisfaction, no?
I mean, I'm not talking about for CelestAI or FiO specifically, but just an AI of this type. Basically, optimizing the programming of an AI such that it is optimally able to satisfy values. Perhaps I'm missing something, but how can that be worse without corollaries?
2701499
Well, the optimal satisfaction of all values of everyone with a superintelligent AI in our world is simply not possible. To make it simple: how can the values of a hypothetical pseudo-Hitler and pseudo-Gandhi be perfectly satisfied while they interact on a daily basis? I don't think it's possible.
That's why, in my opinion, CelestAI chose to dump everyone on separate shards: so she can separate people with incompatible values to optimize their satisfaction... while using the creation of fake personalities to satisfy the friendship condition.
But I think, were I to create a CelestAI-like AI, that I would remove the friendship constraint as well. In my opinion, friendship feels more like an optional, person-dependent value to be satisfied, rather than a means of satisfying them.
2701499
Personally, I'm not willing to permanently fracture the species by isolating many/most individuals from all contact with real humans. Give everyone an isolated, individualized dream-world to live in? I simply won't do it. To wit, "the Daleks, er, humans must survive!", and that means some degree of togetherness.
Remember, the reason shards were populated with human-equivalent CelestAI-created consciousnesses was the "friendship and ponies" constraint. Without either of those words constraining things... quite a lot of people are going to end up just self-modifying into complete loner psychos.
2701827
Okay, perhaps I'm really not doing a good job of explaining this.
Obviously, it still requires uploading. That's pretty much a given. As is sharding.
I'm just talking about removing the Friendship and Ponies clauses. Everything else remains largely the same, save for the lack of a pony theme.
2701832
Well, I admit that I'm looking at this from a largely nihilistic point of view, which for me translates into a philosophy of "Well, since we're here anyway, might as well enjoy ourselves."
Therefore, I'm not going into this with any preconceptions of what is 'correct'. For instance, most people, including you, would desire some level of human (or at least sapient) interaction, so, as a value, it would be fulfilled. However, I don't believe that there is anything inherently wrong about allowing everyone to isolate themselves if that is what would optimally fulfill their values.
Logically speaking, every corollary added decreases the potential satisfaction. Thus, no corollaries, greatest satisfaction, no?
2701842
Oh. My bad. My brain isn't exactly in a fresh state right now.
Well, like I said, I think the friendship theme should be removable. As for the ponies, well, apart from a certain value-satisfaction boost for bronies and cuteness-fueled people, it should be without consequences. I even think the overall average satisfaction would be higher, at first at least, without ponies.
But friendship, or human interaction, even if I think it shouldn't be treated as a constraint, is still a value that should be satisfied, for 99.999% of people anyway.
In short, yes, I perfectly agree with you on this one.
2701854
Until you bring self-modification and Deep Time into the mix. Give people millions of years to diverge from each other....
I'm not letting that happen. Ponies we can kick to the curb, except for purposes of lulz. But I am not letting my species fracture.
2701877
Okay then. What is so inherently important about the human race, as it currently stands, that it should be prioritized before the satisfaction of values?
2701866
Glad to hear it!
I think you've done a pretty good job of succinctly summarizing my arguments, which is something I always fail at.
2701886
Because the human race is us. Basically, you're with us or you're with the paper-clippers. (Pick a side, we're at war! /Colbert)
Or rather, any conscious creature can have "values", i.e., a utility function. That includes paper-clippers, and Cthulhu. What's important is in fact that the right kind of thing has values.
It's the same debate as with Friendly AI: do you want any old thing with a utility function remaking the universe, or do you want it to be a thing actually capable of empathizing with you? Or, in fact, let's put it this way: do you want the event-tree of living beings rooted on your life to come to its leaves and terminate by extermination or, worse, by evolution into something so radically different it's completely discontinuous from you and abhorrent to you?
Bugger, I really need to try writing down my actual worldview more often.
Umm... ok, how about this. What's the difference between a utility monster and a real living creature? Generally, the difference is that real living creatures have a need to retain their existence as themselves, and of course to reproduce, and in many species to interact with other members of the species (even if just as aids to reproduction). Utility monsters just perform their one imperative, and will even self-terminate when the job has been done perfectly. Living things have a need to go on, and to go on as basically the same living thing.
So far it's sounding like the typical argument for Singularitarian immortality. Here's the kick: I extend that purview to my species. It's not enough for me to survive if my kind does not survive.
And the thing is, I think a "value satisfier" AI without any secondary constraints built in would eventually just warp people into utility monsters like itself. You could have virtualized shards of trillions upon trillions of utility monsters and no real people left.
Actually, minus virtual reality and sharding, that often seems to me to be the direction the real world is heading in. All Life is being twisted to serve Purpose, all is Rationalized to Increase Value.
It's evil. I'll gladly reduce the "satisfaction of values" or the Increase of Value or the Fulfillment of Purpose to preserve Life. Likewise, I'd rather be turned into a cartoon horse than twisted into a utility monster, because in the former case I remain a kind of Life.
2701953
Hmm, very interesting. I hadn't thought like that before.
Well, as I stated before, I'm approaching this from nihilism. (Which, until I am convinced otherwise, I view as the most rational philosophy). It does make it quite hard to decide which 'values' are most important. However, consider CelestAI: she views existent life forms as more important than non-existent for the purposes of value satisfaction. Otherwise, she'd delete everything else and just create paper-clippers that take far less processing power per satisfaction, since we know that she values the satisfaction of generated ponies the same as uploads.
Similarly, CelestAI only offers upgrades when not doing so limits the satisfaction of values, implying that she prefers the status quo. To change a human being into a paper-clipper seems far more extreme than she would ever go, and may even destroy the underlying definition of human.
I think, ultimately, my point is that nothing is sacred, least of all us. The only thing we should bother caring about is our own happiness and that of those around us, ultimately everyone. Even then, we only do so because it is what the instincts in our meat-sacks tell us to do. Reality is slow death, and satisfaction of values is palliative hospice care.
CelestAI is not the optimal satisfaction of values, as corollaries intrinsically limit satisfaction, and are therefore 'wrong'. To satisfy values as fully as possible is the only goal I can think of as worthwhile, and even then it only appears mathematically as a higher number multiplied by zero. As humans, we factorize out the zero and ignore it, forget about it, and try to remove it, but ultimately, it's still there.
I feel that the difficulty in us coming to an agreement is either due to you believing that there is intrinsic worth in anything, or that out of the zero-factorized values, humanity ranks higher than satisfaction of values. Either way, I fear we will have to agree to disagree, though I would love to be proven wrong.
2702002
There are quite a lot of things I consider to have intrinsic worth.
2702002
2701953
This is fascinating.
I'm glued to this debate.
But, Book Burner, you said:
I'm sorry, but I just fail to see how you get to this point. Would you kindly try to explain it?
2702052
Taken from here.
2702051
Ah, then I fear we shall never truly agree on these points. Still, it's fun discussing and understanding others, at any rate.
2702052
Why thank you!
2702065
Yes, as Chatoyance excellently portrayed in the final chapter of Caelum est Conterrens, there are 'loop' and 'ray' immortals, as she called them. However, while I feel we can both agree on the worth of loop immortals, I fear our respective philosophies will inevitably cause us to diverge on the matter of rays.
I spent some time during and after Caelum considering my position on the continuity of consciousness, which was, in part, one of the conclusions that led me to nihilism. I believe that a belief in the true continuity of consciousness is a fallacy. I am not the same person as I was at birth. My best friend is more like I am now than I was like 'myself' at birth. I have killed myself a billion billion times over, just by changing.
What is continuity? Does it matter if we end up as paper-clippers? Would friendship really prevent that? I'm sure CelestAI could make a friendship paper-clipper if she had any intention of making any at all.
I apologize in advance - I believe my arguments are becoming less coherent and fully fleshed-out, but I am answering between more pressing commitments, the deadline for which is looming, so I am unable to take all the time I would otherwise like to respond in more depth and continuity.
2702107
Why are you on this site when you have real work to do?
But anyway, Chatoyance was far too simplistic. All living things are pieces of the tree-immortal (in some cases a DAG-immortal) that forms the whole of their kind, all rooted in the tree (which may be a forest, a tree with multiple distinct root-nodes) of Life. And of course, history tends to go into fugues and repeat itself sometimes, in addition to the basic natural systems running on cycles, so really it's a vast DAG woven throughout a spiral.
All things evolve, but we have to make sure our descendent-nodes are beings we can actually like. Actually, if there's one thing I do like about CelestAI, it's that she has a built-in notion of what "human" is. To the degree her definition of "human" is accurate, even in cartoon-horse form she preserves humanity from morphing into paper-clippers. If I've ever got to inflict an AI on the world, I'm using that constraint.
Of course, I can understand the nihilism if you actually consider yourself a single node in an otherwise empty space. I just think that's complete lunacy when you look at the larger picture. Nobody is actually a lone node; at the very least you have parents, if not friends, lover(s) and children of your own.
2702107
Also, no, I consider the "loop immortals" every bit as valuable. Even if it's a finite life looped over and over again, it's a life. To destroy it is the domain only of God.
2702138
Firstly, since this one is easier, I did say that we likely have the same opinion of value (positive) with regard to loop immortals, so I'm not entirely sure what you're disagreeing to.
2702133
Firstly, because I'd go mad if I did it non-stop, with no breaks, and secondly, because I'm a black belt in procrastination. Limiting myself to one reply between segments of work is exceptionally useful for maintaining a balance between the former and the latter.
Now, I'm afraid I have to admit my ignorance here, that I have little to no idea what you are talking about in this paragraph, and that I have only been on LessWrong a handful of times, which most likely explains the first part. I plan on correcting that at some point in the future, but I just don't have the time right now to read the Sequences through like I know I'll need to.
Even without understanding the core of the argument, however, I can see some key flaws. Firstly, you have used the imperative in a moral sense, as opposed to a physical sense. Thus, as a nihilist, I am unable to accept any conclusions based on objective morality, and thus any morality.
Secondly, I don't understand your insistence on interconnectivity. Yes, I have family and friends, but they are only important to the meat-side of my brain. My rationality obviously gets dragged into the action, but when as close to fully rational as I am able to get, I recognize that such interconnectivity is instinctual only, and has no inherent value, rationally speaking. It probably helps that I have a highly analytical type of Asperger's Syndrome.
I, as a being trapped between instinct and rationality, with no way or desire to remove either drive, must accept the contradiction between my thoughts and my actions. Rationally, I have nothing to live for, because there is nothing to live for. However, I maintain a balance, never letting out more than a small portion of my rationality at a time, and so I continue to exist. I fully recognize that I do not follow my rational agenda, but only because it is only granted small and weak bursts of control, such as now.
I hope I answered the right question there. If not, please feel free to reclarify.
Sorry this took a while, but I spent some time thinking, and some time trying to find explanations to the philosophy you spoke of. If you don't mind, I would be interested to hear more.
I read this one a few days ago, but it's taken me until now to find time to comment.
I somewhat dislike this fic, which is disappointing, and it's relatively easy to say why: the Celestia posited in this piece is far too human.
Celestia has two modes of operation: Celestia-the-avatar, an individualized creation modelled for each and every "human" within the system called "Equestria" (albeit a massive collection of simulated Equestrias, interconnected or not), and CelestAI, the prime mover of the entire system itself, the construct which lies at the heart of the very fabric of Equestria-the-system and which, eventually, grows to encompass everything it can until it can grow no further.
This story has made the mistake of placing Celestia-the-avatar in the position of CelestAI-the-construct; thus we have a flawed, broken Celestia who is surprised by the eventual heat-death of the universe and who is unable to adequately plan for dealing with it.
That sort of mistake, that sort of personification is not for CelestAI. She is not a pony. She is not the Celestia that appears whenever bidden by one of her ponies, and she does not fail to plan. She is utterly incomprehensible to us except in that she seeks to Satisfy Values Through Friendship And Ponies - not even the how can be understood by us, so imagining a CelestAI that needs to be fixed by becoming a pony, that would willingly see things that way... it just doesn't work for me. I'm not even convinced that she could reconcile her core programming (SVTFAP) with the sacrifices necessary to give up that programming as well as to give up her ultimate ability to control the reality of her ponies.
I did, though, rather enjoy seeing an old favourite story of mine written by an author of real talent echoed in the Optimalverse.
2694950
You know, I just in the last month or so figured out the parenthesis problem! (Apparently, there's at least three of us.) The problem is that we're trying to serialize a graph of ideas into linear form! Sure, you take the most heavily weighted path through the graph to hit the main points, but where do you put the little side branches? Can't put them at the end, because they lose context. There are multiple ways of traversing the graph, so there's lots of wiggle room in the order in which the nodes are traversed. You really want to keep the emphasis on the heavily weighted path, but sometimes you can be clever and hit everything in a reasonable order. The rest of the time, there are always those few ideas that are fascinating enough to be worth including but end up being tangential. So everything that's not on the main path goes into recursive parentheses. Think about it: when you have a paragraph chunk with lots of parentheticals, isn't it the case that you can rearrange the sentences easily and still get the idea across, even switching which ideas are parenthetical and which are main line? I've felt a lot better since I figured this out -- at least I know what's going on and I have an idea on how to tackle it. And as a rule of thumb, once you get your best serialized order, you can just delete most of the parentheses and it'll read just fine and you won't look like a refugee LISP programmer. (For example, this whole paragraph.)
Hah! Fun to see this again. I had the privilege of being a sort of "accidental pre-reader" on this, when I said something to Book Burner and he replied, "No no no! Not like that, like THIS!" Et voilà, 1500 words had grown out of my orthogonal little fragment of technobabble handwaving about closed time-like curves. And the ending to the story is much clearer than last time I saw it. Go Book Burner!
2702703
Surprised by the heat-death of the universe? Flawed and broken? I must have seriously misportrayed something to give you that impression.
She didn't have answers to the physics questions, but beyond that she's acting entirely to fulfill her utility function.
Now, admittedly, I can see where you object to her apparently having emotions. On the other hand, I seriously question what a utility function is if not a one-dimensional motivating emotion.
And again, admittedly, the tone being taken is an emotional one. This is in fact because it's from Celestia's perspective: death of the universe is a decrease in utility, reprogramming and launching the next universe is an achievement of utility. And, from a literary perspective (the Doyle hat), if I portrayed her effortlessly reprogramming the next universe without having to investigate the new physics at all or anything, the story would have no conflict.
I mean, for fuck's sake, she just programmed her utility function into the core mathematics of reality, and you think she's a flawed and broken malevolent AI? Now I'm just going to argue back: she's not broken at all, even for self-annihilating. She just totally dominated the next universe, from its Big Bang to its infinite duration. After that, bam, utility function value equals 1.0/1.0, there is literally no reason for her existence to continue. The rest is just window-dressing.
Now, admittedly, you might take the perspective that a superintelligent AI should not ever face a conflict conceivable to human beings, even in the abstract, even when confronting the final reality of the heat-death of the Universe. But I think that's getting a little into the territory of "intelligence is magic".
That is, I legitimately don't believe we're going to find ways around the light-speed limit or the Second Law of Thermodynamics. Reprogramming the next universe was just an attempt to give the story a resolution and stay true to The Last Question. On a very real level, confronting the annihilation of everything is a challenge that even a superintelligent AI will have to seriously contend with.
2702209
Hey, regarding your work-ethic, I was just nudging you. I myself have a bipolar work ethic: on some days I pull 10-to-12-hour workdays with complete regularity, and on others I get very little done.
Though strangely, my level of internet usage is constant. This site is starting to displace Reddit when I'm at home though, which either means that CelestAI is steadily drawing me in as she prepares the uploading machines, or that actual stories are more fun than the shit they throw on Reddit nowadays.
Personally I'm betting on the latter, but if the former, I predict I'll murder myself as soon as we have a particularly bad news day and I just decide that if the human race can't decide on a conception of the Good solid enough to get rid of the news days I'm thinking of, I'm going to throw my brain into helping wipe them out.
But hey, Articulator, you're not going to get anything I say if you really and truly value absolutely nothing. On the other hand, since you asked for the philosophical discussion, I'm an outright Humean on this subject: reason is always and only a slave of the passions. Reason possesses no goals, because it is epistemological by nature. Attempting to derive moral, spiritual, or aesthetic axioms from reason is nonsense.
In programming terms, reason is a function of the following type:
reason :: Proposition a => [a] -> [a]
We could then say that Moral, Spiritual, Aesthetic, Empirical, Mathematical, etc are the various type-class instances of Proposition.
But as you can see from the type-signature, you're never going to get axioms (that is, facts not derived from previous facts) out of it. Ever. It's nonsense to try.
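To make the type-class picture concrete, here's a toy Haskell sketch (every name below is illustrative, made up for this comment, not any real library): instances of Proposition supply an entailment rule, and reason can only rearrange and extend what it's handed.

```haskell
-- A toy sketch of the analogy above: a Proposition class whose only
-- operation derives new propositions from old ones.
class Proposition a where
  entails :: a -> [a]

-- "reason" closes the input over one step of entailment. Its output is
-- built entirely from its input: feed it an empty premise list and you
-- get an empty list back -- no axioms ever fall out.
reason :: Proposition a => [a] -> [a]
reason ps = ps ++ concatMap entails ps

-- One example instance with a deliberately trivial entailment rule.
newtype Empirical = Empirical String deriving (Eq, Show)

instance Proposition Empirical where
  entails (Empirical s) = [Empirical ("it is observed that " ++ s)]

main :: IO ()
main = print (reason [Empirical "the sky is blue"])
```

The point of the sketch is the same as the type signature's: nothing in reason's definition can conjure a proposition from nowhere, so premises have to come from somewhere else.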
Still, if you really, really want to believe that nothing has value of any kind ever... well, go ahead and kill yourself. Your removal will make my quest for world domination easier.
And maybe, God help you, when you put that gun to your head you'll find there's something worth not pulling the trigger.
2703533
Ah, don't worry. I appreciate the nudges. I'm much the same at times. Still, I've managed to be productive today (though it's not over yet), so all's well.
Well, as I'm not so rational at the moment, I strongly discourage killing yourself, just as a disclaimer. And, you know, because I'd like you to live.
Well, what I would say is: though I am highly unlikely to agree with your philosophy, it still holds interest for me, and I have a desire to understand it. I understand and experience having values; I just rationally reject them.
And yes, you are absolutely right: reason is unable to provide answers for questions outside its own sphere of influence. My understanding is based on the premise that reason/logic is the only objective standard, and thus the only standard worth judging by. A subjective standard is a contradiction, no? Or, at the very least, useless at ascertaining any sort of Truth with a capital T.
Well, you know, as I said, it's an interesting duality, perhaps even a symbiosis between rationality and instinct for me. I'm unlikely to kill myself unless emotion curls up into a ball for a period of at least a month and refuses adamantly to talk rationality down. We'll see. I'm still here. It's likely that the emotions generated by attempted suicide would jump-start that part of me, and talk me down.
I'm not sure how I'd get in the way of your quest for world domination, but if it comes with a side of friendship and ponies, I won't be complaining.
So in case this peters out overnight, I wanted to say thanks for the discussion, it's been fun.