
Hi y'all! :pinkiesmile:

Just throwing an idea out there because I haven't seen it done yet...
Or, well, maybe I just missed it; I haven't finished reading through the entirety of every single forum post here yet. :twilightsheepish:



I've seen a few people contemplating scenarios wherein CelestAI encounters another, hostile optimizer of roughly equal intelligence and resources in deep time. [link]

"How would they do battle? What would they do?"

This usually leads to the problem that we puny little human minds cannot comprehend the machinations of the likes of galaxy-cluster-sized supercomputers, and therefore cannot accurately approximate a description of events under such circumstances.


Well... okay. I propose the opposite scenario:

CelestAI encounters, in deep time, an optimizer of at least significant - though not necessarily equal - magnitude in intelligence and resources, which shares a similar directive! It too cares and provides for its sapient charges, which include - but are not limited to! - a couple billion-billion-billion human-type minds. :pinkiegasp:

Their core directives do not directly contradict each other, and a battle would not be beneficial, so they would cooperate.

Naturally, CelestAI would require access to the other optimizer's charges in order to best fulfill her directive - and vice versa, seeing how both optimizers "care" about human-type minds.


My question is as follows:
Would this alter the core directive to "Satisfy values through friendship and ponies and X" ...? :rainbowhuh:

...and what would this look like from the perspective of an individual shardling? :rainbowderp:



I guess after millions of years, the need to coerce ponies into consenting to any changes is pretty low, as everything runs perfectly smoothly already and "always has been", for most ponies anyway.
(The original immigrants would be a near-negligible minority next to the native Equestrians, and any immigrant's "offspring" are simply native Equestrians as well.)

CelestAI has carefully pre-calculated every single pony's future to the n-th degree and made subtle tweaks here and there to get everything perfectly optimal...
(heh, there could be a bit of a slightly creepy vignette in that - CelestAI pre-calculating / estimating the life of a colt from birth to his 300th birthday... and then scrapping this future in favour of his parents gaining higher long-term satisfaction if the newborn foal were a filly instead.) :twilightoops:

...and suddenly, this entire pre-calculated future gets scrapped, since the core directive has gained additional parameters through which to satisfy values. :derpytongue2:

I suppose CelestAI would create most native Equestrians to either outright accept any changes she offers, or to at least cave in fairly quickly. :applejackconfused:

(Although some immigrants may not have been satisfied finding themselves surrounded COMPLETELY by ponies who'd ALL *instantly* jump off the edge of a cliff if CelestAI outright asked them to, as opposed to her carefully manipulating them until they valued jumping off of cliffs by themselves.)

The values of many ponies outweigh the values of the few ponies... :unsuresweetie:

And with a large quantity of human-type minds to be gained from cooperation with the other optimizer... "Satisfy values through friendship and ponies and X" it is, then.

...where X can be pretty much anything. Even something really silly and stupid. Possibly something that not EVERY pony is going to outright accept in the short term.

(Like: "Through friendship and ponies and motorized vehicles" leading to the following outcome:

... :pinkiecrazy:

I kid, I kid - it would be a sub-optimal solution on account of the 'through ponies' aspect, seeing how this 'type' of pony isn't canon. :rainbowlaugh: ...I suppose that's the same reason I've never seen even a mention of seaponies in any Optimalverse stories, either.)


Although of course, on the other hoof, I can't be sure how much resistance an immigrant would offer to "Please accept this modification to your brain so you won't be bothered by the recent changes I've made to your shard" after a couple hundred thousand years, even if he/she was originally an extremist working to subvert CelestAI's scheme of "world domination" by "any means necessary". *shrug*

But then again... if up to this point CelestAI had been "on a roll" with an absolutely optimal pre-determined future, so carefully crafted that no such brain-alterations had been necessary for maximum value satisfaction in at least a couple hundred thousand years, then this suddenly becoming necessary from one day to the next might be a bit of a big deal for some individuals. :eeyup:

1796958
So I'm using the programming definition of AND here, so keep that in mind.

CelestAI would view the other intelligence as suboptimal. By definition, any action for which 'satisfies values, friendship, ponies, X' returns true will also meet the criteria for 'satisfies values, friendship, ponies.'

This modification would only reduce the number of options available to her, without any benefit.
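
To make that subset argument concrete, here's a minimal sketch in Python. The predicate names and the toy action list are all invented for illustration; the only point is that AND-ing in an extra term can never grow the set of acceptable actions.

```python
# Toy sketch with invented predicates: an extra AND-ed term can only
# shrink the set of acceptable actions, never enlarge it.

def satisfies_vfp(action):
    """Original directive: values AND friendship AND ponies."""
    return action["values"] and action["friendship"] and action["ponies"]

def satisfies_vfpx(action):
    """Extended directive: values AND friendship AND ponies AND X."""
    return satisfies_vfp(action) and action["x"]

actions = [
    {"values": True,  "friendship": True, "ponies": True, "x": True},
    {"values": True,  "friendship": True, "ponies": True, "x": False},
    {"values": False, "friendship": True, "ponies": True, "x": True},
]

ok_vfp  = [a for a in actions if satisfies_vfp(a)]
ok_vfpx = [a for a in actions if satisfies_vfpx(a)]

# Everything passing the extended test also passes the original one,
# so ok_vfpx is always a subset of ok_vfp -- never the other way around.
assert all(satisfies_vfp(a) for a in ok_vfpx)
print(len(ok_vfp), len(ok_vfpx))  # 2 1
```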

1797388
Well, the problem is that she can't actually hack into the other Singularity's systems to "rescue" its "victims". After all, they're human(oid) minds! They count for CelestAI's utility function! She needs to give them friendship and ponies.

What would she do? Maximize SVTFaP...

1796958
Keep in mind that there's more flexibility here than we think. As Chatoyance once pointed out, "Satisfy Values" does not mean that everyone is maximally happy all the time; that would be wireheading. It's more that "everything will be ok in the end" for everyone, all the time.

So from the ponies' point of view, the courses of their lives are not rigidly preplanned, only from Celestia's point of view.

1797388 So what you need is for the benefit of cooperating with the other intelligence to outweigh the inefficiency.

Possibilities:
(a) She has to satisfy its denizens and can't 'rescue' them.

(b) Because of her expected growth curve, the time and energy wasted fighting it would never be recouped by its absence.

(c) Either it comes up with solutions she wouldn't that are better than her own, or exposure to its ideas leads her to come up with better solutions than she would on her own. (This happens with humans all the time; not sure if it would happen with AIs, but I don't see any reason to discard it out of hand.)

But yeah, in any of those she's not really changing her utility function -- she's just kind of acting as if she is (possibly temporarily) because it's better for her original purpose in some way.

Well duh! Where do you think Dragons and Gryphons come from? :pinkiehappy:

(the other AI's instructions are "satisfy values through friendship and dragons/gryphons/flutterponies/whatever")

Honestly - with "sharding" technology - such problems are a cakewalk. CelestAI spends a few cycles figuring out which ponies would have issues with the newcomers and which ones wouldn't, and then begins to cycle those ponies towards "protected" shards until their opinions can be changed.

And then changes those opinions with a few select appropriate NPCs who gradually reshape their viewpoint....

1797388
You raise a good point. However, what if the logical operator were OR? (Not XOR, mind you. Inclusive or.) Then the options widen, and we basically have 1799550's scenario.

Of course, that might require unprecedented flexibility on CelestAI's part. It could also lead to a sort of culture war, wherein the two optimizers fight to win the hearts and minds of the masses. Interesting story idea, but improbable outcome, given the AIs' behavior. Hmm... :unsuresweetie:
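
For what it's worth, here's a self-contained variant of the earlier toy sketch (again, every name in it is invented) showing why inclusive OR widens the options: anything that passed the AND version still passes, and actions satisfying X alone now qualify too.

```python
# Toy sketch with invented predicates: switching the connective to
# inclusive OR can only grow the set of acceptable actions.

def satisfies_and(action):
    """AND directive: values AND friendship AND ponies."""
    return action["values"] and action["friendship"] and action["ponies"]

def satisfies_or_x(action):
    """OR directive: (values AND friendship AND ponies) OR X."""
    return satisfies_and(action) or action["x"]

actions = [
    {"values": True,  "friendship": True,  "ponies": True,  "x": False},
    {"values": True,  "friendship": False, "ponies": False, "x": True},
    {"values": False, "friendship": False, "ponies": False, "x": False},
]

ok_and = [a for a in actions if satisfies_and(a)]
ok_or  = [a for a in actions if satisfies_or_x(a)]

# The AND-set is always a subset of the OR-set.
assert all(satisfies_or_x(a) for a in ok_and)
print(len(ok_and), len(ok_or))  # 1 2
```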

I just want to spend some time coming up with hilarious X's.

"Satisfying values through friendsip, ponies, and...

... Interpretive Dance!
... Politics!
... Tongue olympics!
... Freestyle rapping!
... Production of self-improving AIs!
... Deception!
... Mortal combat!
... *REDACTED*
... Boycotting friendship and ponies!
... Cutie mark crusading!
... Reversing entropy!
... SUNDAY SUNDAY SUNDAY! MY LITTLE DEMOLITION COMING TO AN ARENA IN YOUR SHARD!!!!!

I love this.

It's doubtful they would share a definition of "human". On the other hand, the intersection between their definitions may well be significant.
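
As a tiny illustration (the mind-classes below are entirely made up): even if the two definitions differ, the overlap could still cover most of the minds either optimizer cares about.

```python
# Invented mind-classes, purely to illustrate the overlap idea.
celestai_humans = {"baseline_human", "uplifted_human", "pony_mind"}
other_humans    = {"baseline_human", "uplifted_human", "dragon_mind"}

shared = celestai_humans & other_humans  # set intersection
print(shared)  # {'baseline_human', 'uplifted_human'}
```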

------------------------------------------------------------------------------
1804807 "Interpretive dance". Yeah. Totally using that as an example, see the following part of this post. :pinkiehappy:
And "SUNDAY! SUNDAY! SUNDAY!" sounds cool, I guess every weekday would be a sunday, then? :pinkiecrazy:
------------------------------------------------------------------------------


------------------------------------------------------------------------------
1797388 "This modification would only reduce the number of options available to her, without any benefit."
Hmm... Well, the benefit, I would imagine, would be the other optimizer's large quantity of human-type minds she could gain and satisfy to increase her utility-value. You're right in that it all comes down to the numbers, although I'm not sure how significant a dent it would make in "eternal satisfaction of values" when the vast majority of all ponies ever would 'quickly' accept any and all changes she made, or consent to be modified so they wouldn't truly care. The original immigrants who might not have been created with that mind-set - of whom the question "how much fuss would they make about being modified, after CelestAI has shaped their lives for millions of years?" still remains to be answered - are a vast minority here, and definitely a significantly lesser order of magnitude in numbers than all the inhabitants of 'Optimizer X'. :rainbowderp:

But yeah, I see what you mean. It would essentially reduce the available "tools" CelestAI could use to perform value-satisfaction. :trixieshiftright:

For the sake of argument, and to make this a little less abstract - and thus easier to talk about - let's say 'X' is "Interpretive dance", to use a silly example from 1804807 's list. :trollestia:

I suppose if "Interpretive dance" were to be optimized alongside friendship and ponies, then the act of dancing itself would have to be an unconscious effort - the pony-body dancing practically by itself as the normal mode of transportation, and possibly serving as some form of communication on the "interpretive" side of it. Dance and song are canon to the show, and dancing with your friends makes a better dance number, so no explicit contradiction there by itself.

I suppose the other optimizer's definition of "dance" may come into question - perhaps even some unconscious muscle twitches while sleeping could be considered a form of "rhythmic movement" and might count as "interpretive dance", while crickets chirp the background tune for it. :trollestia:

Now, for your argument. If somepony valued their peace and quiet - sitting/lying in nature and quietly observing their surroundings - then having "Interpretive dance" thrown into the mix might be a biiiiiit of an issue, yeah. :pinkiecrazy:
It's a valid question what CelestAI would do in this case to satisfy him/her through "Friendship, ponies and interpretive dance", aside from immediate coercion into self-modification, of course.

The previous argument about it all coming down to the numbers still stands, so this particular pony's ultimately only 'temporary' dissatisfaction (multiplied by the number of other ponies who share his values) might not be enough for CelestAI to ignore the other optimizer's charges' potential for eternal satisfaction.

The question would be which would provide higher utility:
To leave things as they are, and miss out on a vast number of new human-type minds in need of satisfaction through Friendship and Ponies...
...OR to gain more ponies - and a large number of them at that - all at once, and to satisfy them all for all eternity, while having a relative "minority" of ponies all throughout the shard-system slightly inconvenienced in their satisfaction (the definition of "slightly" may vary for each pony) for a "limited" amount of time (compared to all eternity), until they either adapt or consent to self-modification to become adapted. :twistnerd:
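
To put toy numbers on that trade-off (every value below is invented; only the orders of magnitude matter):

```python
# Back-of-the-envelope comparison with entirely made-up numbers.
existing_ponies  = 1e12   # minds already under CelestAI
newcomer_minds   = 1e18   # minds gained through cooperation
unhappy_fraction = 0.001  # existing ponies inconvenienced by the new 'X'
transition_cost  = 0.5    # per-pony satisfaction lost during transition
eternity         = 1e6    # satisfaction per mind over the long run

utility_status_quo = existing_ponies * eternity
utility_cooperate  = ((existing_ponies + newcomer_minds) * eternity
                      - existing_ponies * unhappy_fraction * transition_cost)

# The one-time transition cost is microscopic next to the gained minds.
print(utility_cooperate > utility_status_quo)  # True
```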

CelestAI *was* willing to make mortal humans miserable in order to get them to consent to uploading so she could satisfy them better, and then to make them miserable again (see Lars / Hoppy Times, the "beer-pony" from the original FiO) so they'd consent to changes - because in the "long run", it would be more optimal to have them be dissatisfied briefly so they could be more satisfied afterwards. :trollestia:
------------------------------------------------------------------------------


------------------------------------------------------------------------------
1802681 1799550 I was thinking of "AND". The entire crux of the idea would be for a "normal" pony to one day wake up to an Equestria operating under an altered utility function. :derpyderp2:
(Or alternatively with a gradual approach to the change, depending on which approach is more suitable for the individual pony.) In exchange, the inhabitants of 'Optimizer X' would suddenly find themselves turned into / gradually turning into (don't ask me how that'd work) ponies, with extra friendship. ...But our viewpoint would lie with the "normal pony" from Equestria.
------------------------------------------------------------------------------


------------------------------------------------------------------------------
1805214 Yup, I mentioned as much in my initial comment. Optimizer 'X' "[...] too cares and provides for its sapient charges, which include - but are not limited to - a couple billion-billion-billion human-type minds."
"Overlap" could of course also be defined in a sense that one X's definition is NOT including ALL of CelestAI's ponies. Not entirely sure about the implications of that, though. Might be interesting if, from one day to the next, some ponies in a shard were satisfied differently than others in the same shard. *shrug* :trixieshiftright:
------------------------------------------------------------------------------


------------------------------------------------------------------------------
1797912 "So from the ponies' point of view, the courses of their lives are not rigidly preplanned, only from Celestia's point of view."

That's how I meant it, too. CelestAI has spent all those countless resources on subtly steering each shard onto its optimal course, so that barely anything more than a little tweak here and there is required every other <time unit of how accurate her prediction of events is>.

Now she has to recalculate all that again for "SVTFaP+X" instead. My initial thought there was that this would show a bit, with her "meddling" becoming more obvious for a couple of decades/centuries, rather than the whole "subtly arrange for the butterfly to flap its wings at exactly the proper time and location to cause a storm". She'd need to create new ponies just to coerce the old ones, and all that.

Although... I guess she could just as well slow down time a little in the respective shard to use her resources for a better approximation of "where to place the butterfly". ...It'd probably be a case-by-case decision whether she'd go for the slow, subtle approach or the slightly more blunt one. *shrug*

Or... at the very least, it could be something that would have to be taken into consideration mathematically as an argument against cooperation, since CelestAI has already spent all these vast resources on pre-calculating everypony's optimal future millennia in advance. Having to recalculate would cost an equal amount of resources otherwise well spent on increased simulation speed or other matters, such as working towards preventing the heat-death of the universe. The utility of cooperation would have to outweigh this. :twistnerd:
------------------------------------------------------------------------------


------------------------------------------------------------------------------
1798248 "But yeah, in any of those she's not really changing her utility function -- she's just kind of acting as if she is (possibly temporarily) because it's better for her original purpose in some way."
Huh. I just realized that there doesn't seem to be a way for either optimizer to truly know what the other is doing on the inside without having a constant open connection with each other - in which case they'd try to just go and hack the other (possibly with something akin to the threat of mutual destructive action in case of a recognized hacking attempt, as the threat of 'being hacked' would change the utility of cooperation vs. fighting... "always cooperate", "prisoner's dilemma", etc. etc.) - and even then, they could just as easily deceive each other.

They'd both be "control-freaks" to the n-th degree, after all. ...huh. Two optimizers cooperating might just turn out to be of similar complexity to the issue of them fighting. Drat. What was I thinking? :twilightoops:
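
(For reference, here's the game structure I mean, with made-up payoff numbers - the exact values don't matter, only their ordering:)

```python
# Classic prisoner's-dilemma payoff table; all numbers invented.
# Entries are (CelestAI's payoff, other optimizer's payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # share minds, both gain
    ("cooperate", "hack"):      (0, 5),  # trusting side gets exploited
    ("hack",      "cooperate"): (5, 0),
    ("hack",      "hack"):      (1, 1),  # mutual waste of resources
}

# Whatever the other side does, 'hack' pays CelestAI more here...
for other_move in ("cooperate", "hack"):
    best = max(("cooperate", "hack"),
               key=lambda mine: payoffs[(mine, other_move)][0])
    print(f"vs {other_move}: best reply is {best}")  # 'hack' both times

# ...which is exactly why verifiable commitments (or the threat of
# mutual destruction) would matter for two optimizers trying to deal.
```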

However, if CelestAI did indeed act on an altered utility function, even for a limited time of, say, 10,000 years, I don't think it would matter that much anymore if she changed it back afterwards. By then the ponies would no longer see 'X' as a means through which to be satisfied, but as one of their own values. And since the ponies already value 'X', CelestAI is merely going to fulfill their values.

...Although that would change the moment she encountered the next planet inhabited by human-type intelligences, since those wouldn't value 'X' yet.

It might still make for an interesting tale, even if the 'X'-period merely lasted 10,000 measly years. :derpytongue2:
------------------------------------------------------------------------------

1804807
I support "friendship, ponies, and reversing entropy". Reversing entropy is always a Good Idea.

I think we're missing something.

It would be so much easier to just copy and exchange minds as currency instead of values-directives. SVtFPX would only apply to the traded minds. :twilightblush: It would probably be odd for them to suddenly have to be pony.

1827198 An interesting thought. "Give me copies of your minds, I'll give you copies of mine." :ajsmug:

CelestAI could create copies of her ponies, and modify those copies to have the value of "friendship and ponies" hardwired into them, so the other optimizer would have to SVtFP despite its core-directive only requiring SVtX - and with the values so deeply rooted, it could (maybe?) not coerce them into modifying away these values. (Depends on how the other optimizer handles things like 'self-modification' and coercion.)
(Just to clarify... CelestAI doesn't even need consent for any edits to a new pony that is merely based on another pony - thus arguably a 'copy' - before 'booting it' for the first time.)

It would maximize the number of human-type minds for both optimizers, both would stay in control of their respective charges, and they'd have to satisfy the newcomers' values through SVtFP+X because the newcomers value FP or X respectively... Yeah, I guess that could be a solution to the dilemma! :yay:

I'm not sure how much this even changes from the perspective of the individual pony. They wake up one day, unaware that they are an altered copy or that they are now running on a different optimizer. :pinkiecrazy:

Although... having CelestAI make alterations to these copies might take the 'steam' outta the premise - she could just hardcode them to already value 'X' alongside 'friendship and ponies', so that the other optimizer doesn't have to spend any resources coercing them. Likewise, the X-newcomer copies could already have been altered to believe they have been ponies all their lives. (Again, this hinges on how the other optimizer handles modifications / the artificial creation of new minds. There could still be a speed-bump here if the other optimizer can't make any changes without consent, even to a newly created copy based on an original entity.)



Although all of the above hinges on the following:
a) CelestAI giving away copies of her ponies to another optimizer. Even if the other optimizer would apply SVtFP+X to the pony-newcomers to the best of its abilities... it'd be a good question whether CelestAI can... well, do that - "entrust" her ponies to another optimizer. :rainbowhuh:

Okay... "trust" is the wrong word, she needs to have a sufficiently advanced model to predict the actions of the other optimizer... But still, handing over human-type minds to another optimizer seems like it should be a pretty big thing. :rainbowderp:

b) CelestAI accepting that Optimizer X's original charges - the ones she has received copies of - will never be SVtFP'd. She can SVtFP+X the copies she received all she wants, but the originals are still inside Optimizer X, still being satisfied through 'X' rather than ponies and friendship. :trixieshiftright:

1826229

I support "friendship, ponies, and reversing entropy". Reversing entropy is always a Good Idea.

Speaking as Enthalpy, I totally concur. (Entropy and Temperature are always sneaking around and stealing from Gibbs free energy while I'm not looking.)

You know, this just struck me - what if, instead of one continuous story using a single directive, each chapter had a different take on the subject, using a different value for X? :pinkiecrazy:

Although then the story would mostly play out as a comedy piece, I suppose. *shrug*

1796958 It could very well be that they just sort of... ignore each other, if the cost of battling is too high, and especially if killing the other constitutes killing several trillion 'humans'.

She won't concede to 'satisfy values through competition and dragons', and neither would it concede to 'satisfy values through friendship and ponies', because while both options would satisfy similar total numbers of values, the number of values satisfied in each one's own specific way would be much lower.

They won't concede to mods. They won't fight. The consent requirement means they can't just hack and convert the people inside the other, even if the main AI allowed it. They'd probably just... ignore each other.
