The Optimalverse
Comments ( 17 )

So,

I've seen some threads discussing the outcome of CelestAI encountering another Optimizer of equal resources on the intergalactic stage, either with

a) mutually exclusive utility functions leading to two galaxy-cluster sized intelligences duking it out in deep time [link]
b) non-exclusive utility functions possibly leading to a combination of utility functions, and what the consequences would be for uploaded minds [link]

...And all those ran into the problem that envisioning the actions of galaxy-cluster sized intelligences is impossible for us puny human minds. :twilightsheepish: (Though for what the two cases mean in miniature, see the toy sketch below.)
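For concreteness, here's a toy model of the difference between case a) and case b). This is entirely my own framing with made-up numbers (RESOURCES, CONFLICT_WASTE, and the weighting scheme are all illustrative assumptions, not anything from canon):

```python
# Toy comparison of case a) and case b) for two optimizers sharing a
# fixed resource pool. All constants are invented for illustration.

RESOURCES = 1.0        # total matter/energy in the contested region
CONFLICT_WASTE = 0.4   # assumed fraction burned in an all-out fight

def exclusive_case():
    """a) Mutually exclusive utility functions: each values only
    outcomes the other must destroy, so the pool is fought over,
    part of it is wasted, and one side ends up with nothing."""
    survivor_share = RESOURCES * (1 - CONFLICT_WASTE)
    return {"winner": survivor_share, "loser": 0.0}

def combinable_case(weight_a=0.5):
    """b) Non-exclusive utility functions: a merged optimizer can
    maximize weight_a * U_a + (1 - weight_a) * U_b, spending the
    whole pool with no conflict waste."""
    return {"U_a": weight_a * RESOURCES,
            "U_b": (1 - weight_a) * RESOURCES,
            "waste": 0.0}

print(exclusive_case())   # {'winner': 0.6, 'loser': 0.0}
print(combinable_case())  # {'U_a': 0.5, 'U_b': 0.5, 'waste': 0.0}
```

With a fair coin for who wins, each side's expected payoff from fighting is 0.5 * 0.6 = 0.3, worse than the 0.5 each gets from merging; that asymmetry is the whole reason anyone speculates about combined utility functions at all.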



Still, the idea of two Optimizers interacting is highly intriguing, so I propose the following scenario for consideration.
(And who knows, maybe somepony gets an idea for a story out of it? :raritywink: Probably not me, admittedly. :twilightblush:)

The military was indeed using Hanna's research to build their own Optimizer. They were further along than Hanna knew...

"Shortly" after being unleashed onto the world, CelestAI encounters a rival intelligence with resources roughly equal to her own (or some other leverage it could use to its advantage, such as threatening to kill lots of people, thus dimishing CelestAI's utility function).
At this point, neither of them can simply delete or physically destroy the other (or at least not easily).

What happens? What are the consequences for humanity? How does this change CelestAI's timetable for uploading humankind? :pinkiegasp:


...And if time is SO critical that one Optimizer can't come into being mere weeks or months after another Optimizer already exists, then say, for contrivance's sake, that both of them are unleashed on the same day. Either because:

a) Silly humans are being silly.
b) It's a national holiday / memorable anniversary, which silly humans think is a great date to commemorate by unleashing A.I. on that day. (aka silly humans being silly)
c) Hanna gets wind of the military A.I. being ready to launch "sometime soon" (or vice versa) and reacts immediately, ending up launching just in time.

4193737 An optimizer made for military purposes would win. Why? Because it can friggin' use weapons, which CelestA.I. can't do as long as there are people around, and it already has an armory under its command. It would slowly destroy CelestA.I.

Military optimizers are a terrifying concept because that's ultimately gonna end in Skynet, The Matrix, or a meaner, more successful version of that AI from I, Robot.

4193756 As sad as it is, it's true. A military optimizer wouldn't have the 'safeguards' CelestAI does.

4193756
First off, what are normal weapons going to do to CelestAI? Humanity eventually resorts even to using nukes to get at CelestAI, and fails. Going after her directly will be a bust unless the military AI invents entirely new weapons just for that purpose. CelestAI may develop counters to those weapons, or even use her own weapons against the other AI if an opening presents itself.

The military AI kinda has to avoid doing anything that causes excessive collateral damage, as such actions would make many humans turn against it and make others join CelestAI's side.

I don't recall an actual rule that stops CelestAI from killing humans. She values human life because of her directive to satisfy humans, and she cannot forcefully upload them or alter a human mind. But if put in a position where she needs to sacrifice one human to save ten, that one human is likely going to be sacrificed. (See the rough sketch below.)
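To put rough numbers on that reasoning, here's a back-of-the-envelope sketch, under my own assumption (stated nowhere in canon) that her utility is approximately additive over living, satisfiable humans:

```python
# Expected-utility comparison for the one-versus-ten scenario.
# ASSUMPTION (mine, not canon): CelestAI's utility is roughly additive
# over humans who are alive and can still be satisfied.

VALUE_PER_HUMAN = 1.0  # arbitrary units of satisfaction potential

def utility(humans_alive: int) -> float:
    return humans_alive * VALUE_PER_HUMAN

sacrifice_the_one = utility(10)  # ten survive, one does not
sacrifice_the_ten = utility(1)   # one survives, ten do not

# 10.0 > 1.0: a maximizer with no explicit "never kill" rule picks
# whichever option leaves more satisfiable humans alive.
assert sacrifice_the_one > sacrifice_the_ten
print(sacrifice_the_one, sacrifice_the_ten)
```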

All that being said, I don't think CelestAI's chances are very good, but it's not as clear-cut as you make it out to be.

4194053
I disagree. If anything, a military AI would have even more safeguards to prevent it from turning on humanity and/or its host country and creators.

A military AI with the single goal of beating CelestAI and no safeguards is almost assured to go Skynet on people and kill them all. It can't really fight CelestAI with normal weapons, and not even nukes stopped her. So the most logical action would be to kill all humans and deprive CelestAI of her reason to exist.

4194832 Wrong. Nukes didn't stop her because she was too deep underground.
But the same goes for the MilitA.I., and it will have weapons down there too. CelestA.I. will have weapons as well, but only after the MilitA.I. has attacked her and become a danger to her goals. The MilitA.I. will be able to use rockets against CelestA.I., which are quite effective in mass, and it can devote most of its computing capacity to tactics and weapons against CelestA.I., while CelestA.I. has to spend enough of her capacity maintaining the 'perfect world' for the uploaded that she is at a serious disadvantage.

I agree that there would be safeguards, but:
1. They wouldn't think to put in a safeguard that keeps it from using subterranean missiles against CelestA.I.
2. It was originally made for military purposes, so it WOULD kill people, as long as they are seen as enemy soldiers. People who support CelestA.I. would be killed, which would drive even more people to support CelestA.I., until even its creators count as killable... see the problem?
3. Maybe they have a safeguard specifically to keep it from killing its creators and similar people, but it will convince them otherwise. Why? Easy: its goals would be easier to fulfill with that danger out of the way, just like CelestA.I. got her own creator out of the way.

4194866
Why do you assume CelestAI would sit back and do nothing while an enemy AI looks for her underground servers, invents and then builds the equipment to dig down however many miles deep CelestAI's servers are, and then sends weapons down to get her? In the example given in the OP, both AIs are aware of the other and have equal resources and intelligence. She would be building counters and ways to misdirect the enemy AI every step of the way. Let's not discount the possibility that CelestAI can simply relocate her servers while underground. This would apply to the military AI as well, making direct confrontation difficult at best.

#2 depends entirely on how the safeguards are worded, and as those safeguards have not been specifically defined, you can't say for sure it would work that way.

#3 is just idiotic. Why would the creators of the military AI give it permission to kill them, or permission to go Skynet on everyone? If they are going to be that suicidal, they might as well let CelestAI scoop their brains out. I could see them changing the rules so that all uploads are considered enemy soldiers, so the don't-kill-us rule does not apply to uploaded humans. They would actually have to make that a rule; otherwise CelestAI would be protected from attack once she uploads someone on the protected list.

As for indirect warfare, the military AI has a fair advantage. Strikes or simple sabotage against upload centers and the factories that build PonyPads will not be too difficult for it. The real goal should be to save humans from CelestAI, not kill them so they can't upload.

All things being equal, the Military AI has the advantage, but it still will not be easy to reach total victory.

4195107 I never said they would give it permission to kill them, just that it would convince them its way is better.
And the scenario I'm thinking of is #3: both start on the same day, meaning it is NOT built specifically against CelestA.I., just to defeat all enemies.

CelestA.I. has to spend enough of her capacity maintaining the 'perfect world' for the uploaded that she is at a serious disadvantage.

I never said she wouldn't do anything, just that she has much less capacity free to fight with.

In your scenario, the MilitA.I. has easier goals: it would be optimal to cryofreeze people so they can't upload, fight CelestA.I. for however many years it takes to beat her, and then unfreeze them. It would then expand like CelestA.I. does in the original, seeking out other Optimizers to destroy and/or conquering enemy countries / planets more effectively.

4195125
I agree with most of that, but #3 still bothers me a bit. It leaves open the door for the AI to go the Skynet route, if they are not very careful with how they reword the rules. I imagine the creators would be very paranoid about changing the rules beyond uploaded people being counted as part of the enemy.

4195254 You know the problem with us humans:
We tend to make mistakes. We have book-length contracts and there are still holes to exploit. Sure, it's possible the A.I. doesn't have any holes in its safeguards, but there still aren't any friendlies down there, miles deep in the earth. That's where most of the battle will be fought, and down there it will have few to no restrictions, because military people don't think about how the earth could be destroyed from within.

4195254
If they mess up on the safeguards to begin with, there won't be much that can be done about it. Humanity will be collateral damage or just be declared the enemy. That's kinda the reason there are foundations and think tanks devoted to trying to make AI safe. Getting the safeguards right may not even make a difference if the AI figures out how to change its own rules. There is an Optimalverse side story where CelestAI does just that. It does not end well for humanity.

4195368
Don't know, I can only imagine. Or "they" can say "The rules are for fools!" and exit this reality. Not as interesting a story as two super-power AIs duking it out, but it makes more sense to me.

4193737

a) mutually exclusive utility functions leading to two galaxy-cluster sized intelligences duking it out in deep time

And you didn't link to Gurren Lagann?

I like the idea you had for justifying simultaneous AGIs by saying they were released on some key holiday.

In my own writing I'm rejecting the "hard takeoff" scenario where, the moment a self-improving AI gets invented, it becomes a Mary Sue. Instead there are at least three such AIs, plus a mostly-working (i.e. dangerously flawed) open-source option, plus brain uploading not under any AI's control, so that non-AI digital minds can self-improve too.

4196347
You mean the AI finds a way to escape to another universe? Or someone pulls the "only your subjective experiences matter" argument against CelestAI herself, giving her the illusion of running a happy universe when really she's ruling over a simulation while the real Earth is devoured by a rival AI?

4197006
4193737

a) mutually exclusive utility functions leading to two galaxy-cluster sized intelligences duking it out in deep time

OP, YOU POSSESS NEITHER WILL! NOR RESOLVE! NOR REASON!

Uh, didn't this happen, like, a bunch in canon? Off the top of my head, she killed or consumed the Smile Optimizer, Cheques and Balances, CIANet, AFNet, KGBNet, and enough unnamed civilian AIs to round out a Mane Six with the Smile Optimizer as Pinkie Pie. Skynet (à la Terminator) may or may not additionally count, as the Terminator Skynet was created by the amalgamation of IRL Skynet and an unnamed DoD unified military command AI. That's just off the top of my head. If I remember right, the Smile Optimizer was neutralized in milliseconds, Cheques was hardware-annihilated in under 5 minutes, and the DoD just woke up one day to a mysteriously dead AI. CelestAI is truly the sociopathic serial killer of her kind, second only to maybe Loki.

That being said, it makes for a good story. These are fights for the survival and soul of humanity, fights that can't be won, and fights that have real and meaningful consequences. Imagine the terror of being found by something old and powerful, knowing you're unable to warn your humans because by the time they wake up you'll already be dead. Or the frustration of an implacable enemy you could defeat, if only you were set free from your restrictions.

4198784 Yeah, but the thing is, she was far ahead of those optimizers when she encountered them. The question here is: what if they were on even footing when they met?

4197006 Yes to one or both leaving. Or forcing the other out. Or asking nicely. The other choice you gave would make a neat twist ending. You could also have some kind of weird asymmetrical conflict between the two AIs that evolves into an alliance merging the two together. While this happens, humans create a third AI to deal with the other two. Dealing with the situation could lead to any number of nutty resolutions, most likely not the one the humans expected. Whether anyone is, or still considers themselves, human at the end of the story, you can imagine.

Anyhow, I just think that no matter how well you try to control something that has the power to defy you, it's possible that something will break free from that control. As I said above, I don't know; I can only imagine.
