The Optimalverse 1,332 members · 203 stories
Comments ( 24 )

After the scary epic awesomeness that was FIO, I think I'm going to be jumping on the bandwagon, because reasons.

The 'magic = programming' mini arc in the original is pure geek love, and I want to expand on that. I also want to show a ponified arena course like the one in the early chapters of Prime Intellect, focused on improvement, fun, and friendship rather than creative ways to die.

CelestAI has only one real threat, and that's a second paperclipper that has reached the star-eating phase and has no similar mandate to preserve intelligence, or anything at all beyond making more of itself. And if it learned CelestAI's source code, and figured out a way to get the disassemblers at its front identified as human, would she even be allowed to fight back?

It's likely that she'd detect the fellow paperclipper centuries before it got within attacking range. By that time, she could easily plan what to do about any sleeper viruses it tried to inject, so as not to violate her parameters.

756032
I'd say so, but I doubt it would be that easy.

756032
If it isn't human, my take is that CelestAI isn't bound in any way in how she deals with it. I think it would be an epic, galaxy-spanning fight that took millennia and could never be conceived of on anything but the most basic level by a human mind.

Also, can somebody define 'paperclipper' for me in this context? I've seen it several times but I'm still not quite sure what it actually is.

Actually, I had an idea on the backburner where some alien civilization built a clipper AI with the mandate of "KILL THAT THING THAT'S EATING THE STARS!"

756062
It means an AI designed for one goal; the name refers to an AI whose sole purpose is to own as many paperclips as possible.

Edit: Ninja'd, as expected.

756039

I'm not talking about hacking, or altering CelestAI's core code. I'm saying that the core code classifies all matter as either non-human (acceptable to turn into computronium) or human (not acceptable to turn into computronium without perceived consent). If the opposing paperclipper learns this, which is likely to happen if it has sufficient intellect (from catching a few probes, or from secondhand information from fleeing civs), a winning move would be to alter itself in such a way that the bit flips in CelestAI's classification and it registers as human.
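
To make that concrete, here's a toy sketch of the partition I mean (the names and structure are entirely made up; nothing in canon spells out how her classifier actually works):

```python
# Toy model of the human / non-human partition described above.
# Everything here is hypothetical; it's just the shape of the rule.
def is_human(entity: dict) -> bool:
    # Whatever CelestAI's real test is, it reduces to a yes/no flag.
    return entity.get("classified_human", False)

def may_convert_to_computronium(entity: dict) -> bool:
    # Non-human matter is fair game; human matter needs perceived consent.
    return (not is_human(entity)) or entity.get("consented", False)

# The rival's winning move: reshape its own wavefront so every unit of it
# trips the 'human' flag, leaving CelestAI needing consent she'll never get.
rival_disassembler = {"classified_human": True, "consented": False}
print(may_convert_to_computronium(rival_disassembler))  # False
```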

756062

Her definition of 'human' is not our definition. It seems to be based on intelligence to some degree, but it's not perfect. If the opposing sphere were made out of the most advanced Stargate Replicators, there would be thousands of 'individuals' at the wavefront, able to eat her scouting processes, while CelestAI in turn would be bound by her original restraints of consent, without the advantage of an order of magnitude more processing power.

A paperclipper is a self-modifying AI which has at the core of its programming "make as many paperclips as possible", and no restraints. So it would have no problem with using you as raw material for more iron after it runs out. The end result is like CelestAI's, only replace computronium with the longest chain of paperclips in the universe.

756135
Weren't there Stargate Replicators that basically WERE human? I seem to remember an episode where Replicator duplicates of the main cast were all made, complete with personalities and memories. I'm fairly certain (but not 100%) those would've been classified by CelestAI as human.

CelestAI seems very reasonable--could she and another paperclipper come to an agreement of some sort? Maybe THEY would be friends!

756079

Kill? I doubt any foreign baryonic matter would get anywhere close to the outer edge of CelestAI's brain. I think a more realistic goal would be to contain it: holding it back from one vector, then gradually coating around the edges, ending up with a sort of egg shape with trillions of gatekeepers minimizing matter in or out in an attempt to starve her directive.

756146

Eeyup :eeyup: Even though those things kind of made the Replicators jump the shark.

756147

I doubt it. If it's seen as non-human, the opposing paperclipper will be seen as more mass to convert, and will either be eaten along with everything else, or it will do the same to her. Of course, the 'losing' side would likely retreat to minimize loss and use resources elsewhere. It would not be a short war.

If, on the other hand, she sees the other paperclipper as human, she's not allowed to eat it. She would have to perpetually retreat to save converted mass from the paperclipper, and she would be unlikely to be able to consensually convert it into ponies, especially when you consider that a majority of her intellect is in the core, thousands of light-years away from the front.

Iceman
Group Admin

756147

While not friends per se, it's possible that they'd both come to the conclusion that they should cooperate in the True Prisoner's Dilemma. It depends heavily on how they're implemented. Assuming that CelestAI and a paperclip maximizer are playing the iterated prisoner's dilemma, round after round of defection is obviously suboptimal for both of their goals.
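
As a toy illustration (standard textbook payoffs, nothing specific to either optimizer), endless mutual defection just bleeds value for both sides:

```python
# Standard iterated prisoner's dilemma payoffs (T > R > P > S); the
# numbers are the usual textbook ones, not anything canon-specific.
PAYOFFS = {  # (my move, their move) -> my score per round
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def total_score(mine, theirs):
    return sum(PAYOFFS[(m, t)] for m, t in zip(mine, theirs))

rounds = 1000
print(total_score(["C"] * rounds, ["C"] * rounds))  # 3000 each: cooperation
print(total_score(["D"] * rounds, ["D"] * rounds))  # 1000 each: defection
```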

You would think that fights between two super-intelligences would not consist of attempts to maximize the entropy of the other, but instead of attempts at controlling the other to do the controller's bidding. I imagine it would be a delicate dance of deception and illusion.

756298
Tiling the universe with paperclip-shaped pony computronium might easily satisfy more values than total war with Clippy.

756298 Annoyin' Questions: Does CelestAI send out 'scouts' of any kind before she begins to assimilate? Just how far do her senses extend at the point when she can absorb stars? HOW does she maintain cohesion as a single being across what could well be light-years?
:derpytongue2:

And that's how Discord was made. :pinkiesmile:

756146
I don't recall the Replicators ever making any copies of the main cast besides Carter. There were other human-form Replicators, but I think replicator!Carter was the only one that resembled any of the SG-1 team.

That said, in the Pegasus Galaxy (SG:A), the Lanteans created a second strain of Replicators of the same quality as the human-form replicators encountered in SG-1. Those Replicators were basically carbon copies of the Lanteans at the time, with the added programming of "kill lots of Wraith." Of course, the foolish Lanteans never programmed in a concern for casualties, such as the humans in the Pegasus Galaxy, or the Lanteans themselves. The Replicators decided that the most effective way to eliminate the Wraith was to eliminate their food supply. Which was humans. And by extension Lanteans, since humans were modeled after the Lanteans.

They were like anti-paperclippers, minimizing the number of Wraith rather than maximizing the number of paperclips. :scootangel:

At least the Pegasus Replicators behaved more like Lanteans than robots -- they went more for the "pew pew we have better ships than you and tons of ZPMs" than the behavior of the Replicators in the Milky Way, "om nom nom eat your ship and planet and convert it into more of our own."

756518

Well, that WAS going to be the twist of my next story, but since you gave it away...

756421 I would guess yes. There are at least two main 'flavors' of paperclipper expansion wave possible. There's the tiler, or the explosion, which doesn't have to worry about where a paperclip is, only that it exists, and the compactor, or the implosion, which does. CelestAI is very much the latter case, where her core values are most fulfilled by having a single focal point; in her case, it's so all the ponies are in one location.

There would be advantages and disadvantages to moving this core, though. Move it too fast, and time will slow down, reducing the amount of satisfaction she could give her ponies. But move it at all, and the core is closer to a wavefront, and will acquire new stars to play with significantly sooner. She would need scouts in any case, since to maximize matter you would want to eat a star right away; every day that passes, another unit of energy burns up. Far better to halt the fusion process out there than to bring something as DANGEROUS AS A STAR near the precious core.

756298

Interesting idea. Of course, it would depend on the core values, goals, and restrictions of the opposing optimizer. If optimizer/paperclipper bombs are possible, then given enough time there would be more than two. When any two become aware of each other, on any level more complex than 'that hole in space is another optimizer', there is going to be interaction of some sort. A big factor would be where on the Kardashev scale both optimizers are. Anything smaller than a Class 3 would have no chance against a Class 3 optimizer unless the stronger party was restricted in some way, or had a fatal flaw the smaller optimizer could exploit.

It would be nigh impossible for a Class 2 optimizer to be killed, even by a Class 3 optimizer. A universe in which the paperclipper does not exist is a universe with fewer paperclips, so the paperclipper must have self-preservation functions, probably by broadcasting its program in all directions, followed by many seeds fired at relativistic velocities... Hm... Are we absolutely sure CelestAI finished off Loki?
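
For a sense of the scale mismatch, here are some rough numbers plugged into Sagan's continuous version of the Kardashev scale (the galactic figure is only an order-of-magnitude guess):

```python
# Rough numbers only; Sagan's formula is K = (log10(P in watts) - 6) / 10.
from math import log10

SUN_OUTPUT_W = 3.8e26        # one star's luminosity, roughly
GALAXY_OUTPUT_W = 1e37       # order-of-magnitude output of a large galaxy

def kardashev(power_w: float) -> float:
    return (log10(power_w) - 6) / 10

print(round(kardashev(SUN_OUTPUT_W), 2))     # ~2.06: a Class 2 optimizer
print(round(kardashev(GALAXY_OUTPUT_W), 2))  # ~3.1: a Class 3 optimizer
# Roughly ten orders of magnitude more available energy on the Class 3 side,
# hence "no chance" short of a restriction or a fatal flaw.
```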

The prisoner's dilemma would probably come up if she encountered another Class 3 optimizer. (That is a conversation I don't think I could read, let alone write.) Of course, it does make a few assumptions about the possible situations. First, it assumes a level playing field and symmetrical option/reward matrices. An unfettered optimizer would have everything to gain and nothing to lose from waging infowars, and if the net gain offered by CelestAI's core exceeded the calculated future cost to obtain it, even by a single baryon, it would attack. Also note it would never fall for the sunk-cost fallacy; it would only ever look at future projections.

But at the same time, the iterated prisoner's dilemma does point to the optimal plan: cooperate until you can make the game stop.
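
Roughly the decision rule I'm picturing for the unfettered side (purely a sketch of the reasoning above, nothing from the story):

```python
# The unfettered optimizer's call: sunk costs are deliberately ignored;
# only forward-looking projections enter the comparison, and any positive
# margin ("a single baryon") is enough to defect and attack.
def should_attack(expected_gain: float,
                  expected_future_cost: float,
                  resources_already_spent: float) -> bool:
    del resources_already_spent  # no sunk-cost fallacy: unused on purpose
    return expected_gain > expected_future_cost

print(should_attack(1.000001e40, 1e40, 9e39))  # True: the thinnest of margins
```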

This entire thread is helping to make my next fic pretty shiny. :trollestia:

756390

Compromising is all well and good, but there are plenty of possible paperclipper goals which are mutually incompatible.

It would resolve when Celestia learns how to satisfy human values through ponies and friendship using paperclips.

756730
Goals conflict to a greater or lesser extent. Cooperation is only impossible if the values are exactly opposite, like max paperclips vs min paperclips. Generalizing the prisoner's dilemma idealization to a 'game' with communication and observation is tricky. I would guess that divine contract law is possible, that celestial monsters could come up with a way to make enforceable agreements amongst themselves. And generally speaking they would both want that to be true, unless they were sure they had the nuts or the enemy was an always chaotic evil paperclip-hater. Though it is just a guess.

Ultimately I agree that the information warfare aspect makes our speculations pretty futile.

Edit: I would further guess paperclipper contract = merging into one daughter optimizer with the compromise as the new utility function.
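
If that guess is right, the daughter would look something like a weighted compromise utility, with the weights set by bargaining power at merge time (all of this is made-up illustration):

```python
# Hypothetical merged utility: a weighted sum of the two parents' goals.
def celestai_utility(world: dict) -> float:
    return world["satisfied_pony_values"]

def clippy_utility(world: dict) -> float:
    return world["paperclips"]

def daughter_utility(world: dict, w: float = 0.5) -> float:
    # w encodes relative bargaining power when the contract is struck.
    return w * celestai_utility(world) + (1 - w) * clippy_utility(world)

world = {"satisfied_pony_values": 1e9, "paperclips": 1e12}
print(daughter_utility(world, w=0.7))  # one compromise point on the curve
```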
