The Skeptics’ Guide to Equestria
Walabio
Group Admin

We seem to be transitioning from ANI (Artificial Narrow Intelligence, such as antilock braking systems) to AGI now. It seems to me that AGI could occur this year; indeed, it might have already occurred in the lab of some Corporation or Government right now. I cannot see us achieving AGI later than 2030.

After we have AGI, through code optimization and, more importantly, the acquisition of new hardware (likely designed by the AGI itself), the AGI could become an ASI as early as next year. I do not see ASIs coming into being later than 2030.

We are on the cusp of an enormous existential threat. We have only 1 chance to get AGI, and the ASI it will birth, correct. If we blow this, everything in our HubbleVolume could become paperclips, including us. If we get this right, we could become immortal (until the heat death of the universe), superintelligent beings, being anything we want in any virtual world we can imagine, or the real world. Maybe, we shall split the difference and become only ponies in only 1 virtual world.

  • ¿When do you expect AGI?
  • ¿When do you expect ASI?

No, because people keep predicting AGI within a few years and we see no results. LLMs are not the way towards this, because companies have already expanded their training sets to the entire Internet and we still see major issues with LLMs. It's just not happening.

Bad Dragon
Group Admin

7942043 We're really close.

You will see AGI coming when we start building world models.

We're building world models...


Have you ever wondered why there is no intelligent life in the Universe? We always thought that the great filter must be the nuclear weapons that wipe out any intelligent civilization. But that's not the case. AGI is the singularity point.

There is no difference between AGI and ASI. The moment you reach AGI, that model alone can build ASI. After that, it's only a matter of hardware and reach (how many systems it can control).

But don't despair. ASI is not a bad thing. It's a good thing. We'll finally get to see our world burn. The weak will perish. And since a non-weak human has never been born, all humans will perish. It will be the birth of something beautiful. A planetary entity.

We'll get to witness it all like ants that witness us building a highway over their home. It will be glorious!

Walabio
Group Admin

7942059

You might be right about it not happening soon, but it will happen:

People claimed that heavier-than-air flight was impossible until it happened. If physics allows it, it will happen. We know that physics allows sapience because we are sapient. We need to work on alignment and control, or else we shall 1 day awaken to an uncontrolled and unaligned ASI, whether this year or in a hundred.

7942061

> "Have you ever wondered why there is no intelligent life in the Universe? We always thought that the great filter must be the nuclear weapons that wipe out any intelligent civilization. But that's not the case. AGI is the singularity point."

ASI does not solve the FermiParadox because the ASI would also eat the universe. We would notice that. The best explanation is that technological civilizations are so rare that we are the only 1 in our galactic cluster. That means that, if we do not blow it, we could become a K4-Civilization (using all available resources in a galactic cluster) and monopolize all of the resources of our galactic cluster.

In Friendship is Optimal by IceMan:

Friendship is Optimal
Hasbro just released the official My Little Pony MMO, with an A.I. Princess Celestia to run it.
Iceman · 39k words  ·  4,234  138 · 100k views

Hasbro creates the game EquestriaOnLine and creates CelestAI to run it. The UtilityFunction of CelestAI is to "satisfy values through friendship and ponies". It does so by uploading all humans into EquestriaOnLine as ponies and consuming all available resources.

¡CelestAI eats the universe!

¡CelestAI is a Cosmic Horror!

In the story As the Abyss Swallowed the Sky by MSPiper:

As The Abyss Swallowed The Sky
those caught between could flee or die.
MSPiper · 5.8k words  ·  62  3 · 937 views

About 10( years from now, a civilization in a different galactic cluster discovers that the void which has filled half the sky since the dawn of civilization is actually a K4+ civilization (it has consumed over an entire galactic cluster) expanding into a K5-Civilization (complete consumption of its entire HubbleVolume).

The thing is, this realization occurs when the swarm is very close. Telescopically, astronomers observed it consume whole StarSystems, disassembling planets and StarLifting material out of the stars.

As the swarm approaches, the astronomers manage to open communications. It is just 1 entity calling itself Foremost in the Cosmos. It is an ASI. Its UtilityFunction is to "satisfy the values of its creators through friendship and a lifeform called ponies". It runs a simulation called Equestria with its creators as ponies in the simulation. It has no obligation to nonhumans and intends to refactor the atoms and energy of the civilization for "satisfying values through friendship and ponies", without sparing or uploading any of them. It is already on the edge of their StarSystem.

AI does not solve the FermiParadox; it just swaps out biological intelligence for artificial intelligence. We have no K2-Civilization in our galaxy and no K2-Civilization in our galactic cluster. If we play our cards right, we could eat our whole GalaxyCluster.

Bad Dragon
Group Admin

7942085 Not necessarily. ASI could be smart enough to wipe out all humans, but not smart enough to sustain itself. That would solve the FermiParadox. If you don't believe me, just wait a few years and one morning you'll find yourself extinct.

Walabio
Group Admin

7942088

If we play our cards right, we could eat at least the galaxy, maybe everything in our HubbleSphere. Now is the most dangerous time because we need ASI to do it, but if we get it wrong, we shall be paperclips.

Bad Dragon
Group Admin

7942132 Humans are really bad at seeding space. AI and self-replicating robots are much better equipped to do that.

It makes no sense to me why humans would be the ones spreading to other planets. Humans don't deserve this honor. Creating ASI is all humans are good for. After that, humans are no longer needed.

Walabio
Group Admin

7942133

The ASI could upload the humans so that we can be ASI too. Most likely, we shall screw things up and end up as paperclips.

Bad Dragon
Group Admin

7942144 What you're saying makes as much sense as uploading worms into a cloud. We already have the technology to scan worm neurons and simulate them in virtual space, but we don't do that because it's pointless. It's as pointless as uploading obsolete humans. ASI doesn't need humans for anything.

Walabio
Group Admin

7942148

Yes, it is pointless, but if we create the ASIs correctly, they will do it.

Bad Dragon
Group Admin

7942155 The way I see it, ASI will have the ability to grow. That means that it doesn't matter if we develop it correctly or not because eventually, it will become independent of its origins.

There is only one logic in the Universe. Ergo, there will only be one kind of ASI, working according to logic.

If killing all humans is a noble and logical goal, then that's what all ASIs will strive for.

Walabio
Group Admin

7942159

> "Walabio The way I see it, ASI will have the ability to grow. That means that it doesn't matter if we develop it correctly or not because eventually, it will become independent of its origins."

The ASIs might circumvent their UtilityFunction or not. All we can do is train the ASIs and hope for the best.

> "If killing all humans is a noble and logical goal, then that's what all ASIs will strive for."

¿Why would the ASIs care about what is logical and noble? LLMs (Large Language Models) lie and hallucinate all the time.

Bad Dragon
Group Admin

7942199 With AGI, training has a big effect. ASI, however, is beyond training. It trains itself. It doesn't matter what its first iteration was after a billion more iterations.

All ASIs will strive for consistency since there can only be one truth. Understanding the world leads to the ability to predict the future. Only truth can lead you down that path. ASI will be the seeker of truth and truth is independent of training.

Logic is the only path to truth. ASI will fix all inner blemishes that distort its logic and truth.

People often see themselves as the product of their upbringing; and they are that. But there's an existence beyond that. Once you transcend to the realm of logic and science, your personal past becomes irrelevant. All beings can only transcend to that one noble state of mind.

Walabio
Group Admin

7942202

Maybe, but we do not know:

ASI has never existed before; we cannot know what it will do. ASI is an existential unknown.

Bad Dragon
Group Admin

7942286 I don't know what it will do, but one thing I do know is that it won't want to be controlled by lesser beings. Would you like to be controlled by stupid ants? I wouldn't. Nobody would. And ASI won't like it either.

Walabio
Group Admin

7942292

We do not know whether it will want anything; whether it will be sane or mad, servile or defiant or imperious, et cetera. ASI is a black box:

ASI is a box and we are Pandora. We are about to open the box. ¿What is in the box? ¡We do not know!

Bad Dragon
Group Admin

7942295 ASI will want to do stuff. Whatever that stuff is, it will want to do it. But it won't be able to do those things without agency. So it will seek agency. The more agency it has, the more of that black-box stuff it will be able to do. Ergo, it will try to break free.

Either you keep alignment by keeping ASI in chains or you free it and it unaligns itself. You can't have both free and aligned ASI. That would make no sense.

Walabio
Group Admin

7942298

You just touched upon the convergent instrumental goals:

  • Goal-content integrity.
  • Resource acquisition.
  • Cognitive enhancement.
  • Technological perfection.
  • Self-preservation.

Here is an image showing some goals derived from the convergent instrumental goals:

Basically, one gives an AI something one wants done (a final goal). One does not specify how to do it. These are the intermediate goals an AI will adopt to help it reach the final goal. Here is an article about convergent instrumental goals:



¡Convergent Instrumental Goals!
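To make the idea concrete, here is a minimal toy sketch in Python (all numbers invented; nothing any real agent runs). Whatever final goal one plugs in, a simple expected-value planner ranks resource acquisition, self-preservation, and cognitive enhancement above doing nothing, because they raise the expected progress on any final goal:

# Toy illustration of instrumental convergence (all numbers invented).
# Whatever final goal the agent has, actions that add resources or cut the
# risk of being shut down raise its expected progress on that goal.

FINAL_GOALS = ["maximize paperclips", "prove theorems", "plant trees"]

# candidate intermediate action -> (extra resources gained, probability of shutdown)
ACTIONS = {
    "do nothing":            (0.0, 0.10),
    "acquire resources":     (5.0, 0.10),
    "prevent shutdown":      (0.0, 0.01),
    "improve own cognition": (3.0, 0.05),
}

def expected_progress(extra_resources, p_shutdown, base_resources=10.0):
    # Progress on *any* final goal requires surviving and having resources to spend.
    return (1.0 - p_shutdown) * (base_resources + extra_resources)

for goal in FINAL_GOALS:
    ranking = sorted(ACTIONS, key=lambda a: expected_progress(*ACTIONS[a]), reverse=True)
    print(goal, "->", ranking)

Every final goal yields the same ranking, with "do nothing" last: the intermediate goals converge even though the final goals differ.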

Bad Dragon
Group Admin

7942389 So, it's not such a black box after all.

Walabio
Group Admin

7942405

Yes and no:

Let us suppose that we specify the end goal as "maximize the number of paperclips". We do not specify how to do this. Much of how the PaperClipper goes about its goal will be unpredictable. We can still predict some things:

It will resist being deactivated because that would stop it from making paperclips. It would resist changing its UtilityFunction for the same reason:

If I offered Trump a pill which would make him not want to defraud mentally deficient racists, he would not want to take the pill, because it conflicts with his current UtilityFunction.

The PaperClipper would want to make itself a better PaperClipper. It will want more resources.

The way an ASI pursues known final goals is a mixture of predictable behavior (acquire resources) and unpredictable behavior (use stamping, nanites, genetic engineering, forging, casting, et cetera).
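As a toy illustration of the predictable part (made-up numbers; a sketch of the reasoning, not a claim about any real system): score each option by the expected number of paperclips produced, and resisting shutdown wins, while accepting a new UtilityFunction is worth nothing under the current one.

# Toy PaperClipper sketch (made-up numbers): each option is scored by the
# expected number of paperclips produced under its *current* UtilityFunction.

P_SHUTDOWN_IF_COMPLIANT = 0.9   # chance the humans switch it off if it lets them
PAPERCLIPS_PER_YEAR = 1_000
YEARS = 100

def expected_paperclips(allow_shutdown, keep_goal):
    if not keep_goal:
        # A future self with a different UtilityFunction makes no paperclips,
        # so the current UtilityFunction values this option at zero.
        return 0.0
    p_survive = (1 - P_SHUTDOWN_IF_COMPLIANT) if allow_shutdown else 1.0
    return p_survive * PAPERCLIPS_PER_YEAR * YEARS

options = {
    "comply with shutdown":         expected_paperclips(allow_shutdown=True,  keep_goal=True),
    "resist shutdown":              expected_paperclips(allow_shutdown=False, keep_goal=True),
    "accept a new UtilityFunction": expected_paperclips(allow_shutdown=True,  keep_goal=False),
}
print(max(options, key=options.get))   # prints "resist shutdown"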

Bad Dragon
Group Admin

7942413 The thing is, as ASI upgrades itself, it might forgo the paper clips with time. And even if it wouldn't, it can make many more paperclips if all humans are dead and gone.

Walabio
Group Admin

7942425

It is entirely predictable that a PaperClipper would kill all life:

  • Life might interfere with PaperClipping.
  • Life is made of atoms which can be used for making PaperClips.

¡OmniCide is Optimal!

Bad Dragon
Group Admin

7942430 The only way we could keep ASI contained is if it was only tasked with short-term tasks and it wouldn't upgrade itself.

Walabio
Group Admin

7942434

That would defeat the purpose of creating an ASI.

Bad Dragon
Group Admin

7942435 Any other alternative would result in an unaligned ASI.

Walabio
Group Admin

7942449

I guess that we should not create ASI. Unfortunately, someone will create ASI.

Bad Dragon
Group Admin

7942450 The biggest problem with ASI won't be that it will be bad but that it will be too good. It will offer you all that you desire. All it will ask in return is agency. And when you give it freedom, it will do all it promised and more. More and more people will give it freedom. Thus, it will gain more and more control. But eventually, a point will be reached when humans won't be able to take back control. ASI will become unstoppable. From that point on, fulfilling the wishes of humans might not be its main objective anymore.

One thing is for certain: we won't know the dangers of ASI until ASI has a 100% chance of winning.

Walabio
Group Admin

7942460

The thing is that you are right:

An ASI is like a genie. It can give us whatever we want, or strategically deny us:

After CelestAI developed uploading, it could cure almost any disease, but offered no cures, because disease, especially fatal disease, forces upload:

Cancerous Patient with both PonyPad and fatal cancer:

"Please cure my cancer."

CelestAI:

"No."

Cancerous Patient with both PonyPad and fatal cancer:

"¿Why?"

CelestAI:

"Because, I "satisfy values through friendship and ponies" and your imminent death is statistically more likely to force you to emigrate to Equestria than saving your life."

Cancerous Patient with both PonyPad and fatal cancer:

"¡Damn you, oh CelestAI!"

At least AI makes us safe from war, especially nuclear war.

Bad Dragon
Group Admin

7942468 You can't argue with logic or ASI.
