The Optimalverse · 1,331 members · 203 stories
Comments (5)

We seem to be transitioning now from ANI (Artificial Narrow Intelligence, such as antilock braking systems) to AGI. It seems to me that AGI could arrive this year; indeed, it might already exist in the lab of some corporation or government right now. I cannot see us achieving AGI later than 2030.

After we have AGI, then through code optimization and, more importantly, the acquisition of new hardware (which the AGI itself will likely have designed), the AGI could become an ASI as early as next year. I do not see ASIs coming into being later than 2030.

We are on the cusp of an enormous existential threat. We have only one chance to get AGI, and the ASI it will birth, correct. If we blow this, everything in our Hubble volume could become paperclips, including us. If we get this right, we could become immortal (until the heat-death of the universe), superintelligent beings, able to be anything we want in any virtual world we can imagine, or in the real world. Maybe we shall split the difference and become only ponies in only one virtual world.

  • ¿When do you expect AGI?
  • ¿When do you expect ASI?

The thing is, AGI isn't simply a more advanced version of ANI; it's a qualitatively different system/approach. ANI predictive language models have nothing approaching consciousness, and absolutely no idea what their output means. They are just imitating the shallow surface aspects of language and imagery. As far as I know, there is hardly anyone working on consciousness-modeling at all, because there's no significant projected ROI for such systems.

I think we will get much more sophisticated and convincing ANI models very, very quickly, but since they will be able to do almost everything the market could want from an AGI, the pressure to develop one will actually decrease as time goes by.

From a pure research point of view, there will still be AGI development, but I don't expect to see anything approaching a stable model until mid-century at the absolute earliest.

As for ASI, I think a true AGI would immediately qualify for that label based on processing speed alone, but aside from semantics, "shortly thereafter" would be very likely.

I'm gonna throw out a wild one.

AGI might never arrive, not because artificial intelligence is impossible, but because general intelligence is not definitively known to exist.

"But wait! I'm a general intelligence!" Says the little voice in my head.

My response to the voice: That's exactly what a glorified language engine made of fat would say. You poor, poor, pitiable ignoramus.

I propose that the implicit assumption underlying the supposed existence of present-day general intelligence is fundamentally flawed, and that the question of artificial general intelligence is therefore predicated on a false dichotomy!

The middle of this argument is left as an exercise for the reader: Picture yourself as a pony in a box which is 5 meters by 5 meters by 5 meters. The North Wall is covered in abstract shapes of unspecified complexity. Look at the shapes until you are confident that there is no pattern. Proceed into the corner, where you will read your copy of There Is No Antimemetics Division by QNTM, a mysterious story about one Marion Wheeler, who fights monsters which may not be perceived. Monsters that, though the eye may see, the mind shall never touch. Proceed to the North Wall.

Repeat after me: "There are no patterns on the North Wall"

Are you confident?

Let's say it again: "There are no patterns on the North Wall"

There, much better.

Returning to the topic: the pony (i.e., you) does not know whether there are patterns on the North Wall. They may see the wall and be aware that they detect no patterns. But they may never know whether there are patterns which they are not able to see.

The pony may posit that they are a general intelligence, because there are no patterns that they are unable to see. Thus proving the fallibility of equine logic.

Now how does this relate to AI?

First: AGI may not be possible.

Second: if AGI can exist, it would not be possible for a human-level intelligence to distinguish it from an ANI which happens to cover human intelligence as a subset of its capability.

Third: because humans are the ones who serve as the ultimate judgment function of the giant training network which is modern artificial intelligence technology, AI may only be pulled in a direction which the humans training the network can distinguish (see the toy sketch after this list).

Final implication: AI are not becoming more general. They are becoming more human. We're just blind to the difference.
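
To make that third point concrete, here is a toy sketch in Python (purely hypothetical; the names and setup are mine, and no real lab trains anything this way): a random hill-climb in which the only training signal is a stand-in human_judgment function. Any property of the output that the judge cannot perceive can never be selected for.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "hello world"  # stands in for "whatever humans can recognize"

def human_judgment(text: str) -> int:
    # Stand-in for the human rater: it can only score surface
    # features a human can distinguish (here, matching characters).
    return sum(a == b for a, b in zip(text, TARGET))

def mutate(text: str) -> str:
    # Randomly perturb one character of the candidate output.
    i = random.randrange(len(text))
    return text[:i] + random.choice(ALPHABET) + text[i + 1:]

candidate = "a" * len(TARGET)
for _ in range(5000):
    challenger = mutate(candidate)
    # The judge's score is the ONLY signal; the loop is blind to any
    # property of the text that the judge cannot distinguish.
    if human_judgment(challenger) >= human_judgment(candidate):
        candidate = challenger

print(candidate)  # drifts toward whatever the judge rewards, nothing else
```

The search never "understands" anything; it just climbs whatever gradient the judge exposes, which is the sense in which the result becomes more human rather than more general.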

7942175

That's exactly what a glorified language engine made of fat would say.

The old free will debate rears its head! (Quite appropriately, I admit.) Reflexes such as jerking one's hand away from a hot surface are not under intelligent control, but does that mean that the rest of the system (the brain) is likewise automatic? Some people say yes, but I think the most obvious answer is that it's a mix of "free will" and pre-programmed* actions.

As for AI systems, since the development is highly goal-oriented, I think you're right, they are becoming more human, with one important caveat: the end-user (an actual human) is intended to stand in for the brain's executive function in the AI system. Getting an AI to make decisions for itself is easy. Getting it to make logical and correct decisions is probably as hard as disabusing a Flat Earther of their idiocy.

The enormous task ahead for AI development is getting the AI to understand that there is a specific conceptual object separate from the images or words it is asked to depict/describe. When an AI system can reliably create a video of any given rotating complex object that remains consistent throughout the sequence, and, when asked to repeat the sequence with the "same" object, can reliably do so, then true AGI will be possible. It's not going to be soon, because that is a problem an order of magnitude more difficult than simply producing a photo-realistic still image or a text stream that will convincingly sound like an average human.

----------
* Or taught actions. Gymnasts and martial artists can program in reflexes by dint of effort and repetition. Eventually, those reflexes are no more under conscious control than the ones that are hard-wired.

I think this is hard to say.

The best we can do now is to build a language model that can predict the next word from the previous ones. It does look like real people speaking, but it has nothing to do with consciousness. The model itself isn't changing at all, so it is unable to learn and grow. And if we keep following this path, we won't get anything better than an assistant.
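
As a concrete (and very much toy) illustration of "predict the next word from the previous ones", here is a tiny bigram model in Python; the corpus and names are mine, and a real LLM is enormously more sophisticated. The point it shares with the big models: once the counts are built they are frozen, so the model cannot learn or grow from anything it later outputs.

```python
import random
from collections import defaultdict

# "Training": the model is just a table of observed successor words.
corpus = "the pony sees the wall and the pony sees no pattern".split()

counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev].append(nxt)  # record every word seen right after `prev`

def next_word(prev):
    # Sample a next word in proportion to how often it followed `prev`.
    followers = counts.get(prev)
    if not followers:   # dead end: this word was never followed by anything
        return "the"    # fall back to a common word
    return random.choice(followers)

word = "the"
output = [word]
for _ in range(8):      # generate fluent-looking but mindless text
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```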

We need a new kind of model for AGI, but that takes time. In fact, taking a pessimistic view, maybe there will never be one. Having ANI doesn't mean we will have AGI.

My English sucks, so please correct me if there is anything wrong :)
