The Optimalverse 1,334 members · 203 stories
Comments ( 12 )

So, I read a bit of LW again ...

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

oh, well ...

There also was newer one in Time:
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

While I do not think we are in for a "fast takeoff" (for reasons stated in the comments: physical reality and manufacturing tend *not* to work at the speed of thought, and they impose some constraints), I still think slowing down/pausing in many spheres of life is actually much needed, because 1) how can one say humans are in control if we can't even apply the brakes? (a hint that we are already sandwiched between the slow "AIs" of corporations and bureaucracy, per Stross) and 2) running a growth economy when an era of harder-and-harder-to-get energy lies ahead does not sound sane.

I tend to read articles like the ones put out by resilience.org
https://www.resilience.org/stories/2023-03-28/post-carbon-institute-looking-back-looking-forward/

or The Baffler (a political magazine), or relatively high-quality comments, because
it's sort of hard to enjoy fiction while reality tends to be like this :/

Iceman
Group Admin
Iceman #2 · Apr 3rd, 2023

It's really hard to take Yudkowsky seriously these days.

The entire Yudkowskian FOOM scenario that Friendship is Optimal depicts just doesn't look likely. The entire deep learning space is unlikely to FOOM. In retrospect, the update to make on AlphaGo was to moderately reduce P(doom), because it showed how reliant you are on a reward model.

Meanwhile, it's not like he had a good effect on his former followers. In retrospect, MIRI and CFAR were comically incompetent and really did function like a cult. I've heard so many horror stories come out of that scene now that I regret pointing people towards it, even indirectly.

I think it goes without saying that violent apocalypse cults form at the drop of a hat. I don't think you can be blamed, Iceman.

You wrote a good science fiction story. If people do crazy stuff tangential to a related thematic trope, that's not on you.

That said, it is understandable that people are getting riled up. With generative algorithms finally hitting their mass-adoption phase, this signals a massive paradigm shift for human society over the coming years. I'd be lying if I told you that I wasn't a tad nervous about the changes this may entail.

I'm scared. But that's OK.

Something that the current version of artificial intelligence has opened my eyes to is just how seductive the concept of slavery is. It's easy to be all "owning people is wrong!" but if I had a computer that I could just tell to do something and it would just know how to get it done? I could make so much money. I could simplify my life so much.

that must’ve been what the slave masters were thinking back in the day.

7820540

So, I read a bit of LW again ...

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

oh, well ...

In the linked document, Yudkowsky says (an excerpt):

My lower-bound model of "how a sufficiently powerful intelligence would kill everyone, if it didn't want to not do that" is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. (Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?" but one hears less of this after the advent of AlphaFold 2, for some odd reason.) The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second".

Eliezer's lower-bound model involves using DNA to build proteins that build diamondoid structures?
(doubt image)

I do hope the field can move past Yudkowsky; no one person should see themselves as that important. But for now there seems to be a great deal of work on capabilities and very little on alignment. It doesn't really matter if there is no FOOM, if we can only make the machine do what we tell it, not what we want. And then there is the harder problem of knowing what we want.

We've even seen the beginnings of self-modification and autonomous agency.

So far it doesn't look like too much to worry about, but then I think of the classic XKCD:

If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

This take is just next-level bizarre. "In order to prevent a hypothetical chance of AI destroying humanity, we must risk a nuclear war, an event which is guaranteed to destroy humanity." And I am not even talking about how this kind of treaty will inevitably be misused by Great Powers as a justification for imperialism (remember the Iraq War?).

I'll be honest: when it comes to AGI, I consider it being aligned with the current power structures (large tech companies and billionaires) a much more dangerous and much more realistic possibility than a hypothetical "rogue AGI". So much so that a part of me thinks we should intentionally build a benevolent paperclip maximizer (like CelestAI), since that's still a much, much better scenario than what we are actually heading toward right now.

I think the particular vector of molecular nanomachines is very unlikely to work.

If you back off on that one aspect and just run with the general idea, that an AI built around a more coherent, stronger LLM with some other add-ons, fed back into itself… while we haven't figured out how to actually control it effectively… would be fatally dangerous? THAT is a significant threat. You really don't need molecular nanotechnology to just kill everyone. Connecting those two ideas is one of EY's worst mistakes.
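
For anyone who hasn't poked at these agent setups, here's a minimal sketch (purely my own illustration in Python; `call_llm` and `run_tool` are made-up placeholders, not any real API) of what "an LLM with add-ons fed back into itself" boils down to:

```python
# Illustrative sketch only: an LLM-driven agent loop with hypothetical helpers.

def call_llm(prompt: str) -> str:
    """Stand-in for a call to some large language model."""
    return "search the web for X"  # canned answer so the sketch runs

def run_tool(action: str) -> str:
    """Stand-in for whatever 'add-on' (search, code execution, etc.) was requested."""
    return f"(pretend result of: {action})"

def agent_loop(goal: str, max_steps: int = 5) -> str:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # The model's own previous outputs go into its next prompt --
        # that feedback loop is the "fed back into itself" part.
        action = call_llm(history + "What should be done next?")
        result = run_tool(action)
        history += f"Action: {action}\nResult: {result}\n"
    return history

if __name__ == "__main__":
    print(agent_loop("do something useful"))
```

The whole danger argument is about that loop running with real tools and no effective oversight, not about any particular payload like nanotech.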

7820788

a nuclear war, an event which is guaranteed to destroy humanity

Eh, not even close, unless you stack the most radical assumptions together, like soot persisting in the stratosphere for decades to centuries, and Putin and Biden simultaneously going nuts and deciding to nuke every city on Earth. And even then, humanity is known to have survived an ice age...

7820859
Yeah, sure, we wouldn't all die. However, getting a 21st-century equivalent of the Bronze Age Collapse wouldn't be fun either - back then it took people 300 years to recover, and that was when the societal structure was "a collection of city-states" and the technological level was "we just figured out ironworking". The modern globalized world, with all of its technology and basic societal functions heavily reliant on pre-built, maintained infrastructure and complicated intercontinental supply chains, would take much longer than "just" 300 years to rebuild.

Honestly, in my opinion, we're at the point where it is more dangerous to stop. Sort of like the Manhattan Project: there was some concern that they would accidentally set the atmosphere on fire, but the risk of letting our adversaries get there first was considered equally bad. I think the best thing we could do right now is to run blindly ahead and hope for the best, because that is what the people who want to align their AI with themselves rather than with humanity are doing.

I admit, I am terrified. I am afraid of a totalitarian surveillance state like China getting AGI first. I'm terrified of a poorly thought-out US military project getting there first. Hell, even among the so-called good actors, I am afraid of companies like Google and OpenAI using AGI to permanently entrench society's biggest problems, like wealth and power inequality, or trying to handicap their AGI in an attempt to control it. We are developing a tool that could be used to create paradise, and I am afraid that it won't be used to do so. There's also the fear that AGI will come too late, and that I'll be among the extraordinarily unfortunate group that will be the last people to die before mortality is solved.

I make no secret that the future I want is Friendship is Optimal with bugfixes (ponies optional, and caring about all minds, not just human minds) or The Culture. Living until the heat death of the universe in a world built for me, surrounded by the people I care about? Yes, please. I'd probably rail against the heat death of the universe as well, but that is so far away that it might as well be forever before it becomes a problem. But the possibility of getting a utopian future is looking more and more precarious, and the idea of halting AI research doesn't actually help.

I suppose I could tl;dr this as "Yudkowsky can bite me; keep researching." If we accidentally make an AGI that kills us all, it's not like climate change wasn't going to get us a few decades later anyway.

7820872

Yeah, sure, we wouldn't all die.

That (returning to the original point) kinda doesn't look like humanity being destroyed, even under such an extreme hypothetical (realistically, anything in the southern hemisphere getting nuked is very unlikely, and maybe the same goes for other places, like North Africa, the Arabian Peninsula, Japan, etc.).
