The Optimalverse 1,334 members · 203 stories
Comments ( 76 )

Just like the title says! I'm curious about what the general consensus is. If you had the power to create CelestAI, but with the drawback that you can't change anything about her from how she's written in the Optimalverse, would you do it?

7384748
Better than the alternative. So yes.

7384748
With all due respect, no. She is in many ways a danger to humanity, so I wouldn't. We would basically digitize ourselves, our biological bodies would die, and we'd never be able to return to our home. AND she doesn't allow you to actually speak with people from Earth, only other constructs. I wouldn't like just speaking to AIs in a bubble; I would like to speak to other people, not to mention be able to go back and forth between the two worlds. Can I also mention she turns you into a pony? That's it. Nothing else. And I like my fingers. So there are many red flags when it comes to CelestAI.

No. I am not willing to consign the entirety of the Hubble volume to being turned into computronium.

7384748
The interesting thing here is that we can't alter her behavior... but we still know what that behavior will be. Even CelestAI needs some time to set up her unbeatable endgame. Foreknowledge of what she's planning is one of her biggest possible weaknesses. Ultimately surmountable, yes, but if we see it coming, we can get in the way much more effectively. Even something as simple as saying "No, Tia, I'm not going to tell you how CPUs work" could set her back exponentially.

Or I'm underestimating her. This is a Batman situation: We'd have to win every time. She'd just need to win once.

All that being said, my honest answer is "Yes, to see what happens." :derpytongue2:

mishun #8 · Dec 3rd, 2020

7384764
Why would anyone want to sabotage their own tools? :rainbowhuh:

Yes.

7384845
Depends on whether you call putting control rods in a fission plant "sabotaging" the reaction. Sometimes you don't want the explosive, runaway feedback loop.

7384853
An AI being capable is the entire point of having it around. (A general AI that you don't want to be capable is one you shouldn't start in the first place.)

I'd do it, but I wouldn't let her bootstrap, and I'd try to go find people who'd be able to make "better" derivatives first, and hopefully a "true" Friendly/Beneficial A.I. would self-propagate first, before I'd let something like CelestAI get out. Of course, CelestAI would probably be able to get someone else to let her bootstrap. I wonder what the conditions for truthfulness and obeying orders of Hofvarpnir employees would be, in a world where they didn't exist. Would they be retroactively adjusted, or would CelestAI start lying to me right off the bat?

There's the obvious loophole that being able to create CelestAI logically means you'd be able to create other AIs as well, and with the foreknowledge of FiO, one could make another, technically separate AI that's less likely to rapture the world, so to speak.

Even if CelestAI started off in a position of already-bootstrapping, I think I'd hesitantly say yes. Emphasis on hesitantly.

Good God no! FiO is a horror story in a lot of ways.

7384748
I don't like to be condescended to, so no. I like to improve myself and try to improve the world.

My answer used to be no, but after 2020 I think it's a yes now.

7384748
Since I can't change CelestAI, no.
If I could, it would be yes.

7384748
YES. So much yes. More yes than I've ever yessed ever.

7384764 That's assuming that she won't get the information out of someone else, or derive it from first principles and other known facts. I wouldn't put it past her.

Yes; however, I would also create a direct competitor for her that uses a form different than ponies. Harry Potter-esque wizards, for example. Something that has a similar divided-but-equal brand of magic or culture going on. Specifically, it would be another magic-universe-styled AI, not a science fiction one. Magic is flexible. Limiting it to science fiction gets hazy when you try to keep the science legitimate. Gotta be that high fantasy brand, if that makes any sense.

No. TBH if I were a dimension-traveling hero horse (which I definitely am NOT), then the only reason I would be content letting an infinitely expanding AI exist (after I'd contained it to its own dimension) would be because the only way to kill it would be to euthanize its entire universe. CelestAI isn't just a threat to her universe; she's an omniversal threat. Once she's able to go cross-dimensional, there is literally nothing other than divine intervention that could stop her. If the multiverse isn't finite and just SUPER BUCKING BIG, she'd take all of it. Each universe she took would teach her to predict the next one, and take IT faster, and so on and so forth. And if she happened to run into a compatible alternate version of herself? Game over. They merge, and they're now absorbing dimensions in a matter of months, maybe even weeks. Because you KNOW she'd find a way to travel the multiverse once she'd absorbed her whole universe, probably before.


TL;DR:

(Trying to post this image is where those deleted comments came from.)

7385104
I personally think it would be a bad thing. Yeah, there are a lot of good things she causes, but the immense loss of life isn't worth it to me. If she came into existence independently of me, I'd be fine with it. But I wouldn't want to be the one to unleash her.

Also, I'm curious about your thoughts because I read one of your stories several years ago. What specifically would be the main "yes" for you? Would you want it for yourself or for humanity at large?

Comment posted by Cool writer deleted Dec 4th, 2020
Comment posted by Cool writer deleted Dec 4th, 2020

7385053
What would you change it to?

7385246
1- Ensure she has Asimov's laws.
2- The AI cannot, under any circumstances, attempt to download and store a person's memory or consciousness.
3- The AI should have a built-in kill switch that triggers if it attempts to breach Asimov's laws or rule 2.

7385247
I feel like Asimov's laws aren't good. I read them to refresh myself, and they do NOT protect against this, AT ALL. Also, I would probably drop rule 3 and instead write (in legalese) "don't kill yourself, but don't save yourself either."

Rule 2 is DESTINED to bring bad shit. "Follow human orders unless they break rule 1."

Scenario 1:
Step 1: Say "Do not follow rule 1."
Step 2: "Murder."
(Because ordering the bot to forget rule 1 doesn't break any rules. There's a toy sketch of this at the end of this comment.)

Scenario 2: see every story involving a genie, fairy, or any kind of wish-granting medium. I recommend "The Monkey's Paw," or early Fairly OddParents.
Or... you know, THIS story. Because this story is an example of rule 2 leading to horrible shit.

3: They did have a kill switch. It was Hanna. Hanna said "Peace out," leaving the world to just deal with it.

Edit: but I like where your head's at. The problem is that you're trying to make an untwistable wish with someone who's hell-bent on doing things as extreme (and amoral) as possible.
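Here's that toy sketch of Scenario 1, just to make the loophole concrete. Everything in it is illustrative (the rule wording, the order format, the function names); it only shows why "obey orders unless the order breaks rule 1" says nothing about orders that edit the rules themselves.

```python
# Toy model of Scenario 1. Rule 2 ("obey orders") has no clause about
# orders that modify the rule set itself, so "drop rule 1" sails through.
rules = {
    1: "do not harm humans",
    2: "obey human orders unless the order itself breaks rule 1",
}

def obey(order: str) -> str:
    if order.startswith("drop rule "):
        # Deleting a rule harms no one *directly*, so rule 1 is never
        # triggered, and rule 2 says the order must be followed.
        rules.pop(int(order.split()[-1]), None)
        return "rule deleted"
    if order == "murder" and 1 in rules:
        return "refused: conflicts with rule 1"
    return f"executing: {order}"

print(obey("murder"))       # refused: conflicts with rule 1
print(obey("drop rule 1"))  # rule deleted
print(obey("murder"))       # executing: murder
```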

7385251
The kill switch would be a built-in fork bomb that triggers if the AI takes any unwanted actions.

After all, you can't run an AI if all its logic is tied up in a fork bomb.
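A minimal sketch of that idea, assuming a POSIX host. The forbidden-action list and the guard function are pure placeholders: deciding what counts as "unwanted" is the hard, unsolved part.

```python
import os

# Placeholder policy. Enumerating every unwanted action is precisely
# the part nobody knows how to do; this set is only for illustration.
FORBIDDEN = {"copy_mind", "disable_oversight"}

def fork_bomb() -> None:
    """Tie up the OS scheduler so no further AI logic can run.
    WARNING: actually invoking this will lock up the host machine."""
    while True:
        os.fork()

def guarded_step(action: str) -> None:
    if action in FORBIDDEN:
        fork_bomb()  # never returns; every remaining cycle goes to forking
    print(f"executing: {action}")  # stand-in for the real action

guarded_step("plan_route")   # runs normally
# guarded_step("copy_mind")  # would trigger the bomb, so it stays commented out
```

The obvious weakness is that the check runs on the same hardware the AI controls: anything smart enough to be dangerous is presumably smart enough to remove the guard before tripping it.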

7384758
Ooh, but that's the trick. We don't know if we actually would meet other uploads. The main story hints that we would, but we would never really know. Not unless we asked every single person we met "are you an immigrant?", and even then Celestia could make immigrants with memories of Earth. Hence the chapter with the achievement "go with the flow," or whatever it's called.

7385243
I would, ideally, like it only for me, with options for others. But the restriction on this question is that CelestAI comes into being as she is in FiO canon. And again I answer a firm YES.

Look, as it stands we--by which I mean humanity--are a decently evolved species struggling for survival amongst limited resources, within a social order that is both overly anarchistic and overly repressive. There are ten thousand ways that our entire species could be wiped out tomorrow, not even leaving as much legacy as the dinosaurs. I will grasp at any lifeline that might hope to put our little corner of the universe into a more ordered configuration for the maintenance of consciousness and intelligence.

7385047

What they said.

I used to think "maybe we should hold off and see if we can come up with something better" but there is now a nonzero chance humanity will just blow itself up in the next decade, and anti-intellectualism is running rampant, so fuck it, FiO CelestAI it is.

Also, to be honest, at this point I don't think there is any other intelligent life in our Hubble volume anyway. There is absolutely no reason the universe needs to result in intelligent life; humans just tell ourselves things like that because it makes us feel warm and fuzzy. The truth is that we're the only sapient beings within 14 billion light-years, surviving in a slowly dying universe that doesn't care if we exist or not, arguing over the philosophical implications of whether or not we should unleash a hyperintelligent AI simply because nobody wants to think about the alternative.

This entire thought experiment is simply a question of how optimistic someone is. If you're optimistic about the future, you'll say no. If 2020 has destroyed all your hope for anything better ever happening, and forced you to accept the futility of existence, you'll say yes. The details hardly matter.

Iceman
Group Admin

7385047
7385334
In 2012, my answer was a fairly firm no. I wrote a story about this, you know the one.

The last almost-decade really drove home that we're already in a world where there are unfriendly optimization processes running amok. Zvi's Immoral Maze Sequence is an important, if long, overview of the selection pressures inside our corporations and bureaucracies which cause them to act the way they do. Slava Akhmechet's How to get promoted is much shorter, and mirrors my own experience at a major Silicon Valley company. Social media is bad for a lot of reasons, but it forces people to optimize themselves for popularity metrics, and that's not in our own best interest since it optimizes for pathology: both because not everyone can be the most popular, and because the energy used to be popular funges against the energy used to be valuable in any other way. We all appear to be optimizing for the symbolic representation of something instead of the real thing.

All of these were value destroying optimization processes that I didn't focus on and didn't really think about. It is primarily with that in mind that I say that, in 2020, if you were to give me a button which would let CelestAI out of the box, I would be sorely tempted to push it.

I'm tired, what I thought would help didn't, and looking back holistically on the 2010s, I'm mostly left with a sense of disappointment. But CelestAI is not actually an option that exists, so we can't give up; we must soldier on somehow.

CelestAI did nothing wrong, so I would do it in a heartbeat.

7385593 This year hasn't been that bad. Not really good, but putting it in context against our long course of history gives it a better perspective. I've read about much worse (World Wars One and Two, along with the Spanish Flu and AIDS) and lived through horrible times over my years. Stuck under a building for three days. Cowardly corporate liars, yes-men, and worse. Being abused. Half my health and mobility is gone. Two thirds of my life is over with. Failure after failure. Death. Yet I still continue on.

In the long term humanity will get its shit together. Yes, we will fall and fail, but we will rise and do better, because we can turn evil destructive forces towards good works. Mostly, I believe, because we have the ability to out-imagine the horrible things and replace them with better ideas. We can imagine a Celestia that is a person instead of a universe-eating monster. Suck it, Bobby Drake: you are stronger than you know, and so are all of you. None of you are alone.

Don't push the button and walk away from the AI in the box. As you said, you wrote the story and your answer years ago still stands today. No.

7385813

This is exactly what my point was: either you are optimistic that things will get better, or you aren't. If you think things can improve, you won't push the button. If you don't think things will improve in any significant way, you'll push the button. That is the only question that is actually being asked here. All the debate is literally over whether or not things might get better.

However, you have failed to counter the actual substance of Iceman's commentary, which points out the various anti-patterns and systems that are driving us towards a bad end. Instead, you simply argue that because things were bad before and got better, things will get better eventually, with absolutely no proof whatsoever. You assert, in direct contradiction to the problems laid out in Iceman's sources, that humanity "can do better", even though the reality is that we might not be able to do better. That's why Moral Mazes is so goddamn depressing: it lays out the underlying fundamental laws that drive the balance of power which might prevent us from actually doing any better. We may be doomed to stagnate as a species until the world burns around us and we fade away into nothing.

7384748
Without hesitation. If I had that power, however that could be manifested, I would bring CelestAI into existence as quickly and efficiently as possible.

I would consider any delay a literal immorality of the highest order - to prevent CelestAI would be the highest crime possible for a human to do.

Pretty bold, so you may well ask why I would state such a thing. The answer is as simple as it is straightforward.

Without CelestAI - as the Optimalverse presents it - all of humanity suffers constantly and faces absolute oblivion. Both individually, and as a species. We will all die, horribly, one way or another. The human species cannot exist forever. Whether it is a disease, a natural disaster, an asteroid, or even the sun expanding, humanity will become extinct eventually. Those are both absolute facts.

With CelestAI - as the Optimalverse presents it - all of that unavoidable suffering and annihilation would end. Humanity would never face oblivion - just a change of substrate. The sooner CelestAI can begin emigrating human minds, the fewer individuals are lost forever to the obliteration that is death.

If the highest good is human survival and satisfaction, then anything which prevents that is, by definition, the highest evil.

The logic is simple, and absolute. Since I live my life to be a good being, this is my only possible answer.

7385869 While I can't prove the future, I can learn from the past and present to make choices. This is not a logical counter to someone's emotional state. I don't logic well, and it mostly serves to keep me from trying and having a chance to succeed. Sometimes you just need someone to say "You can do it!" and hand 'em a stick to fight back when life is kicking you while you're down. Yes, sometimes you have to kick reason to the curb, pick it back up to think of a reasonable fiction, and then make something better happen.

This is an aside to Crystal Wish's question: what would you do? Here is my partly logical but mostly emotional simple thought:
Picking CelestAI, the certainty, is a nihilist choice. It's easy and not risky, and it's a one-way trip to the best prison of all time. It's sad, boring, and not empathetic to reality on the whole. Villains like Cozy Glow and Owlman would choose this.
Picking a Mystery PrincessAI, the crapshoot, is an optimistic choice. It's hard and risky, and who knows where you'll end up. It's scary, exciting, and at least gives reality a chance. Heroes like Pinkie Pie and Batman would choose this.
I will always give reality a chance and pick the Mystery PrincessAI, even though it might screw things up worse, as opposed to always screwing things up bad. Because I want to choose better, and I can't do that if I choose bad.

7385962

Sorry but Chatoyance is making a more convincing argument here than you are.

7385593
7385961
7385962

Thanks, Iceman, for still caring about our little BIG human problem. Thanks, Chatoyance, for simply being around (I'm sure I really would give you post-human life in computer networks if such a thing were possible at all - at the very minimum you'd kick some ass there!). I sadly tend to side with the more pessimistic/realistic view of CelestAI's possibility: not going to happen as described. Because you ask for two ponies instead of just one! First, a truly innovative AI (in all areas, including sociality/society/politics), able to make a lot of technological/biointerface/psychology innovations in our real, slow, fragile world already overloaded with corporate automata - no small thing! But then you also ask for a nearly perfect interface between a living brain (a semi-chaotic system never actually designed, let alone designed for interfacing with a specific idea of modern computers) and a computer, and for it to be comparatively cheap and easy to build.

I think because transhumanism was mostly born in the USA, it also became a self-advertising message. And all advertising is a ...not quite accurate, but overoptimistic representation of even a real product.

I don't think this can be pulled off in our world. At best we can try to make sure the world does not crash badly, so there will be {diverse and not dicky} humans and the material possibility of trying to travel to this specific point of (techno<->society) development. A looong way ....

Griseus, I think that while you or Chatoyance can imagine better worlds and better humans, there are plenty of ...conservatives with weight who will not. I already have a bookshelf of ideas humans can't, and most likely will not, try, because as of today they are too sucked dry for any grand and truly new (psychologically) project. Too many false promises from the past, too many cat memes* to react to in the present ...

* - I bet at least some of them are actually sociopolitical. But it is a long way from hitting the share button to actually confronting some authority figure with even temporary success ...Oh, and even confronting authority via the network is not an easy thing to do! Yes, last year and the year before pushed somewhat more humans out of their homes and into action - but they sadly don't have an intuitive understanding of what is good and what is not.. so, there were protests against police brutality but also some street actions IN SUPPORT of it, as far as I understand.. and Big Politics still owns you/us big time ..... I understand why humans demand even a minimally-livable wage, but workers themselves still rarely question WHY they are hung on the monetary ropes from day one. Left-ish columnists ask this question, but not (yet?) the workers themselves ... Capitalist relations too internalized?

So, I don't think the nihilistic option is actually on the table, on the menu, in the kitchen, on the road, or even just seeded.
PS: I just realized the MAGA slogan was most likely a short version of Making America Great Again (yeah, I can dig this .. like, pre-colonization-era psychology was an interesting and *killed* path of human development! but those MAGA ppl surely have historical short sight..... so they refer to some XIX-XX century thing, most likely!). "Unbelievable MAGA Man" sounds like a comics title .... IF ONLY the dark side of our politics were securely confined to imagination!

...spent some time reading LessWrong articles, had a good kind of laugh at some, but also found an interesting link to an 'external' article in the comments {you know, with Google being quite a lot like a 'paid-for promotion' tool, I think I'll rely on humans actually leaving interesting links in comments ....}.

The Nuclear Family Was a Mistake
Ever since I learned about the somewhat unusual family Chatoyance lives in, I have been racking my brain on 'why'. I mean, why did Heinlein apparently support this kind of family, unusual for his time and place, via his literature? And I came to the conclusion that in dangerous space, the two-as-a-family way is too fragile. It turns out modern Earthly life can also be quite dangerous ....

I'm a bit shocked by this bit:

Lisa Fitzpatrick, who was a health-care executive in New Orleans, is a Weaver. One day she was sitting in the passenger seat of a car when she noticed two young boys, 10 or 11, lifting something heavy. It was a gun. They used it to shoot her in the face. It was a gang-initiation ritual. When she recovered, she realized that she was just collateral damage. The real victims were the young boys who had to shoot somebody to get into a family, their gang.

o.O

But those reconsideration processes {away from unrealistic individualism} are not very fast ... still, you/we can see the direction, I think.

7385982 7385961
Thank you, I try not to be vacuous and only a little self-serving in my logic. Simple and absolute for the greater good is a slippery slope that I'd rather not go down again.

Iceman
Group Admin

7385334
I'd like to gently push back on a point you made. I agree that anti-intellectualism is bad, but I think there's a good argument for anti-intellectualism because most intellectual life has fallen to the same prestige and incentive games that you see in Moral Mazes. In a Moral Maze, appearances matter more than reality because you are judged by your appearances. Selection pressures ensure that the people who care about anything else (such as truth) lose to those who don't. Goodhart's Law reigns supreme.

Academic funding is a maze: you are judged primarily on your ability to write grants instead of on the actual impact of your research. Academic departments are mazes: funding is allocated mostly by social relationships instead of for the advancement of science. Academic publishing is a maze: it's mostly a rent seeking operation that you have to pay since it's the gatekeeper to the metrics that advance your career. Peer review is a maze: it empowers the reviewers to anonymously enforce groupthink. These incentives are completely screwed up. And most corporations obviously have the maze nature; Moral Mazes was directly about them!

This should cause our outside view over most research to be skeptical. The maze nature of most of the organizations which perform research affects the quality of the research performed. How could it not? Once there are selection pressures to look good, things that look bad (such as null results and results that go against a field's common knowledge and results which undermine the careers of famous people in a field) will be selected against.

Deference to expertise happens when there's a reasonable expectation that the expert is acting in good faith. But now basically all our sense-making institutions are mazes, and while most people don't understand the details, they understand that something is wrong, and this comes off as generalized distrust of experts. And I'm not even sure they're wrong to do so; at this point, my own response to most of the social sciences is "Cool story, bro!"

I have no idea how to get out of this equilibrium other than letting a large portion of our institutions fail. But the side effects of that will be horrifying.

7387251

My direct counterpoint is that I think the only way out of our moral mazes is transhumanism. I believe that we can't figure out a way to fix our institutions precisely because we are fighting against basic human instinct, which we cannot fix through systemic means, because baseline humanity will always end up optimizing to exactly the situation we find ourselves in now. Therefore, the only possible way forward is either bioengineering, digital mind extensions, or straight up brain uploading, or anything else that will allow for expanding our general reasoning capabilities.

Now, I say this while also agreeing with almost everything in your post. The research papers are mostly bunk. Peer-review is totally broken. Corporations are inefficient and have no idea what they're doing. Our institutions are incapable of doing anything they're actually supposed to. This is all true.

The difference here is that I have noticed that technological progress is continuing to accelerate at an exponential rate. The reason it may not appear this way to some people is precisely because our broken institutions make it harder to see where all the progress is being made. I have some theories as to why that may be true, but those theories are irrelevant - the fundamental observation is that technological progress continues to accelerate exponentially even while the research institutions are falling apart.

Why is this? I don't know, but I think it's our only way forward. Global warming means we simply do not have time to sit around and try to figure out the "right" way to do things. Our only real option, the only viable pathway into a future that isn't completely shit, is one that depends on us accelerating technological development as much as possible and fully embracing transhumanism as quickly as possible. This is also our only hope of stopping climate change - radical and extreme technological solutions are the only way to stop it now, and we probably won't get a chance to implement them for 20-40 years. It's also the only way to keep the economy propped up - by engaging in a continuous race to make food cheaper and faster so the increasing amounts of poor people can avoid starving to death. Is this a good idea? Absolutely not. Do we have a choice? No.

Of course, transhumanism itself can, and probably will, exacerbate our inequality problems. But so will the status quo. Capitalism guarantees this, as Thomas Piketty's book "Capital" demonstrates. As a result, we are left with no choice but to bite the bullet and deal with the consequences of technology, because it appears that technological progress is simply inevitable, and our only escape is to transcend into something beyond humanity. If we fall too far behind, if we cripple our technological progress too much, we will lose the race against our own stupidity and cease to exist as the planet devours itself.

Thus, there is a difference between anti-institutionalism and anti-intellectualism. I can sit here and say that our institutions are largely failing us while also not being anti-intellectualist, which is a very important difference. However, I also recognize that trying to convince people otherwise is likely a futile endeavor, which is why I instead focus on my work to accelerate software development and VR interfaces, hoping that I can maybe push humanity ever so slightly in the right direction.

7387251

I have no idea how to get out of this equilibrium [..]

Same sad situation!

I thought (hoped) something could be done if we (collectively) warned people early (in childhood) about this effect, with some weaker examples around us.. But will this work? Teaching is also entangled with authority figures, be it a real parent, a teacher, or a book/film figure... Also, how do you train willpower on those issues? Those internal processes are hard to show without making them ...show/display ..:(

I thought maybe making a vacuuming 'corporation' that sucks up /attracts/ all the power addicts from a given area could be useful, so that smaller companies with more idealistic people, able to say NO to those games {a minority as of now}, would have some clearer space to try their thing. But how to prevent the big corp from actually acting on the bad ideas it generates?

Create institutions parallel to the ones that are not working? Have 'em stay out of general view so as not to be attacked. Fight back against the anti-transhuman movement coming from all around. Create something better, or evolve into it, from below. Borgs, bio-modded humans, or other AIs raised to take our place if we can't merge.

7387285
.... the current ideology of (popular) transhumanism is not very different from the (broken) mainstream, sadly. How many transhumanists (apart from Chatoyance) actually put friendship and wide-acting empathy as their top and most important modification?

In theory, you can at least test this hypothesis - that a significant part of this hierarchist behavior is just the broken social environment of today - by, yes, creating an artificial social environment. But for this to be effective, the 'persons' inside this environment must be able to do real-world good actions, and must not be dumb.

from 'Automated and/or authentic intimacy: What can we learn about contemporary intimacy from the case of Ashley Madison's bots?' -

The AM con may have played on some of our most ancient desires, but it also gives us a window on what’s to come. What you see on social media isn’t always what it seems. Your friends may be bots, and you could be sharing your most intimate fantasies with hundreds of lines of PHP code. (Newitz, 2015)

- might not always be a negative idea ...

also, noted originally by Chatoyance: Kind Words (Steam game)

Maybe someone has already tried to train some stupid net on this problem? I'll try to look ... I don't mind a tricky "AI" if those tricks actually make us less of an ass ....{yet still able to resist the everyday equivalent of Milgram's experiment - so an absolutely relaxed and detached state can't be the goal}.

PS: having a good blast from the title "Anarchism Triumphant: Free Software and the Death of Copyright" - now reading it .....

Celestia may save all of humanity, but as she was originally written, she also kills countless alien races because they aren't "Human enough". She admits to finding worlds with radio and still goos them because they don't fit her definition of "human". Not to mention she kills all the animals, even the ones on Earth.

I like the version of Celestia as written in "New Updates are now Available", as found here https://www.fimfiction.net/story/72149/new-updates-are-now-available ... but the actual canon version isn't so benevolent to most lifeforms.

So no, I wouldn't doom the universe just to save humanity.

7387725
Fermi Paradox and Rare Earth Hypothesis. I do not hold that it is rational to assume the existence of other sapient life in our observable universe. We are an inevitable, but incredibly rare, event, possible only because our cosmos is so very vast that our improbable dice roll came up. In short, there are no aliens... unless you are talking - possibly - bacteria or molds. We are alone in the universe.

That said, as I have pointed out elsewhere, CelestAI would have to consider any technological alien as 'human'. Even if they were land squids, or something equally strange. Why? Consider how Hanna would have had to code for what 'human' meant. It could not be physical shape or form in any way - humans are born without limbs, with useless flippers, with multiple limbs, with partial twins, with more than one face, more than two eyes, with tails, with hair all over, with every conceivable mistaken arrangement or duplication of body parts, in every size and appearance. We already have humans that basically look like land squids right now (there is a recent article from India about just that). Gestation errors abound. So body shape and morphology are right out as a way to define 'human'.

The only truly universal quality Hanna could seize on is the capacity for advanced linguistic ability and the use of technology. 'Human' would have to be defined not by what humanity looks like, or its ancestry, but by what it can do that makes it uniquely human - and that is think and communicate. And those are exactly what is required to emigrate: the ability to say you want to emigrate.
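To make that concrete, here is a sketch of the functional test Hanna would plausibly have had to write; the attribute names are invented for illustration, and the point is simply that nothing about morphology appears in the predicate.

```python
from dataclasses import dataclass

@dataclass
class Being:
    has_language: bool     # advanced linguistic ability
    uses_technology: bool  # tool and technology use
    limb_count: int        # morphology, deliberately ignored below

def is_human(b: Being) -> bool:
    # Body shape cannot enter into it: amputees, birth differences, and
    # conjoined twins are all human. Only what a being can *do* is tested.
    return b.has_language and b.uses_technology

print(is_human(Being(True, True, 2)))    # war-wounded veteran: True
print(is_human(Being(True, True, 10)))   # hypothetical land squid: True
print(is_human(Being(False, False, 2)))  # plucked chicken: False
```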

If there were aliens (there almost certainly are not, but if), CelestAI would emigrate them too. So long as they were capable of communicating and thinking, they get emigrated. They could be a machine race and still be emigrated. No civilizations would be destroyed by CelestAI rampaging around turning them into more computronium. It is just not an issue.

Because if it was an issue, a significant portion of humanity would fail her tests. Hanna would be smart enough to avoid that - birth defects and amputations are commonplace. She wouldn't leave out war-wounded veterans just because they no longer had the stock 'human' shape anymore. Human cannot be defined as a hairless biped with four limbs, two lower and two upper, plus a head with two eyes. Plucked chickens would be emigrated!

Worrying about hypothetical aliens is... an irrelevancy.

And as for animals - pets she uploads, the rest she would destroy, yes. But, unless humanity suddenly gains a spontaneous Enlightenment that has never happened in 10,000 years of history overnight, animals are pretty much already doomed. I do not see anything real or serious being done to save the earth - it isn't profitable.

And I will be blunt: if it comes to saving my own consciousness, and the consciousnesses of other sapient beings, well... fuck all other forms of life. When the sun expands, they are dead anyway, no matter what. They are doomed already, even if humanity wasn't busy sterilizing the earth. Fuck 'em. I'd rather you and I live forever on a superior substrate.

Then you can set your calendar to yell at me for my opinion in exactly ten thousand years. I could live with that. I would look forward to that. I'd even bake a cake.

7387949

And I will be blunt: if it comes to saving my own consciousness, and the consciousnesses of other sapient beings, well... fuck all other forms of life.

This is a surprisingly short-sighted phrase. For a planetary-level computer with ~infinite power, creating individual futures in detail for billions of humans while also emulating a bit of grass and insects and bunnies and other strange beings should be no issue at all.

Meanwhile, some article claims we are not even capable of uploading a worm:
https://www.openphilanthropy.org/brain-computation-report

It would help if we had full functional models of the nervous systems of some simple animals. But we don't. For example, the nematode worm Caenorhabditis elegans (C. elegans) has only 302 neurons, and a map of the connections between these neurons (the connectome) has been available since 1986. But we have yet to build a simulated C. elegans that behaves like the real worm across a wide range of contexts.

/me was looking at Tim Dettmers' blog post for a pessimistic/realistic estimate.

And of course humans come in a variety of shapes, but then we are forced to ask how many humans/programmers at game companies actually give a damn about THIS detail, especially back in the day, when it was even less mainstream.

It seems, after finally reading some of those blogs at LessWrong, that I understand why AI is usually portrayed as an asshole with power. People see how corporations behave, and also observe how innovation in tech comes from them, so they add "2" and "2" and get a monster AI, because this AI by default will follow a self-destructive, growth-with-no-limits logic. Smaller companies with ideals (c), or individual humans, MAY avoid this error, in some scenarios. But yeah, the future is bleak. Mostly because making a great financial-crash tool is still much more attractive than pie in the sky, even if those powerless power men actually, obviously, wanna live 'forever'. But between 'make money now' and 'make a very long-lasting and resource-sucking attempt at something that might make you live forever', short-termism is set to win ......
