
Lord Of Dorkness


Deep into that dorkness peering...


Science Post: Our Coming Robot Overlords · 2:45pm Oct 5th, 2015

Had a slow hour today, and realized there's a subject I've yet to broach outside Sufficiently Advanced itself. And even then, only in subtext so far.

The robots coming for our jobs.


Oh, and on a minor but important note: SA might be getting a restructuring soon to make the horrible current description go away in a satisfying manner.

Short version: instead of one massive story with three acts, I've been mentally flirting with 'cutting' it into three smaller but separate stories. It's an idea Adin independently suggested in the comments recently, which made me take it more seriously.

From your point of view the only difference for now should be the story getting a suffix on the title and a new description, but just a heads up.


Still, my thoughts on robotics and automation, plus discussion, behind the cut.


Yes. ALL our jobs. And it's not even really hyperbole.

There's simply no reason to pay a human to sit around, fantasizing about what they'd rather be doing and picking their nose for 90% of their work-day, when a robot will give 100%, 24/7/365.

For decades.

With far cheaper maintenance.

And only get better at their tasks as the tech and software gets upgraded.

It's not mean. It's not the man pushing you down. It's not even Big Brother putting its boot on your neck, to stomp forever.

It's just economics doing its thing: the most efficiency, granting the biggest profit, for the least cost.

And humans are simply not cost efficient compared to the alternative that's rapidly gaining ground.

And frankly?

I think that might, and I repeat: MIGHT, be the greatest moment in human history. The point where we are finally free of the pointless grind for survival, and where all of us, each in our own way, are free to reach our full potential.

Or, alas, the bleakest. The point where the privileged elite become like gods, while the rest of us get jack-shit for the rest of eternity, only kept around because the corporate overlords might need us.

With very little middle ground. Power usually brings more power, after all.

There's just no unringing the bell on this type of tech, as the original Luddites found out. The day people can 'just' download a car? They will. If people don't respect the thousands of hours that go into making a movie or game, why would they do any different for a 'magic' box that moves? Which is, after all, how most of us actually think about cars.

And to what I'm certain will be the automobile industry's grand annoyance, it doesn't even have to be theft. Or copyright infringement, but you know what I mean.

Sure, some people will probably always be ready to pay for the privilege of owning a real iCar instead of a copyright-free open-source 'mutt.' Or, to use the classic post-scarcity argument: there's only so much prime beachfront real estate...

After all, there are only so many beaches. No way to replicate them. There's no way that will ever lose its value!

...Or is there?

Few other genres have such a strong central question as sci-fi's 'What if?', after all. It honestly baffles me how few shows/books/movies will actually press that 'What if?' further on what they're showing. The above example aside, I'm not a huge Trek fan for that reason. For ship after ship of explorers and thinkers, there were just far too many mysteries of the week that got dumped into the Federation's spooky government warehouse.

I even remember the exact episode where the flame just died for me. Rascals, or episode #240. The ship discovers a fountain of youth, and aside from a wonderfully thoughtful scene where Picard wonders if he should simply embrace this chance and go back to the academy to also become an archaeologist, the actual episode is far more interested in whimsy~

Pointless waste of an interesting dilemma aside, I digress.

So what if, if you will, money were useless as a survival tool, but still hung around as an 'I want the shiny thing now!' greaser of the social wheels? A high score for life you can actually do things with, aside from just bragging about it?

You've probably noticed this in the background of SA already. Blake didn't fill out any travel forms or anything, despite meeting in person being seen as an eccentric indulgence even by the man himself.

He just walked out the door of his office, and by the time he'd reached the pavement, the self-driving car his position, work, clout and, yes, privileges (if I'm allowed that horrible pun) entitled him to was simply waiting there for him. None of it even struck him as worth thinking about; it just happened.

Granted, #431 is his secretary and second in command, and without her there'd probably have been at least a slip or two for a signature.

But, well...

[Embedded image, with source link, in the original post.]

Case in point regarding our future robot overlords, whom I, for one, welcome. :pinkiehappy:

Still, there's a reason I made SA post-cyberpunk bordering on pure sci-fi instead of just plain cyberpunk. Call me a sappy old optimist, but I think that on the whole (the occasional ass-hat aside), humans, and by extension humanity, are a force for good. We don't want stuff like oil leaks or the hole in the ozone layer, and accidents aside, one day we'll figure out the technology to fix those earlier mistakes of ours.

Will we make exciting, new mistakes? Will people suffer, and sadly die from them? Might they even equal, or terrifyingly enough, dwarf stuff like Chernobyl or the 'miracle cure' status antibiotics got saddled with?

Sure we will! We're only human, after all, and I doubt that spark of both brilliance and utter incompetence will ever go away.

The day we're all just brains in jars forming clusters of Jupiter Brains, I'm sure that Saturn will wipe out half of the solar system by putting 0.0002 instead of 0.00000002 into the matter transformer on the same darn galactic day that Alpha Centauri #2 figures out how to convert sexual fantasies into usable energy.

We're a bunch of naked apes that poked the one thing every other animal on Earth is terrified of until we figured out how to make more fire. Of course we're going to poke even things like grey goo with a stick, and see how we can make it wriggle around.

But there's no reason to do that without first inventing the hand-held EMP rifle. Or the arm regeneration chamber, for that matter.

We are, after all, Homo sapiens sapiens: 'The Seriously Wise Man,' as Pratchett tongue-in-cheek translated it.

A name that, frankly, and unlike him, I think is a title we've earned. It's simply one of those things you have to keep earning, or the right to that title fades.

And how we deal with a future where all the actual labor is done by automation? I think that's going to be the next big test for us, one equal to if not greater than the wonders and horrors of the atomic age. What the splitting of the atom was for the twentieth century, the affordable multipurpose robot just might be for the twenty-first.

A challenge we'd better start thinking about right now if we want to be sure we end up with a utopia instead of a dystopia, to put it bluntly.

But I'm sure we'll manage. We just need to be careful and manage the growing pains.

After all, we already know what the alternative is, and that it doesn't work.

As always, thanks for reading, and I hope you guys have found my ramblings interesting. :twilightsmile:

Comments ( 52 )

Short version: instead of one massive story with three acts, I've been mentally flirting with 'cutting' it into three smaller but separate stories. It's an idea Adin independently suggested in the comments recently, which made me take it more seriously.

I'm not really in favour. The only thing that ever actually does is give you one story that you need to look for in three different places. It doesn't really have any practical use. The story is still as long as it always was, but now it's an additional bother to keep tabs on.

Will we make exciting, new mistakes? Will people suffer, and sadly die from them? Might they even equal, or terrifyingly enough, dwarf stuff like Chernobyl or the 'miracle cure' status antibiotics got saddled with?

Sure we will! We're only human, after all, and I doubt that spark of both brilliance and utter incompetence will ever go away.

People would manage to suffer and die somehow if we happened to invent infinite food and free housing for everyone that you can keep in your shirt pocket. Change is traumatic. That's just in the nature of things. I've never considered that sufficient excuse to stop progressing, though. Eventually, the world will adjust and we will all be in a better place for it. That's how it has always eventually worked out before and I see no reason why this should be the (new, scary) thing that will break our track record on that.

Still waiting for cybernetics to become cheap, and hoping to hell the Red Revolution doesn't actually happen.

3444650

I'm not really in favour. The only thing that ever actually does is give you one story that you need to look for in three different places. It doesn't really have any practical use. The story is still as long as it always was, but now it's an additional bother to keep tabs on.

I will admit, that's a trade-off.

Splitting it does, however, let me do stuff like keep the descriptions and tags more on point than one massive three-act story can, and keep my verbose word counts a bit less on the 'abandon all hope, ye newbie that enters here!' side.

Again, I'm heavily considering it, so this is a good time to voice opinions.

People would manage to suffer and die somehow if we happened to invent infinite food and free housing for everyone that you can keep in your shirt pocket. Change is traumatic. That's just in the nature of things. I've never considered that sufficient excuse to stop progressing, though. Eventually, the world will adjust and we will all be in a better place for it. That's how it has always eventually worked out before and I see no reason why this should be the (new, scary) thing that will break our track record on that.

I 100% agree.

"I told him, sir, that fruit baskets is like life — until you've got the pineapple off of the top you never know what's underneath."
~Going Postal.

Considering the truly awful number of stories from the Cold War about how the world almost ended, I'm simply hoping we're about to learn from new, exciting mistakes instead of just repeating all the old ones. :twilightsheepish:

3444841

Still waiting for cybernetics to become cheap, and hoping to hell the Red Revolution doesn't actually happen.

That too. I think there are always going to be at least some bio-conservatives around (as well as radicals cartwheeling as fast as they can towards post-humanism and freaking everybody else out, for that matter), but I do hope it never becomes the ruling zeitgeist.

I do think the first willing amputation for the sake of augmentation is going to be quite the landmark, though. For both good and ill, nearly no matter how it actually goes. :unsuresweetie:

Short version: instead of one massive story with three acts, I've been mentally flirting with 'cutting' it into three smaller but separate stories. It's an idea Adin independently suggested in the comments recently, which made me take it more seriously.

I wasn't necessarily saying that splitting the story was needed. Doing it in acts is an entirely valid way to go. Rather, I was pointing out that the summary was spoiling Act 2 and the end of Act 1, which is still a ways off.

Don't get me wrong, I think splitting it is a good idea, purely in the interest of following the Jim Butcher writing philosophy of reducing each self-contained story down to a single 'Story Question,' which also serves as a decent summary that doesn't give away the end. But if you wanted to keep it as one gigantic story and just change the summary to reflect only Act 1, I wouldn't fault you for it.


Edit:

Splitting it does, however, let me do stuff like keep the descriptions and tags more on point than one massive three-act story can, and keep my verbose word counts a bit less on the 'abandon all hope, ye newbie that enters here!' side.

Eyuup. :eeyup:

3444858

Splitting it does, however, let me do stuff like keep the descriptions and tags more on point than one massive three-act story can, and keep my verbose word counts a bit less on the 'abandon all hope, ye newbie that enters here!' side.

The former is a solid point. That's something you can't really do easily with a single story under the way the site is currently set up. I suppose you could have interlude chapters, but that is a bit kludgy by comparison and just doesn't quite work as well. There really ought to be an easier way than groups to set that up in one single convenient place. Maybe someone can suggest that to the site staff sometime.

The second seems kind of pointlessly misleading, though. If someone wouldn't have been willing to read it all if it had been in one big piece, they probably won't be any happier with it after finding out that the story they just read was actually just the first part of three and that they won't be getting any satisfying ending. I know I've been occasionally annoyed by that with a few stories.

Considering the truly awful number of stories from the Cold War about how the world almost ended, I'm simply hoping we're about to learn from new, exciting mistakes instead of just repeating all the old ones.

Always dying, never dead;
Ever ending, never ended;
Loathed in darkness,
Clothed in light,
He comes, to end a world,
As morning ends the night.

The world is always ending, depending on who you ask. We certainly don't live in the same one I grew up in anymore, in any meaningful sense. Less so yet for my parents or (great-)grandparents. Living without progressing is really just existing. Me, I think the risk is worth it. A future where you don't have anything to look forward to except for it being the same as today isn't really much of a future at all.

as well as radicals cartwheeling as fast as they can towards post-humanism and freaking everybody else out, for that matter

I know I will. Long live the new flesh.

Well I suppose it all depends on what happens to humans once they get replaced. For the most part it seems (presently) they get kicked to the proverbial curb, without a ready means of finding income to support themselves and their families.

I vaguely remember a story in which a planet's entire industry was geared towards making AIs, or something similarly advanced. Said AIs would manage all other industries, such as agriculture, mining and so on. Everyone on the planet was given a basic living package at the expense of the company, without these people ever lifting a finger. Same standard room, limited food, limited utilities, limited entertainment, etc. If someone wanted better accommodations, they'd enter the workforce. Based on their accomplishments, they'd be given better quarters, food, transportation, etc.

It seems to be a regular case of security vs. freedom. In the story, these non-workers lived the same standard lives as long as the company remained profitable. They were subject to the company's policies, however. I can't remember what those were exactly, but I do remember feeling uncomfortable with them.

Really, I'd prefer a world where machines support humanity to greater heights rather than overtaking us.

3444883

Really, I'd prefer a world where machines support humanity to greater heights rather than overtaking us.

Isn't becoming greater than their parents what children are ultimately supposed to do? I'll admit, that was one of my favourite things about Sufficiently Advanced: the way it blurs the line between person and machine to the point that there isn't any. I doubt I'll ever be in a position to raise an AI myself, but I'd like to think I'd be able to do as much. Humanity is a state of mind, not one of matter.

I believe that, just as biological life harbors a distinction between sapience and non-sapience, after the singularity there will still be a separation between intelligent synthetics and non-intelligent synthetics. Not every machine is going to become self-aware; why would they? And as long as you keep treating the intelligent machines like people and reserving dangerous/deadly/boring work for the non-intelligent machines, we shouldn't have any pesky genocides.

At least, that's what I believe.

3444935
That singularity is never going to happen, but otherwise that really seems like it should be kind of obvious to anyone. There's a certain minimum amount of complexity to an intelligent mind. What would you ever need a sapient coffee machine for, anyway? It's not like you can't automate a hell of a lot of things without having to put an actual person in charge of every aspect of it all.

3444953
Perhaps we may never Need anything like that, but you cannot deny that humans will be striving to create an A.I. until the end of time. Will it be as intelligent as us? More? Less? We don't know, but you can't simply dismiss that there is a possibility of one being created.

3444962
I'm not, I was just agreeing with you there. Of course not every single thing will have an intelligent mind inhabiting it. After all, what would even be the point? A true consciousness is too precious to waste on petty stuff like that, especially if you already understand information-processing and dynamically problem-solving expert systems well enough to even make one. Never mind that even trying would be tantamount to child slavery. The lesser stuff would be easily delegated to sub-sapient, more basic processing devices.

3444935
3444953
3444962
3444965

More or less my thoughts exactly.

It always baffles me a bit when every bit of tech in a (non-comedy) sci-fi story has person-level intelligence. There's just no reason outside sadism for having, say, a sapient shower that can scream in horror at how monotonous its existence is.

Or, worse in some ways, the 'Talkie Toaster' or 'Muggy' route, where that one task is their only obsession in life.

I mean, if I made a person, I sure would want them to be happy and amount to more than a novelty household appliance. :raritydespair:

At least Red Dwarf and Fallout are both deeply flawed worlds by intentional design, and both Talkie and Muggy are meant as dark humor. In that light, the trope actually works. I'll grant that much.

3444976

I mean, if I made a person, I sure would want them to be happy and amount to more than a novelty household appliance. :raritydespair:

Exactly my point, really. The thought is really horrifying and revolting. People have done awful stuff to each other, but even then we've never descended to that. That's redefining slavery on a whole new level.

3444976

I mean, if I made a person, I sure would want them to be happy and amount to more than a novelty household appliance.

3444978

Exactly. I believe that the underlying motivation to create artificial life is not to use and abuse them to our whims, but rather to keep them close as companions in the lonely world. We would be creating friends and lovers, not slaves and pets.

3444985

Exactly. I believe that the underlying motivation to create artificial life is not to use and abuse them to our whims, but rather to keep them close as companions in the lonely world. We would be creating friends and lovers, not slaves and pets.

"Children of Men." That is the phrase that always pops up in my mind when I think about it. The story really has nothing even to do with this particular topic, but it's just too appropriate not to. I hope we would and could be that good, once it comes to that. It's not a question of "if," really. It's only a question of "when." I really regret not being any younger sometimes. There are so many things I'd like to see yet.

3444990
I side with the Lord (Of Dorkness) in that I believe that humans as a whole are good. Sure, there will always be the douchebags and murderers, but I don't see us as a society stooping to the level of keeping synthetic life enslaved.

It's the same way with the 'aliens on Earth' scenario, really. Whenever a movie or story is made wherein an alien or aliens show up on Earth, everyone is under the belief that the government is going to try to capture and dissect them, or otherwise keep them locked up forever. But why would they do that? We are not so barbaric as to stoop to the level of vivisecting clearly intelligent beings, not when we have tools such as MRIs and CAT scans, or hell, conversation. People seem to assume the worst of these scenarios. I simply don't think that is fair.

3444978

To act as Devil's Advocate, many of the 'every computer is both sentient and sapient' story examples I've seen come in two flavors:

#1: It's outright an anti-slavery story in sci-fi clothing, and how horrible an idea it is is the entire point. Short Circuit had traits of that, for example, with the hurdle that the US military is understandably reluctant to believe its super-secret multi-million-dollar project just got citizen rights.

#2: It's a story/universe that's dated, and hails from an age when people didn't have a clue about what computers can and cannot do. Still 'eek'-worthy, but understandable in a grandfather-clause kind of way. The rather unfortunate implications about programs in Tron are a good example of that.

There is the rarer, far less palatable type #3, though: the creators just didn't care, in or out of universe. Like, say, The Matrix, where the robots that just want to live in peace get lynched en masse, and people are shocked when they turn on humanity. Or Star Wars, where every droid turns fully sapient if left running long enough, but nobody cares in the slightest about fixing that, and they just routinely reset them all to factory zero instead.

That last type of crap? Left a bad taste in my mouth even as a kid, I must admit. :pinkiesick:

3444985

I think a case for artificial pets could still be made. I mean, I can't say I see the moral problems with, say, an Aibo or Tamagotchi 10.0 or whatever gen both are up to now.

Still, I get what you mean. Pushing that tech as far as it will go, and doing so for such good reasons, seems both the kindest and the wisest way forward.

Isaac Asimov's Robot stories certainly show what could happen if all the lower rungs of the economic pyramid were filled with machines rather than humans. Theoretically, it could lift all of humanity out of poverty by ensuring there is an actual base for everyone to live the good life rather than the pie in the sky BS of socialism or communism.

One thing I will note, though: the difference between this robot thing and the Luddites? The people displaced by industry could at least get new jobs. Replacing the entire economic structure with machines and leaving nothing for humans to do? Bloody revolution. Because if you think that 90% of humanity is just going to accept being replaced and unable to earn a living, including paying for food and shelter and taking care of their families, then you obviously don't understand humans at all.

But that's assuming the robot will be more economical. I have my doubts on that, as technology-fascinated people have been prognosticating the replacement of humans with robots as far back as the 1930s. And then it turns out that making machines capable of doing what humans can do is really hard, you guys, and only now are they getting to machines that you might actually possibly trust away from human supervision. "Trust" as in "it won't get stuck on a corner or flip over and be unable to right itself without help". And even then, only in situations with very predictable and controllable environments and circumstances.

Humans have to function on incomplete data sets. We have to; we just do not have the ability to know everything pertinent to most tasks. Thus we have the ability to "guess", to make approximations based on incomplete data, and to adjust as new information is received, either from senses or feedback. When robots make very precise movements or calculate mathematics quickly, they are doing a specific task for which they know all the data. They have a complete data set for their task (provided by humans) and thus operate very well at that task. Certainly they can make the same precise cut on metal sheets or do some very impressive assembling far faster than a human... so long as their data sets remain accurate. Move that metal sheet a little to the left, and a robot will make that same precise cut in entirely the wrong position. One object is accidentally delayed due to a machine error (and machines do in fact break), and that amazing assembly becomes a pile of spinning junk that might even come loose and damage other machines.

Ah, but those aren't REAL robots you might say! Nowadays we have super advanced computers and intelligent programming and sensors!

Yeah. Take that barista bot mentioned in the video. Oh yeah, it makes great lattes, but ask it to take out the trash. Or to clean a spot on a window a customer left behind. It's not going to be able to do it because it's a purpose-built machine with a finite set of tasks it can perform very well thanks to the fact that those tasks come with a complete data set. It knows exactly how to mix the precise amount of cream and sugar in a drink, but wouldn't even understand the INPUT to "clean a window", let alone be able to perform the task. Ask a human and even if he's never cleaned a window before he'll get the gist of it and at the very least will make an attempt.

Oh, but computer advancements will make robots capable of that! Well, I'll believe that one when it actually comes to pass. The closest thing I can remember seeing a robot brain come to being human is stuff like that Watson machine they put on Jeopardy!. It was impressive, I'll give you that, but again, Watson can do the job it was designed to do and nothing else. The humans it beat at that game walked off the stage without a preprogrammed course (ask a roboticist; this is no mean feat for one of their toys), climbed into their cars, and then drove home. Perhaps they stopped at a store on the way and picked up some milk, and chatted with the cashier about the weather. None of which that over-hyped box of circuits can do.

Not to mention, each Jeopardy! contestant studies for himself. Watson was programmed by a team of the best computer science engineers and programmers money can buy and built with the best custom equipment they could make or buy. That wasn't a victory of machine over man; that was a victory of ten (or however large the team was) super-nerds with millions of dollars in backing against a guy from Burbank. The same goes for any feat where a computer is hand-built and specially designed by teams of genius-level best-and-brightest specialists to overcome something a common human can do in an afternoon.

So we'll just make the computers better! Uh-huh. You're running into the limits of physical processing, you know, since transistors can't get much smaller than they are now. They're only a few nanometers wide at this point and the engineers can't make them much smaller before electron tunnelling makes them completely useless.

We'll program them better! Ah, this might be true. But again, you're going to have to make a machine that can function like a human, on incomplete data sets. A machine that will need to make guesses and estimations. We already have one; it's called a human. I'm willing to bet that if you do get a machine intelligence to operate like a human, it will suffer from the exact same problems as a human: slower thought, missed guesses, errors made in judgment. And then what is the bloody point of a robot if it's just like a human?

Given the technologists' track record over the last 100 years, I sincerely doubt robots will ever actually replace humans. Some economic displacement? Sure, but if that ever gets to a high enough level, you'll see massive backlash against it. The Luddites were only a small group, as a percentage of the population. If you want to displace all the menial labor jobs, that's like half the population or more. Without jobs. Hungry and desperate. "Complete sh*tstorm" ain't even going to begin to describe that. I'd like to think the engineers have a bit more brains and sense of self-preservation than that.

But then again, those same engineers think they can replace humans with something better. Somehow I don't think they understand their own species.

In the end, I remain skeptical of these claims of the singularity and artificial consciousness. Or that humans will be completely displaced and/or replaced. I'll believe it when I see it, and anything else is blowhard gas in the wind.

3445013
I've seen it done well in ways that match neither option #1 nor option #2. As much as I despise the author on a personal level, the Golden Age trilogy is an example of an AI-heavy setting done well, as is the Culture series. #3 I can almost excuse again, because it's so obviously schlock that if you take it seriously, you really only have yourself to blame.

Absolutely agreeing that Matrix is way, way more stupid than most people are willing to acknowledge, though. Every time I look at that movie, it only becomes dumber.

I honestly don't see the moral difference between owning an actual dog and creating something with the intellect of a dog, though. The result kind of comes down to the same thing: a well-treated dog is just about the happiest and most loving creature on Earth.

3445009
I've had extensive first-hand opportunity to see humanity at its worst, but even then, I would at least admit that humanity is, if nothing else, sensible. The idea of killing and dissecting an intelligent alien is moronic - a cooperative and living alien can tell you so, so much more about itself than any corpse could hope to. Never even mind the possibility of more of its kind showing up and the possibility of causing an interstellar diplomatic incident if we end up mistreating any visitors. In no realistic world could they expect freedom of movement, but I sure as hell would expect them to be treated as foreign diplomats or guests of state by any halfway intelligent government.

3445048

One thing I will note, though: the difference between this robot thing and the Luddites? The people displaced by industry could at least get new jobs. Replacing the entire economic structure with machines and leaving nothing for humans to do? Bloody revolution. Because if you think that 90% of humanity is just going to accept being replaced and unable to earn a living, including paying for food and shelter and taking care of their families, then you obviously don't understand humans at all.

It's an understandable concern, but not a reasonable one, I think. Any machine intelligent enough to replace a human would eventually demand all the same rights and remuneration as one, which makes the whole thing kind of a zero-sum game.

In the end, I remain skeptical of these claims of the singularity and artificial consciousness.

The rapture of the nerds is nonsense, no doubt. If it can be done organically, though, it can be done synthetically, and minds clearly can be done in meat. AI will happen. It's inevitable.

3445056
Exactly. We may not be letting them wander around populated cities unguarded, at least not anytime soon, but we would at least treat them like a person. It's simply stupid of a government to do stereotypical government things.

3445068
On the other hand, of course, many countries don't have governments that count as even halfway intelligent. :derpytongue2: I wouldn't want to be in ET's shoes if he lands in central Africa or South-East Asia.

3445081
True, true. And god help him if he lands in the DPRK.

3445092
Those people are literally governed by madmen. I'm honestly not sure if you can predict anything at all about what their reaction would be. Couldn't have ended up in a more pitiable shithole anyway, of course, but at least it's not Somalia.

3445081
3445092

Well, to be fair, DPRK does have the monopoly on unicorn remains.

The great leader said so and has offered no actual proof, so it must be true! :derpytongue2:

(OK, technically it was a kirin, but still quite the bold, utterly unproven claim.)

3445102
i.imgur.com/lYjL4Tv.jpg
Perhaps the same could be said of all religions.

When it comes down to it, it's a very simple, yet not at all simple, question to ask and answer: "Are robots people?" That three-word question defines everything here in the replies, no matter how it is skirted around, or even poked, prodded, or debated. It isn't directly a matter of a soul, a spirit, or a spark of something. If an intelligence of any kind is smart enough to be counted as alive in the sense of what it can think and be aware of, we need to ask this question.

And I think that, while I hope humanity as a whole might be loath to say yes on this matter, people will (if the dystopia hasn't happened first), because they'll get to help be the generation, the town, that raised this robotic child. They will see the best and worst of us, and have the true means to achieve something better than we ever did.

3445370
Ya know, I always wondered how justified people in "Blade Runner" were for treating the Replicants like shit.

3446042 I hesitate to answer in any way beyond: humanity is an arse in that movie, and in its written approach. The whole of its dystopian setting means that whatever good they might have done was never going to change the apathy of it all.

3445370
Why would you hope they say no to that? "Yes, of course they are" seems to me like it would be in every sense the most compassionate and humane answer to that question. Maybe I'm not understanding you right, there.

3446042
Blade Runner's Replicants were supposed to be, for all intents and purposes, genetically engineered psychopaths. No empathy, no real comprehension of others, no ethical restraints whatsoever. In the movie, some people read the twist to be that Deckard does what he does in part exactly because he is one of them. The immorality of even creating broken half-people like that to use as slaves is of course kind of obvious, but on the other hand, letting anyone or anything like that run around free once it does exist wouldn't be conscionable, either.

3446654
At that point, it seems a bit dangerous to use fully intelligent psychopaths to do your labor, no matter how short the life span. Plus, it's inefficient to have a pretty crappy expiration date on everything you make.
It's like mass-producing GLaDOSes or Skynets. Sure, they can do the job well, but they constantly want to kill you.

3446676
On the one hand, yes, but on the other hand, that probably also makes them uniquely suitable to certain kinds of uses. "Attack ships on fire off the shoulder of Orion" and such. Reduced emotional capabilities and needs for social interaction would probably be pretty useful in a disposable slave. There are a lot of "psychopaths" in the world, actually, a lot more than most people are aware of. Some researchers say it's an inborn condition prevalent almost on the level of homosexuality. They don't necessarily want to murder anyone - they just care a lot less than the average person once they do. They supposedly make excellent (and vicious) middle management, so you've probably worked for a few.

3446654 It's more that you have misinterpreted what I mean. Would you say the president in Fallout 3 is a person? Would you say that Mr. House is, for New Vegas? The defining aspect of 'personhood' would have folks saying no in many regards. But humane and humanity are rarely on the same page, despite their near likeness. We as a people often have very different views on what defines a person. And as a whole, humanity isn't going to agree on what that is. We aren't. We have too many social, ideological, religious, militant, and a multitude of other differences that prevent any kind of united viewpoint on what a person is.

Let me toss it a different way. I know many different people, but many individuals, flesh and blood or not, I don't consider people in that sense. And at the same time, I can look at a dog or a cat and find more of a person in them than I do in much of humanity. The defining aspect of 'personhood' is entirely subjective to perspective. Are those working in a Chinese sweatshop people? To me they are, but not to an owner seeking only his profits from the knockoffs.

The reason I say it is that I hope, before that question ever arrives and needs to be asked or answered, we get the right question. It's a matter of the right question, and of the way of thinking of the someone who asks it, more than of the fact that someone asks it at all: "Are robots human?"

Just adding in: that is a matter of language, and of how certain words shift a perception. A robot, to most folks, is a thing, not a person. We can view an android as far more human than a robot because of the unwritten connotations involved, language-wise. And that's how we can break boundaries: by changing the meaning with new generations.

3446704
Honestly, seeing that, I think your definition of personhood makes no sense at all and is also horribly, horribly biased and self-centered. You might want to read up on some of the existing philosophical considerations on the definition of personhood before you make broad, sweeping statements like that, because there's a lot you seem to be wilfully overlooking there.

3446712 You aren't asking me what a person is; you are asking what I think humanity might say about robots.

So please clarify which you want me to answer: what defines a person, or whether humanity will say no to robots being people?

3446726
No, I wasn't, but I think at this point we're not even talking about the same thing at all, so I suggest we just leave it at this before we spend hours pointlessly debating past each other just because we can't figure out what the other is even going on about.

3446732 It is better ended on a civil tone.

3446682
Sure, but I'm pretty sure said psychopathic business guy would be pretty pissed off if you start trying to enslave him.

3446750
True enough, but then again they're not literally bred for it, either. As part of the literary conceit, I'm personally just assuming that in the context of the story, there is a good and financially sound reason to even bother. If nothing else, I'm always willing to count on people making the smart business decision. Never blame stupidity for what is adequately explained by greed and such. Ok, it's not how the saying goes, but I think it's a good corollary.

3446748
It's not that I wouldn't enjoy a civil debate on the definition of personhood, because it's a topic near and dear to my heart, but if we're not even on the same page about what we're even debating, it just seems kind of fruitless. Maybe at another time?

3446759
And who thought it was a good idea to make them so humanlike it was near impossible to tell them apart besides a finicky emotion test? We have a bunch of DNA we're not exactly sure what it does. You already don't care if they die, so mess around, put in a blob of sequences you can test for. Give them a different eye color, hair color, an extra bone or animal body parts somewhere. Make them all extremely distinctive clones.

It's like you want to make Terminators who hate your guts.

3446766
In the original story, they're actual robotic androids, but I can't really answer that question for you either. Maybe it's just because a questionnaire is easier for the layman to perform than a difficult and costly medical examination. Who knows, it might be anything. Genes are not blueprints, after all. Maybe it's just presumed that they can't. I like to think that I'm a critical reader, but that's the kind of setting feature I think there's no point in even questioning. It's ultimately a moral kind of sci-fi story, not a technical one, and if that's what it takes for the story to work, I think it's excusable.

Definitely something to look forward to: will we become over-dependent on our technology? Or will we more readily advance ourselves along with our creations?
I personally look forward to advances in medical science that allow organs to be made better, or obsolete altogether. I can say from personal experience that a lab-grown kidney that is yours and yours alone would be a godsend.

There's also the opposite end of the spectrum. Will we use our robots for more morally grey work, like war bots for instance? This has been touched on by games and shows ranging from Astro Boy, I, Robot, and COD: Advanced Warfare to Terminator, The Matrix, and Mega Man. Will they be our Protectors? Caretakers? Or our Destroyers?

Then again, a lot of the examples are still thankfully in the realm of sci-fi, and we still don't have any examples of true AI. But when we reliably have robots that can cook meals, sort and dispense prescriptions, or near-instantly read an IFF tag in a live-fire zone, I'm going to start worrying about where the boundaries lie.

Also, it worries me that even now we have problems with hackers getting into vehicle systems and completely hijacking control from the driver. Imagine what would happen if there never was a driver at all and control was taken away. What if all the cars were networked and the hacker(s) had control of all of them? What if it was a bus full of kids or something?
Shit makes you think.

3446766
3446783

It's been... wow, almost two decades, I think?

Me being a weird kid aside, it's been a long time since I read Do Androids Dream of Electric Sheep?, and I've never actually gotten around to seeing the whole movie. Something just always manages to come up when I try.

Still, if I remember correctly, the implication was that the 'andies' were basically Star Wars-style clone troopers originally, rushed out to fill boots during the last years of World War Terminus. A fast growth rate, a short lifespan, being inhuman enough that nobody protested about feeding them to the meat grinder...

All those features carried over when the war ended, and basically guaranteed that they'd end up being treated as shit even if they weren't walking reminders of the conflict that nearly destroyed Earth.

The Dom recently made a Lost In Adaptation about the book and movie's differences, and it has a few more details. Details the video itself does far more justice to than my recapping them in text would.

Still, I must admit, it's far from my favorite sci-fi book, even among Dick's other works. I preferred Ubik. Dated weirdness aside, like coin-operated living rooms and psychic powers just being a thing since it's 'the future,' I'd highly recommend it.

3446806

Also, it worries me that even now we have problems with hackers getting into vehicle systems and completely hijacking control from the driver. Imagine what would happen if there never was a driver at all and control was taken away. What if all the cars were networked and the hacker(s) had control of all of them? What if it was a bus full of kids or something?

Considering that this is how airplanes work even today, and have for decades (actual pilots do the takeoff and the landing, and that's about it), I don't think this is remotely as much of a concern as you're making it out to be. Hacking isn't magic. You can fuck a remote system into doing a lot of things, but there are limitations to everything. I'd be more concerned about someone simply throwing a Molotov cocktail anyone can make in their own garage into that bus full of children.

3446855

Consider the following: http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/

Now, I know that this is on a specific model of vehicle and is being done to alert the automotive companies of the exploit. But it should also be known that the more computers and tech get shoved into cars, the easier it's going to be to break into these things and gum up their works. Yes, there are limitations to what they can do, but I can see it becoming a very real problem in the future if the development of smart devices doesn't come with smart security. I mean, through an IP address exploit the guys in the video could fuck with the AC, music, GPS, engine, wipers, brakes, steering, and acceleration from 10 miles away, on a bloody laptop. If that's being done in a place and time where people let the computer drive for them, that would scare the crap out of me.

3446871
As a general rule, though, these things aren't designed by utter incompetents the way that one was. I'm not saying this to be smug - exposing these things to remote access is something that anyone with half a brain never would have done in the first place. I'm sitting on a system that is literally hacking-immune right now because it simply doesn't have any services that could even accept or react to a remote connection even if someone tried for one, for example. For so long as I don't install any compromised software, anyway. It's really not that difficult to produce something fairly safe - no less safe than a car that any idiot could cut the brake hoses of, at the very least. We tend to forget how easy it already is to fuck with all those things, but that's a kind of perception bias. The new technology isn't really any more susceptible to that kind of thing than the existing one.

Mind you, if you put a bunch of remote access methods into a system that has no realistic need or use for them, you are of course opening yourself up to any kind of shenanigans, but that is all about the developers being a bunch of idiots, not about having highly computerized and networked vehicles as such.

After all, we already know what the alternative is, and that it doesn't work.

This is why I sit here wondering why people are ever, at all, against automation. Like, we know what it's like to have everyone employed! It's not much fun, and the lower-paid workers who do the really necessary manual-labor jobs tend to die early.

3446972
Those people tend to not realize what basic income is, or think it won't work despite evidence to the contrary. And Star Trek.

3447116
Blame the Calvinists and Puritans. The idea in Western culture that financial success and hard work equal moral righteousness is 99% those guys' fault. Because of those guys, we'll probably have a lot of people clamoring that all the social classes replaced by automation deserve to starve and suffer for now being useless. They're worse than Libertarians. At least those don't claim divine mandate.
