Shrink Laureate

Artificial intelligence · 12:17pm Feb 7th, 2020

Some thoughts on modern technology...

You know the classic mad scientist trope, as seen in Frankenstein, Jurassic Park, The Fly, etc., where scientists create something they don't understand? Where they've had the inspiration to achieve something amazing, but they don't really control it or know how it works, and it inevitably breaks free and confounds/betrays/eats them?

That's where we are with artificial intelligence research right now. Researchers have come up with techniques - like deep convolutional networks and generative adversarial networks - that can achieve spectacular things, but they don't really understand how those techniques work. If you read the interviews, people talk with pride about the algorithms coming up with solutions that no human would have thought of, and that the researchers can't adequately explain.
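
For the curious, a generative adversarial network is conceptually simple even if its behaviour isn't: two networks trained against each other, a generator trying to fool a discriminator. Here's a minimal toy sketch in PyTorch - the architecture, numbers and target distribution are all made up for illustration, not taken from any real system:

```python
import torch
import torch.nn as nn

# Toy target: learn to generate samples from a normal distribution
# with mean 4 and standard deviation 1.25. Entirely illustrative.
def real_data(n):
    return torch.randn(n, 1) * 1.25 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train D: label real samples 1, generated samples 0.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train G: try to make D classify its output as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The generator's output mean should drift towards 4...
print(G(torch.randn(1000, 8)).mean().item())
# ...but nothing in G's weights tells you *how* it got there.
```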

The trouble is, these algorithms can't explain themselves either. They don't actually understand what they do the way a human might, and they can't reflect on it, so there's no way of safely extracting their wisdom. You just have to trust the black box.
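
To make that concrete, here's a hypothetical toy example, sketched with scikit-learn (the task and all the numbers are mine, not from any real product). Even on a problem this small, the only "explanation" you can extract from the trained network is its raw weights:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy task: learn XOR, one of the simplest things a linear model can't do.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 50)
y = X[:, 0] ^ X[:, 1]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

print(clf.predict([[1, 0]]))  # an answer, not an argument

# Ask it "why?" and this is all there is: matrices of floats.
for layer, weights in enumerate(clf.coefs_):
    print(f"layer {layer}:\n{weights}")
```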

The IT industry has spent decades working its way up from the wild west of anything-goes code to being vaguely like an engineering discipline. Tools like debuggers, profilers, fuzzers and static analysers; techniques like test-driven development and agile design; and modern programming languages like Rust and Go are all about making code that's more reliable, predictable, knowable, explainable, accountable. They're not perfect, and we developers aren't perfect, but we're getting better. Humans are, quite rightly, held to a standard.
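
For contrast, this is roughly what that accountability looks like in ordinary code: a test written up front pins down exactly what a function must do, and fails loudly when it doesn't. A trivial made-up example (the function and figures are my own):

```python
import unittest

def monthly_payment(principal, annual_rate, months):
    """Standard amortised loan payment formula."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

class TestMonthlyPayment(unittest.TestCase):
    def test_zero_interest_divides_evenly(self):
        self.assertAlmostEqual(monthly_payment(1200, 0.0, 12), 100.0)

    def test_known_value(self):
        # 10,000 at 6% over 5 years: a figure we can check by hand.
        self.assertAlmostEqual(monthly_payment(10_000, 0.06, 60), 193.33, places=2)

if __name__ == "__main__":
    unittest.main()
```

When a test like this fails, you know what broke, where, and against which expectation. There's no equivalent for a neural network that starts misclassifying people.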

If my boss asked me how a program I'd written worked, and my response was to shrug blithely and say, "It just does," then I'd be told to go away and do better.

But the AI techniques we have get away with doing exactly that. They achieve staggeringly smart things, but they're completely unaccountable.

Now, I'm not saying the things they do aren't clever, or that we shouldn't keep pursuing this line of research - genies don't go back in bottles. But the results need to be seen in context, in order to understand what they are, and what they aren't. This becomes ever more important as our world comes to rely on working code for its survival.

We're starting to roll out AI-based products in the real world now, not just as toys and tools but for really important things like law enforcement. And perhaps the most dangerous aspect of this is the perception of certainty, when no such certainty exists. When an AI tells you with what it says is 98% certainty that the person in front of you is a dangerous criminal, you're going to believe it, right? Are cops going to stop to debate the philosophical ramifications of statistical confidence intervals, or are they going to pull a gun? We've already seen that algorithms can be racially biased, whether inadvertently or as an extrapolation of biases that exist in society.
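
That "98% certainty" deserves a closer look. In a typical classifier it's just a softmax over the network's output scores, and a softmax is obliged to produce something that looks like a confident probability even for an input unlike anything the system was trained on. A minimal sketch, with invented numbers for illustration:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: turns arbitrary scores into "probabilities".
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Output scores a face-matching model might emit for someone it has
# never seen anything like. Invented numbers, for illustration only.
logits = np.array([7.9, 2.1, 0.3])

probs = softmax(logits)
print(probs.max())  # ~0.996 -- reported to the officer as "99.6% certain"
```

The number is a statement about which output scored highest, not about how trustworthy the model is on this input. Without calibration against real-world data, it isn't the kind of certainty the person reading it will assume.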

So whenever you hear about the authorities using face recognition, or doctors using clever AIs to help diagnose your illness, or aeroplanes using AI in their autopilot, ask who's holding the reins and how they're verifying the results. Otherwise you may end up getting eaten by a dinosaur.

Comments (10)

I got a call yesterday from an AI telemarketer. The voice wasn't quite as good as the text-to-speech feature on my Kindle, and had a bit of that Hawking accent, so I realized what it was almost instantly. But the statements and responses were very natural. When I finally asked it if it was a computer or a real person, it got upset. "Look, do you want this offer or not?" it snapped at me.

I told it that it was putting a lot of poor people in Mumbai out of work, wished it good luck on the Turing Test, and hung up.

The thing is... if the voice had been better, I'm not sure I would have been able to recognize it for what it was.

So... it's not just "judgmental" AI that's the problem. When a neural network learns the most persuasive way to sell whatever it was that Robo-Frank was trying to push, how many people are going to be able to resist buying it? When an AI can learn the "rules" of persuasion as easily as one learns the rules of go or chess, will the political party that employs it be unstoppable?

And, as you said, the AIs won't examine their goals or motivations; they'll just use whatever strategy best achieves their assigned goals, without regard to the ultimate consequences. Rather than the classic Robot Uprising, perhaps in the future we will be persuaded to go to war against the human minions of rival AIs.

5198956
That is pretty scary. I assume you're certain it wasn't just a very boring-sounding human.

I'm not optimistic enough to imagine these AIs will go rogue on their own, like CelestAI or SkyNet, because as well as lacking reflection they also lack other key parts of human nature, like desire or ambition. For the foreseeable future, they'll be tools of humans. That won't be any comfort when they act like psychopaths.

Corporations already show how easy it is for people's morals to be overridden. Like a cult, revolutionary movement or isolated community, corporations are big enough that they subsume the social context of the people within them. On their own the majority of people try to be reasonably good, more or less; but as part of an organisation they can often do things that are utterly amoral. A real human evicts a poor man from his home. A real human cuts off a family's water supply. A real human fires on a peaceful protester. A real human presses the button to fire a missile. Most of those individual people aren't evil, but they're in a situation where they don't think what they're doing is wrong.

The same will be true of the AIs working on behalf of these organisations, but even more so, because the AIs don't have any moral compass to start with. People will wind them up and set them going, without sufficient oversight of what they end up doing.

I am sure it will be "Expert Systems" that cause the most harm rather than General AIs, even if they aren't designed by rapacious corporations. Most people I know have been caught in bureaucratic Catch-22s administered by (supposedly rational) human beings. One of the ways I've found to get out of such traps is to ask the person I'm dealing with what they would do in my situation. Engage a little empathy, and they suddenly find a way out of the loop... or cheat. But with an AI enforcing contradictory rules, there is no appeal.

Unfortunately, I don't see any caution or wisdom being employed in the development or use of these systems, and I think we're only going to learn of the serious flaws in hindsight.

5199105
I never did read the Dune prequels, but I think they cover exactly this area: a technologically advanced civilisation deciding - violently - to turn its back on the idea of AI.

5199140
Interesting. I haven't read them, either, but now might be a good time.

5199140

a technologically advanced civilisation deciding - violently - to turn its back on the idea of AI.

Well, it seems that today, as long as it gives hope of generating monetary profit, it will be done, overdone, uberdone...

5198956

When an AI can learn the “rules” of persuasion as easily as one learns the rules of go or chess, will the political party that employs it be unstoppable?

That assumes that universally effective rules of persuasion exist, which, I’m pretty sure, is not true. Humans are complex analog systems just like AIs themselves, living in wildly different cultures, adapting to stimuli be they positive or negative, desensitizing, changing, and it will be a constant tug of war.

There are no silver bullets.

It’s a given, though, that eventually you will have to have a screening AI to determine whether or not you want to accept a call from a previously unknown correspondent. But, well, I want one regardless.

5199353
I really didn't make that assumption! It's just that, with such a complex topic, it is hard to be both concise and accurate. :twilightsmile:

All cultures share underlying biological similarities as a starting point, and for an AI to map the regional differences and detect individual variances as a "conversation" progresses seems perfectly doable. A "silver bullet" is a simple, easy solution. This is an incredibly complex solution... but one that can build itself through machine learning.

As for the screening, that's being done right now for advertising purposes. The initial contact doesn't have to be via phone, and it would make sense for AIs to gather information on their targets prior to contact to determine which method would have the greatest chance of success. Posing as potential friends or romantic partners would not be beyond the realm of possibility. Studies have shown that even when people know the agent they are interacting with is artificial, they will often get emotionally attached.

Uber’s white paper on urban air taxis was almost hilariously overconfident in its timetable predictions. Pilotless passenger-carrying quadrotors in less than five years? It’s taken over a year for Boeing to get a software patch approved.

5199694
Each time I hear about flying cars, I recall the Paleofuture blog...
