
Reaching out to people about the problems of friendly AI

4 points | Post author: Val 16 May 2017 07:30PM

There have been a few attempts to reach out to broader audiences in the past, but mostly on politically or ideologically loaded topics.

After seeing several examples of how little understanding people have of the difficulties in creating a friendly AI, I'm horrified. And I'm not even talking about a farmer on some hidden ranch, but about people who should know about these things: researchers, software developers dabbling in AI research, and so on.

What made me write this post was a highly upvoted answer on stackexchange.com which claims that the danger of superhuman AI is a non-issue, and that the only way for an AI to wipe out humanity is if "some insane human wanted that, and told the AI to find a way to do it". And the poster claims to be working in the AI field.

I've also seen a TEDx talk about AI. The speaker had never even heard of the paperclip maximizer. The talk was about the dangers of AI as depicted in movies like The Terminator, where an AI "rebels". The speaker's conclusion: we can hope that AIs will not rebel, since they cannot feel emotion, so all we have to do is be ethical ourselves and not deliberately write malicious AI, and then everything will be OK.

The sheer and mind-boggling stupidity of this makes me want to scream.

We should find a way to increase public awareness of the difficulty of the problem. The paperclip maximizer should become part of public consciousness, a part of pop culture. Whenever there is a relevant discussion about the topic, we should mention it. We should increase awareness of old fairy tales with a jinn who misinterprets wishes. Whatever it takes to ingrain the importance of these problems into public consciousness.

There are many people graduating every year who have never heard of these problems. Or if they have, they dismiss them as a non-issue, a contradictory thought experiment that can be waved away without a second thought:

A nuclear bomb isn't smart enough to override its programming, either. If such an AI isn't smart enough to understand people do not want to be starved or killed, then it doesn't have a human level of intelligence at any point, does it? The thought experiment is contradictory.

We don't want our future AI researchers to start working with such a mentality.

 

What can we do to raise awareness? We don't have the funding to make a movie which becomes a cult classic. We might start downvoting and commenting on the aforementioned stackexchange post, but that would not solve much, if anything.



Comments (14)

Comment author: whpearson 17 May 2017 12:35:30PM 4 points

To play devil's advocate: is increasing everyone's appreciation of the risk of AI a good idea?

Believing that AI is risky implies believing that AI is powerful. This potential impact of AI is currently underappreciated: we don't have large governmental teams working on it and hoovering up all the talent.

Spreading the news that AI is dangerous might have the unintended consequence of starting an arms race.

This seems like a crucial consideration.

Comment author: siIver 17 May 2017 08:50:11PM 0 points

Pretty sure it is. You have two factors: increasing awareness of AI risk, and increasing awareness of AI in general. The first is good; the second may be bad, but since the set of people who care about AI in general is already so much larger, the second factor is also much less important.

Comment author: whpearson 18 May 2017 06:56:06AM 1 point

There are roughly 3 actions:

1) Tell no one and work in secret

2) Tell people that are close to working on AGI

3) Tell everyone

Telling everyone has some benefits: you might reach people who are close to working on AGI whom you wouldn't reach otherwise, and the message might be more convincing. It might also be the most efficient approach.

While lots of people care about AI, I think the establishment is probably still a bit jaded from the hype before the AI winters. I think the number of people who think about artificial general intelligence is a small subset of the number of people involved in weak AI.

So I think I am less sure than you, and I'm going to think about what the second option might look like.

Comment author: madhatter 17 May 2017 01:00:10PM 0 points

Wow, I hadn't thought of it like this. Maybe if AGI is sufficiently ridiculous in the eyes of world leaders, they won't start an arms race until we've figured out how to align it. Maybe we want the issue to remain largely a laughingstock.

Comment author: username2 20 May 2017 12:23:46PM 2 points

Ok so I'm in the target audience for this. I'm an AI researcher who doesn't take AI risk seriously and doesn't understand the obsession this site has with AI x-risk. But the thing is, I've read all the arguments here and I find them unconvincing. They demonstrate a lack of rigor and a naïve underappreciation of the difficulty of making anything work in production at all, much less outsmart the human race.

If you want AI people to take you seriously, don't just throw more verbiage at them. There is enough of that already. Show them working code. Not friendly AI code -- they don't give a damn about that -- but an actual evil AI that could conceivably have been created by accident and actually have cataclysmic consequences. Because from where I sit that is a unicorn, and I stopped believing in unicorns a long time ago.

Comment author: entirelyuseless 20 May 2017 03:44:27PM 1 point

People are likely to take the statement that you are an AI researcher less seriously given that you are commenting from the username2 account. Anyone could have said that, and likely did.

But in any case, no one who has code for an evil AI is going to be showing that to anyone, because convincing people that an evil AI is possible is far less important than preventing people from having access to that code.

Comment author: hg00 23 May 2017 06:35:23AM 0 points

I'm sure the first pocket calculator was quite difficult to make work "in production", but nonetheless once created, it vastly outperformed humans in arithmetic tasks. Are you willing to bet our future on the idea that AI development won't have similar discontinuities?

Also, did you read Superintelligence?

Comment author: username2 23 May 2017 02:52:37PM 0 points

It was a long time from the abacus to the electronic pocket calculator. Even for programmable machines, Babbage and Lovelace predate implementation by the better part of a century. You can prove a point in a toy environment long before the complexity of supported environments reaches that of the real world.

Yes, I read Superintelligence and walked away unconvinced by the same old tired, hand-wavey arguments. All my criticisms above apply as much to Bostrom as to the LW AI x-risk community that gave birth to him, or at least gave him a platform.

Comment author: hg00 24 May 2017 05:47:49AM 0 points

You describe the arguments of AI safety advocates as being handwavey and lacking rigor. Do you believe you have arguments for why AI safety should not be a concern that are more rigorous? If not, do you think there's a reason why we should privilege your position?

Most of the arguments I've heard from you are arguments that AI is going to progress slowly. I haven't heard arguments from AI safety advocates that AI will progress quickly, so I'm not sure there is a disagreement. I've heard arguments that AI may progress quickly, but a few anecdotes about instances of slow progress strike me as a pretty handwavey/non-rigorous response. I could just as easily provide anecdotes of unexpectedly quick progress (e.g. AIs able to beat humans at Go arrived ~10 years ahead of schedule). Note that the claim you are going for is a substantially stronger one than the one I hear from AI safety folks: you're saying that we can be confident that things will play out in one particular way, and AI safety people say that we should be prepared for the possibility that things play out in a variety of different ways.

FWIW, I'm pretty sure Bostrom's thinking on AI predates Less Wrong by quite a bit.

Comment author: fubarobfusco 17 May 2017 03:30:56AM 2 points

We should increase awareness of old fairy tales with a jinn who misinterprets wishes.

The most popular UFAI story I'm aware of is "The Sorcerer's Apprentice".

Sticking with European folktales that were made into classic Disney cartoons, maybe the analogy to be made is "AI isn't Pinocchio. It's Mickey's enchanted brooms. It doesn't want to be a Real Boy; it just wants to carry water. The danger isn't that it will grow up to be a naughty boy if it doesn't listen to its conscience. It's that it cannot care about anything other than carrying water, including whether or not it's flooding your home."

Thing is, much of the popular audience doesn't really know what code is. They've never written a bug and had a program do something unintended ... because they've never written any code at all. They've certainly never written a virus or worm, or even a script that accidentally overwrites their files with zeroes. They may have issued a bad order to a computer ("Oops, I shouldn't have sent that email!") but they've never composed and run a non-obviously bad set of instructions.

So, aside from folklore, better CS education may be part of the story here.

Comment author: gwillen 16 May 2017 09:48:18PM 2 points

We don't have the funding to make a movie which becomes a cult classic.

Maybe? Surely we don't have to do the whole thing ourselves, right? AI movies are hip now, so we probably don't need to fund a whole movie on our own. Could we promote "creation of fiction that sends a useful message" as an Effective Career? :-)

Comment author: username2 20 May 2017 03:43:15PM 0 points

Not a reply to you per se, but further commentary on the quoted text: isn't that what the movie Transcendence, starring Johnny Depp and Rebecca Hall, is? What would yet another movie provide that the first one did not?

Comment author: tristanm 16 May 2017 08:48:51PM 1 point

The x-risk issues that have been successfully integrated into public awareness, like the threat of nuclear war, had extensive and prolonged PR campaigns, support from a huge number of well-known scientists and philosophers, and had the benefit of the fact that there was plenty of recorded evidence of nuclear explosions and the destruction of Hiroshima/Nagasaki. There are few things that can hit harder emotionally than seeing innocent civilians and children suffering due to radiation poisoning. That, and the Cold War was a continuous aspect of many people's lives for decades.

With AI, it seems like it would have to be pretty advanced before it would be powerful enough to affect enough people's lives in equivalently dramatic ways. I don't think we're quite there yet. However, the good news is that many of the top scientists in the field are now taking AI risk more seriously, which seems to have coincided with fairly dramatic improvements in AI performance. My guess is that this will continue as more breakthroughs are made (and I am fairly confident that we're still in the "low hanging fruit" stage of AI research). A couple more "AlphaGo"-level breakthroughs might be enough to permanently change mainstream thinking on the issue. Surprisingly, there still seem to be a lot of people who say "AI will never be able to do X", or "AGI is still hundreds or thousands of years off", and I can't say for sure what exactly would convince these people otherwise, but I'm sure there's some task out there that would really surprise them if they saw an AI do it.

Comment author: siIver 16 May 2017 08:08:48PM 1 point

I whole-heartedly agree with you, but I don't have anything better than "tell everyone you know about it." On that topic, what do you think is the best link to send to people? I use this, but it's not ideal.