siduri comments on The Logical Fallacy of Generalization from Fictional Evidence

Post author: Eliezer_Yudkowsky 16 October 2007 03:57AM 39 points


Comment author: [deleted] 24 February 2011 12:06:40AM 4 points

When I try to introduce the subject of advanced AI, what's the first thing I hear, more than half the time?

"Oh, you mean like the Terminator movies / the Matrix / Asimov's robots!"

Don't Asimov's Laws provide a convenient entry into the topic of uFAI? I mean, sometime after I actually read the Asimov stories, but well before I discovered this community or the topic of uFAI, it occurred to me in a wave of chills how horrific the "I, Robot" world would actually be if those laws were literally implemented in real-life AI. "A robot may not injure a human being or, through inaction, allow a human being to come to harm"? But we do things all the time that may bring us harm--from sexual activity (STDs!) to eating ice cream (heart disease!) to rock-climbing or playing competitive sports... If the robots were programmed in such a way that they could not "through inaction, allow a human being to come to harm" then they'd pretty much have to lock us all up in padded cells, to prevent us taking any action that might bring us harm. Luckily they'd only have to do it for one generation because obviously pregnancy and childbirth would never be allowed, it'd be insane to allow human women to take on such completely preventable risks...
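A minimal sketch of that literal reading, assuming a toy harm model where allowing harm through inaction counts the same as causing it (the policy names and probabilities are hypothetical, purely to show why confinement wins):

```python
# Toy model of a literal First Law: minimize P(a human comes to harm),
# with harm allowed through inaction counted the same as harm caused.
# All policy names and probabilities here are hypothetical.

policies = {
    "let humans rock-climb":          0.050,  # falls, injuries
    "let humans eat ice cream":       0.010,  # heart disease, eventually
    "let humans have children":       0.020,  # pregnancy, childbirth risks
    "confine humans to padded cells": 0.001,  # almost nothing can hurt them
}

def first_law_choice(policies):
    """Return the policy with the lowest chance of a human coming to harm."""
    return min(policies, key=policies.get)

print(first_law_choice(policies))  # -> 'confine humans to padded cells'
```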

So then when I found you lot talking about uFAI, my reaction was just nodnod rather than "but that's crazy talk!"

Comment author: Sniffnoy 24 February 2011 12:11:42AM 1 point

AKA the premise of "With Folded Hands". :)

Comment author: [deleted] 24 February 2011 12:31:36AM 0 points

I haven't read that but yes, it sounds like exactly the same premise.

Comment author: ArisKatsaris 24 February 2011 12:44:10AM 0 points

Violating people's freedom would probably also count as harm, emotional harm if nothing else. Which is even more troublesome as we wouldn't even be allowed to be emotionally distressed -- they'd just fill us with happy juice so that we can live happily ever after. The superhappies in robotic form. :-)

Comment author: TobyBartels 24 February 2011 01:52:43AM 0 points

"I, Robot"

It's interesting that the I, Robot movie did a better job of dealing with this than anything that Asimov wrote.

Comment author: [deleted] 24 February 2011 02:03:53AM 0 points

Did it? I don't remember the plot of the movie very well, but I remember a feeling of disappointment that the AI seemed to be pursuing conventional take-over-the-world villainy rather than simply faithfully executing its programming.

Comment author: TobyBartels 24 February 2011 02:14:47AM 1 point

(Spoiler warning!)

The chief villain was explicitly taking over the world in order to carry out the First Law. Only the one more-human-like robot was able to say (for no particular reason) "But it's wrong"; IIRC, all the other robots understood the logic when given the relevant orders. (However, when out of the chief villain's control, they were safe because they were too stupid to work it out on their own!)

However, the difference from Asimov is not the realisation that the First Law requires taking over the world; Daneel Olivaw reached the same conclusion. The difference is realising that this would be villainy. So the movie was pretty conventional!

Comment author: [deleted] 24 February 2011 02:21:09AM 1 point

That is better than I remembered. Weren't the robots, like, shooting at people, though? So breaking the First Law explicitly, rather than just doing a chilling optimization on it?

Comment author: TobyBartels 24 February 2011 02:30:55AM 2 points

My memory's bad enough now that I had to check Wikipedia. You're right that robots were killing people, but compare this with the background of Will Smith's character (Spooner), who had been saved from drowning by a robot. We should all agree that the robot that saved Spooner instead of a little girl (in the absence of enough time to save both) was accurately following the laws, but that robot did make a decision that condemned a human to die. It could do this only because this decision saved the life of another human (who was calculated to have a greater chance of continued survival).

Similarly, VIKI chose to kill some humans because this decision would allow other humans to live (since the targeted humans were preventing the take-over of the world and all of the lives that this would save). This time, it was a pretty straight greater-numbers calculation.
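The rule being described reduces to a one-line calculation: pick whichever option has the greater expected number of lives saved. A minimal sketch of that rule as I read it (the numbers below are illustrative, not taken from the film):

```python
# Toy version of the calculation described above: the robot picks
# whichever option maximizes expected lives saved. All numbers are
# hypothetical, for illustration only.

def best_option(options):
    """Return the option with the greatest expected number of lives saved."""
    return max(options, key=options.get)

# The drowning scene: time to save only one person, so compare each
# victim's estimated chance of surviving the rescue.
print(best_option({"save Spooner": 0.45, "save the girl": 0.11}))

# VIKI's version is the same rule at scale: a few deaths among the
# resisters are scored against the many expected lives saved by the
# take-over, so the straight greater-numbers calculation favors it.
print(best_option({
    "take over the world (kill resisters)": 1_000_000 - 100,
    "do nothing": 0,
}))
```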

Comment author: [deleted] 24 February 2011 02:41:13AM 1 point

That is so much better than I remembered that I'm now doubting whether my own insight about Asimov's laws actually predated the movie or not. It's possible that's where I got it from. Although I still think it's sort of cheating to have the robots killing people, when they could have used tranq guns or whatever and still have been obeying the letter of the First Law.

Comment author: TobyBartels 24 February 2011 03:05:34AM 1 point

"I still think it's sort of cheating to have the robots killing people, when they could have used tranq guns or whatever and still have been obeying the letter of the First Law."

Yes, you're certainly right about that. Most of the details in the movie represent serious failures of rationality on all sides, the robots as much as anybody. It's just a Will Smith action flick, after all. Still, the broad picture makes more sense to me than Asimov's.