Sniffnoy comments on The Logical Fallacy of Generalization from Fictional Evidence - Less Wrong

38 Post author: Eliezer_Yudkowsky 16 October 2007 03:57AM



Comment author: [deleted] 24 February 2011 12:06:40AM 4 points

When I try to introduce the subject of advanced AI, what's the first thing I hear, more than half the time?

"Oh, you mean like the Terminator movies / the Matrix / Asimov's robots!"

Don't Asimov's Laws provide a convenient entry into the topic of uFAI? I mean, sometime after I actually read the Asimov stories, but well before I discovered this community or the topic of uFAI, it occurred to me in a wave of chills how horrific the "I, Robot" world would actually be if those laws were literally implemented in real-life AI. "A robot may not injure a human being or, through inaction, allow a human being to come to harm"? But we do things all the time that may bring us harm--from sexual activity (STDs!) to eating ice cream (heart disease!) to rock-climbing or playing competitive sports... If the robots were programmed in such a way that they could not "through inaction, allow a human being to come to harm," then they'd pretty much have to lock us all up in padded cells, to prevent us from taking any action that might bring us harm. Luckily they'd only have to do it for one generation, because obviously pregnancy and childbirth would never be allowed; it'd be insane to allow human women to take on such completely preventable risks...

So then when I found you lot talking about uFAI, my reaction was just nodnod rather than "but that's crazy talk!"

Comment author: Sniffnoy 24 February 2011 12:11:42AM 1 point

AKA the premise of "With Folded Hands". :)

Comment author: [deleted] 24 February 2011 12:31:36AM 0 points

I haven't read that, but yes, it sounds like exactly the same premise.