private_messaging comments on AI risk, new executive summary - Less Wrong
I was going to post this story in the open thread, but it seems relevant here:
So my partner and I went to see the new Captain America movie, and at one point there is a scene involving an AI/mind upload, along with a mention of an Operation Paperclip. And my first thought was "Is that a real thing, or is someone on the writing staff a Less Wronger doing a shoutout? Because that would be awesome."
Turns out it was a real thing. :-( Oh well.
Something more interesting happened afterward. I mentioned the connection to my partner, said paperclips were an inside joke here. She asked me to explain, so I gave her a (very) brief rundown of some LW thought on AI to provide context for the concept of a paperclipper. Part of the conversation went like this:
"So, next bit of context, just because an AI isn't actively evil doesn't mean it won't try to kill us."
To which she responded:
"Well, of course not. I mean, maybe it decides killing us will solve some other problem it has."
And I thought: That click Eliezer was talking about in the Sequences? This seems like a case of it. What makes it interesting is that my partner doesn't have a Mensa-class intellect or any significant exposure to the Less Wrong memeplex. Which suggests that clicking on the dangers of...call it non-ethical AI, as opposed to un-ethical, unless there's already a more standard term for the class of AIs that contains paperclippers but not Skynet...isn't limited to the high-IQ bubble.
That may not be news to MIRI, but it seemed worth commenting about here, because we are a high-IQ bubble. That's part of why I like coming here. But I'm sure MIRI would be pleased to reach outside the bubble.
(of interest: Obviously the first connection she drew from dangerous AI was Skynet...but once I described the idea of an AI that was neutral-but-still-dangerous, the second connection she made was to Kyubey. And that felt sort-of-right to me. I told her that was the right idea but didn't go far enough.)
Skynet kills people as secondary to its self-preservation, too.
Perhaps it is just a very banal insight that doesn't really shed any light on what an AI is likely to do.