Sewing-Machine comments on What can you do with an Unfriendly AI? - Less Wrong

16 Post author: paulfchristiano 20 December 2010 08:28PM



Comment author: [deleted] 20 December 2010 10:42:52PM 0 points [-]

Isn't the "evil" in "evil genie" distracting? The plausible unfriendly AI scenarios are not that we will inadvertently create an AI that hates us. It's that we'll quite advertently create an AI that desires to turn the solar system into a hard disk full of math theorems, or whatever. Optimization isn't evil, just dangerous. Even talking about an AI that "wants to be free" is anthropomorphizing.

Maybe this is just for color, and I'm being obtuse.

Comment author: DSimon 28 December 2010 10:25:00PM 0 points [-]

Agreed that there's a whole mess of mistakes that can result from anthropomorphizing (thank ye hairy gods for spell check) the AI, but I still think this style of post is useful. If we can come up with a foolproof way of safely extracting information from a malevolent genie, then we are probably pretty close to a way of safely extracting information from a genie whose friendliness we're unsure of.