Drahflow comments on Open Thread: June 2009 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I claim that it is, as it is averse to killing people as a side effect. If your solution does not require killing people, it would not kill them.
Stop, and read The Hidden Complexity of Wishes again. To us, killing a person or lobotomizing them feels like a bigger change than (say) moving a pile of rock; but unless your AI already shares your values, you can't guarantee it will see things the same way.
Your AI would achieve its goal in the first way it finds that matches all the explicit criteria, interpreted without your background assumptions about what makes for a 'reasonable' interpretation. Unless you're sure you've ruled out every possible "creative" solution that happens to horrify you, this is not a safe plan.