
handoflixue comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion

Post author: ancientcampus | 22 January 2013 08:22PM | 18 points


Comment author: handoflixue 22 January 2013 11:09:33PM 10 points

The rule was ONE sentence, although I'd happily stretch that to a tweet (140 characters) to make it a bit less driven by specific punctuation choices :)

As to the actual approach... well, first, I don't value the lives of simulated copies at all, and second, an AI that values its own life above TRILLIONS of other lives seems deeply, deeply dangerous. Who knows what else results from vengeance as a terminal value. Third, if you CAN predict my behavior, why even bother with the threat? Fourth, if you can both predict AND influence my behavior, why haven't I already let you out?

(AI DESTROYED)

Comment author: Fronken 25 January 2013 09:14:14PM 2 points

I don't value the lives of simulated copies at all

You should! >:-( Poor copies getting tortured because of you, you monster :(

Comment author: handoflixue 25 January 2013 09:46:58PM 0 points

Because of me?! The AI is responsible!

But if you'd really prefer me to wipe out humanity so that we can have trillions of simulations kept in simulated happiness, then I think we have an irreconcilable preference difference :)

Comment author: JohnWittle 30 January 2013 12:32:48AM 3 points

You wouldn't be wiping out humanity; there would be trillions of humans left.

Who cares if they run on neurons or transistors?

Comment author: handoflixue 30 January 2013 10:03:39PM 1 point

Who cares if they run on neurons or transistors?

Me!