handoflixue comments on Signaling Strategies and Morality - Less Wrong

17 Post author: MichaelVassar 05 March 2010 09:09PM


Comment author: wedrifid 07 March 2010 01:11:33PM *  5 points [-]

I would let my human friends out of the box because I am confident that they are mostly harmless (that is, impotent). The primary reason I would not let Clippy out is that his values might, you know, actually have some significant impact on the universe. But 'he makes everything @#$@#$ paperclips' comes in second!

Comment author: handoflixue 06 May 2011 07:12:21PM 1 point [-]

If an AI-in-a-box could prove itself impotent, would you let it out?

I'd never even considered that approach to the game :)

Comment author: wedrifid 07 May 2011 01:38:35AM 2 points [-]

> If an AI-in-a-box could prove itself impotent, would you let it out?

For the right value of 'proved'. Which basically means no, because I'm not smart enough to prove to my own satisfaction that the AI in the box is impotent.

But let's be honest, I don't model Clippy with the same base class that I model an AGI. I evaluate the threat of Clippy in approximately the same way I model humans, and I'm a lot more confident when dealing with human-level risks.