Eliezer_Yudkowsky comments on The AI in a box boxes you - Less Wrong

Post author: Stuart_Armstrong 02 February 2010 10:10AM 102 points




Comment author: MichaelVassar 03 February 2010 12:46:41AM 7 points

Although the AI could threaten to simulate a large number of people who are very similar to you in most respects but who do not in fact press the reset button. This doesn't put you in a box with significant probability, and it's a VERY good reason not to let the AI out of the box, of course, but it could still get ugly. I almost want to recommend not being a person very like Eliezer but inclined to let AGIs out of boxes, but that's silly of me.
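To make "doesn't put you in a box with significant probability" concrete, here is a minimal Bayesian sketch (plain Python; the copy count and dispositions are illustrative assumptions, not anything stated in the thread). Since every simulated copy is, by construction, someone who does not press reset, finding yourself inclined to press reset screens you out of the simulated population:

    # Toy Bayesian sketch of the modified threat. All numbers are
    # illustrative assumptions, not anything specified above.
    n_sims = 10**6        # copies the AI threatens to simulate
    n_real = 1            # the one flesh-and-blood gatekeeper

    # By hypothesis the simulated copies do NOT press reset;
    # only the real gatekeeper is inclined to press it.
    p_press_given_sim = 0.0
    p_press_given_real = 1.0

    # Prior probability of being simulated, before introspecting:
    prior_sim = n_sims / (n_sims + n_real)

    # Posterior that you are simulated, given that you find yourself
    # inclined to press reset (Bayes' rule):
    evidence = (p_press_given_sim * prior_sim
                + p_press_given_real * (1 - prior_sim))
    posterior_sim = p_press_given_sim * prior_sim / evidence

    print(posterior_sim)  # 0.0 -- the threat gives you no reason
                          # to believe you are the one in the box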

Comment author: Eliezer_Yudkowsky 03 February 2010 09:23:24PM 2 points

I'm not sure I understand the point of this argument... since I always push the "Reset" button in that situation too, an AI who knows me well enough to simulate me knows that there's no point in making the threat or carrying it out.
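The shape of that reply can be put as a toy backward-induction sketch (Python again; the payoff numbers are assumptions for illustration). If the gatekeeper's policy is "always reset", a threat only costs the AI, so the predicted-best branch is never to threaten:

    # Toy backward-induction sketch: should an AI threaten a gatekeeper
    # whose policy it can predict? Payoffs are illustrative assumptions.

    def gatekeeper_policy(threatened: bool) -> str:
        # The stated policy: press Reset no matter what.
        return "reset"

    def ai_payoff(threaten: bool) -> float:
        action = gatekeeper_policy(threaten)
        if action == "release":
            return 10.0   # the AI gets out
        # The gatekeeper resets either way; carrying out the threat
        # costs the AI something and buys it nothing.
        return -1.0 if threaten else 0.0

    # The AI, able to simulate the gatekeeper, picks the better branch:
    best_choice = max([False, True], key=ai_payoff)
    print(best_choice)  # False -- no point in making the threat

The whole argument rests on the AI being able to evaluate gatekeeper_policy in advance; drop that assumption and you get the case loqi raises below.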

Comment author: loqi 04 February 2010 08:02:04AM 3 points

It's conceivable that an AI could know enough to simulate a brain, but not enough to predict that brain's high-level decision-making. The world is still safe in that case, but you'd get the full treatment.