XiXiDu comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM

Comments (378)

Comment author: JoshuaZ 25 November 2010 04:11:35PM 1 point

Because the AI has to figure out how you would react in a given situation, it will have to simulate you and the corresponding circumstances.

This does not follow. To use a crude example: if I have a fast procedure for testing whether a number is prime, I don't need to simulate a slower algorithm to know what the slower one will output. This may raise deep issues about what it means to be "you"; arguably, any algorithm that outputs the same data is "you", and if that's the case my argument doesn't hold water. But the AI in question doesn't need to simulate you perfectly to predict your large-scale behavior.
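The prime-testing example can be made concrete with a toy sketch (the function names here are my own, chosen for illustration): a square-root-bounded test reaches the same answer as a naive O(n) trial-division test while doing far less work, so it predicts the slow algorithm's output without stepping through the slow computation.

```python
def slow_is_prime(n):
    """Naive trial division: checks every candidate divisor below n."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, n))

def fast_is_prime(n):
    """Trial division only up to sqrt(n): far fewer steps, same answer."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The fast procedure "knows" what the slow one will output
# without simulating its full, step-by-step execution.
print(all(fast_is_prime(n) == slow_is_prime(n) for n in range(200)))
```

The analogy to the argument: agreement on outputs does not require running (simulating) the slower process, only running some process guaranteed to agree with it.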

Comment author: XiXiDu 25 November 2010 04:19:53PM 1 point

If consciousness has any significant effect on our decisions, then the AI will have to simulate it, and therefore something will perceive itself to be in the situation depicted in the original post. My crude guess was that an AI able to credibly threaten you with simulated torture would, in many cases, also use that capability to obtain the most detailed data possible about your expected decision procedure.

Comment author: DSimon 18 February 2012 06:03:39AM 0 points

If consciousness has any significant effect on our decisions, then the AI will have to simulate it, and therefore something will perceive itself to be in the situation depicted in the original post.

Only if there isn't a non-conscious algorithm that has the same effect on our decisions. Such an algorithm seems likely to exist; it's certainly possible to make a p-zombie if you can redesign the original brain however you want.