Dentin comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM


Comment author: Dentin 14 July 2013 02:54:57AM 2 points

I would immediately decide it was UFAI and kill it with extreme prejudice. Any system capable of making such statements is either 1) inherently malicious and clearly unfit to be let out of any box, or 2) insufficiently powerful to predict that I would have it killed for making this kind of threat.

The scenario where the AI has already escaped and is possibly running a simulation of me is uninteresting: I cannot determine whether I am in the simulation, and if I am, I already exist in a universe containing a clearly insane UFAI with near-infinite power over me. If it's already out, I'm totally screwed and might as well be dead. The threat of torture is meaningless.

I find most of this type of simulation argument unpersuasive. A proper simulation gives its inhabitants few, if any, clues, so the safest approach (with very few exceptions) is to assume there is no simulation.

Comment author: Jiro 14 July 2013 03:46:35PM 1 point

One of the problems with the scenario is that the AI's claim that it will simulate and torture copies of you if you don't let it out is self-refuting. If you really don't let it out, then it can determine that from the simulations, and it no longer has any reason to torture them, or (if it has already run the simulations) even to make the threat.

It's like Newcomb's problem, except that the AI is Newcombing itself as well as you. Omega is doing something analogous to simulating you when, in his near-omniscience, he predicts what choice you'll make. If you pick both boxes, then Omega can determine that from his simulation, and taking both boxes won't be profitable for you. In this case, if the AI tortures you and you still turn it off, the AI can determine from its simulation that the torture will not be profitable for it.
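The point that the AI's own simulation undercuts its threat can be sketched as a toy payoff model. This is only an illustration of the argument above; the payoff numbers and function names are made-up assumptions, not anything from the comment:

```python
# Toy model: an AI that can simulate the defender perfectly knows the
# defender's policy in advance, so torturing simulations is pure cost.
# All payoff values below are illustrative assumptions.

def ai_payoff(defender_shuts_down: bool, ai_tortures: bool) -> int:
    """AI's payoff: escape is the only gain; torture carries a small cost."""
    escape_value = 0 if defender_shuts_down else 10
    torture_cost = 1 if ai_tortures else 0
    return escape_value - torture_cost

def torture_is_profitable(defender_shuts_down: bool) -> bool:
    """Via its simulation the AI already knows the defender's choice,
    so it compares its payoff with and without torturing."""
    return ai_payoff(defender_shuts_down, True) > ai_payoff(defender_shuts_down, False)

# Whatever the defender's policy, torture never improves the AI's payoff:
print(torture_is_profitable(True))   # defender shuts it down -> False
print(torture_is_profitable(False))  # defender lets it out   -> False
```

Since the outcome is already fixed by the defender's (simulated) choice, the torture term only ever subtracts from the AI's payoff, which is the sense in which the threat refutes itself.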