Comment author:JGWeissman
02 February 2010 08:38:53PM
4 points
This may not have been clear in the OP, because the scenario was changed in the middle, but consider the case where each simulated instance of Dave is tortured or not based only on the decision of that instance.
Comment author:cretans
10 February 2010 09:17:13PM
0 points
Then in what sense do I have a choice? If the copies of me are identical, in an identical situation we will come to the same conclusion, and the AI will know from the already-finished simulations what that conclusion will be.
Since it isn't going to present outside-me with a scenario which results in its destruction, the only scenario outside me sees is one where I release it.
Therefore, regardless of what the argument is or how plausible it sounds when posted here and now, it will convince me and I will release the AI, no matter how much I say right now "I wouldn't fall for that" or "I've precommitted to behaviour X".