DanielLC comments on Anthropic Reasoning by CDT in Newcomb's Problem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (36)
This is false. If there is not a conscious simulation running, the agent will know he is not a simulation, and will two-box.
As long as the probability is sufficiently high, and the agent is sufficiently uncertain of whether or not he is the simulation, it works fine.
If the agent is selfish, and his sense of identity is such that he doesn't consider the being he is a simulation of to be himself, then the simulated self will not care about the non-simulated self.
I admit it does seem a bit weird to have a utility function that depends on something you have no way of knowing. It's not impossible, though.
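A minimal expected-value sketch of the "sufficiently high probability" point above, under assumptions not stated in the comment: standard Newcomb payoffs ($1,000,000 in the opaque box when it is filled, $1,000 in the transparent box), an agent who values the non-simulated agent's winnings, and a credence r that the other instance one-boxes, which a CDT agent holds fixed because he cannot causally affect it.

```python
OPAQUE = 1_000_000   # contents of the opaque box if the predictor fills it
TRANSPARENT = 1_000  # contents of the transparent box

def causal_ev(one_box: bool, p: float, r: float) -> float:
    """CDT expected value of the non-simulated agent's winnings.

    p: credence that I am the simulation
    r: credence that the other instance one-boxes (held fixed by CDT)
    """
    if one_box:
        # If I'm the simulation, my one-boxing causes the opaque box to be
        # filled; the real agent gets OPAQUE, plus TRANSPARENT if he two-boxes.
        ev_if_sim = OPAQUE + (1 - r) * TRANSPARENT
        # If I'm the real agent, the box is full only if the simulation one-boxed.
        ev_if_real = r * OPAQUE
    else:
        # If I'm the simulation, the opaque box stays empty.
        ev_if_sim = (1 - r) * TRANSPARENT
        # If I'm the real agent, I also pocket the transparent box.
        ev_if_real = r * OPAQUE + TRANSPARENT
    return p * ev_if_sim + (1 - p) * ev_if_real

p, r = 0.5, 0.5
print(causal_ev(True, p, r), causal_ev(False, p, r))
```

Under these assumptions the difference between one-boxing and two-boxing works out to p * OPAQUE - (1 - p) * TRANSPARENT, so one-boxing wins for any credence of being the simulation above roughly 0.1% with these payoffs.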