kongus_bongus

Thank you so much, this is exactly what I was looking for. It's reassuring to know I'm not crazy and other people have thought of this before.

I think you're misunderstanding something, but I can't quite pin down what it is. For clarity, here is my analysis of the events in the thought experiment in chronological order:
1. Omega decides to host a Newcomb's problem, and chooses an agent (Agent A) to participate in it.
2. Omega scans Agent A and simulates their consciousness (call the simulation Agent B), placing it in a "fake" Newcomb's problem situation (e.g. Omega has made no prediction in the simulated scenario, but tells Agent B that it has, in order to elicit a decision).
3. Agent B makes its decision, and Omega bases its prediction on that decision.
4. Omega reveals itself to Agent A and initiates Newcomb's problem in the real world, having committed to the prediction made in step 3.
5. Agent A makes its decision and Newcomb's problem is done.
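
To make the causal structure concrete, here is a minimal sketch of that sequence in Python. The function names, the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box when one-boxing is predicted), and the representation of the agent as a single decision procedure called twice are my own illustrative assumptions, not anything specified in the original post.

```python
def run_newcomb(agent_policy):
    """agent_policy() returns "one-box" or "two-box". Agent B is a copy of
    Agent A, so both runs call the same decision procedure."""
    # Steps 2-3: Omega simulates the agent (Agent B) in a fake problem and
    # commits to a prediction based only on that simulated choice.
    prediction = agent_policy()

    # Step 4: the real boxes are filled before Agent A decides.
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000

    # Step 5: Agent A chooses. Nothing downstream of this call can reach
    # `prediction`, which is the third-party sense in which there is no
    # backward causality.
    choice = agent_policy()
    payoff = opaque if choice == "one-box" else opaque + transparent
    return prediction, choice, payoff

# e.g. run_newcomb(lambda: "one-box") -> ("one-box", "one-box", 1000000)
```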

From a third-party perspective, there is no backward causality. Agent B's decision influences Omega's prediction, but Agent A's decision does not. Likewise, Agent B's decision does not influence Agent A's decision, since Omega keeps it hidden (this is why the simulation part of the EV calculations assumes a uniform prior over Agent A's decision). There is no communication or causal influence between the simulation and reality beyond the simulation determining Omega's prediction.

The sole thing that makes it appear as though there is some kind of backward causality is that, subjectively, neither agent knows whether it is Agent A or Agent B, so each acts as though there is a 50% chance that it has forward causal influence over Omega's prediction - not the prediction that Omega purports to have already made, since there is no way to influence that, but the prediction that Omega will make in the real world based on Agent B's decision. In other words, the only sense in which the agent in my post has causal influence over Omega's prediction is this: if it is Agent B, it makes its choice, finds that the whole thing was fake and the boxes are full of static or something, and ceases to exist when the simulation is terminated - and that choice then influences the prediction Omega claims (to Agent A) to have already made.
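
Spelled out numerically, here is a rough sketch of that EV calculation, assuming the standard Newcomb payoffs, the 50/50 credence over being Agent A or Agent B, and (my own assumption for this sketch, since the post may handle that term differently) a uniform prior over the already-fixed prediction in the case that the agent is Agent A:

```python
# Payoffs indexed by (Omega's prediction, the real agent's choice).
PAYOFF = {("one-box", "one-box"): 1_000_000,
          ("one-box", "two-box"): 1_001_000,
          ("two-box", "one-box"): 0,
          ("two-box", "two-box"): 1_000}

def expected_value(my_choice, p_simulation=0.5):
    # If I am Agent B (prob p_simulation): my choice fixes the real
    # prediction, I get nothing myself (the simulated boxes are fake), and
    # I value Agent A's payoff under a uniform prior over Agent A's choice.
    ev_if_sim = 0.5 * PAYOFF[(my_choice, "one-box")] + \
                0.5 * PAYOFF[(my_choice, "two-box")]

    # If I am Agent A (prob 1 - p_simulation): the prediction is already
    # fixed and causally independent of my choice, so this sketch uses a
    # uniform prior over what it might be.
    ev_if_real = 0.5 * PAYOFF[("one-box", my_choice)] + \
                 0.5 * PAYOFF[("two-box", my_choice)]

    return p_simulation * ev_if_sim + (1 - p_simulation) * ev_if_real

# expected_value("one-box") -> 750250.0, expected_value("two-box") -> 250750.0
```

On these illustrative numbers one-boxing comes out ahead, but the exact figures depend entirely on the priors assumed.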

I suspect the misunderstanding here is that I was too vague with the wording of the claim that "in the case that the agent is a simulation, its choice actually does have a causal influence on the 'real' prediction". I hope that the distinction between Agents A and B clears up what I'm saying.

Yes, perhaps that sentence wasn't the best way to convey my point. This version of CDT does not in any way appeal to backward causality or subjunctive dependence; the idea is that, since the predictor runs a simulation of the agent before deciding on its prediction, the agent's choice has a forward causal influence on the predictor's action in the case that the agent is the simulation.