eli_sennesh comments on Timelessness as a Conservative Extension of Causal Decision Theory - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I found that handling the Counterfactual Mugging "correctly" (according to Eliezer's intuitive argument of retroactively acting on rational precommitments) requires different machinery from other problems. You're right that we don't seem to be "using" the last one, if we act under weak entanglement, and won't pay Omega $100.
The problem is that in Eliezer's original specification of the problem, he explicitly noted that, unknown to us as the player, the coin is effectively weighted. Omega isn't a liar, but there isn't even any significant measure of MWI timelines in which the coin comes up heads and Parallel!Us actually receives the money. We're being asked to decide the scenario in a way that favors a version of our agent who never exists outside Omega's imagination.
I understand the notion behind this - act now according to precommitments it would have been rational to make in the past - but my own intuitions label giving Omega the money an outright loss of $100 with no real purpose, given the knowledge that the coin cannot come up heads.
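A toy expected-value calculation (my own sketch, using the $100 payment and $1000 prize figures from this comment; the function name and structure are illustrative) makes the tension concrete: the precommitment to pay is rational before a fair flip, but has strictly negative value once you know the coin cannot come up heads.

```python
def expected_value_of_paying(p_heads, prize=1000, cost=100):
    """EV of the 'always pay Omega' precommitment, evaluated before the flip.

    With probability p_heads we win the prize; otherwise we pay the cost.
    """
    return p_heads * prize - (1 - p_heads) * cost

# Fair coin: precommitting to pay looks rational ahead of time.
fair = expected_value_of_paying(0.5)      # 0.5*1000 - 0.5*100 = 450.0

# Weighted coin that can never land heads: paying is a pure loss.
weighted = expected_value_of_paying(0.0)  # 0*1000 - 1*100 = -100.0
```

The disagreement in the post is precisely over which of these two numbers should govern the decision at the moment Omega asks for the $100.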
This might just mean I have badly-trained intuitions! After all, if I switch mental "scenarios" to Omega being not merely a friendly superintelligence or Time Lord but an actual Trickster Matrix Lord, then all of a sudden it seems plausible that I am the prediction copy, and that "real me" might still have a chance at $1000, and I should thus pay Omega my imaginary and worthless simulated money.
The problem is that this presupposes my being willing to believe in some other universe entirely outside my own (i.e., outside the simulation) in which Omega's claim to have already flipped the coin and gotten tails is simply not true. It makes Omega at least a partial liar. It confuses the hell out of me, personally.
Another version of the entanglement proposition might be able to handle this, but it sacrifices the transitivity of entanglement (at what cost, I haven't worked out):
Assume that the causal Bayes nets given as input to our decision algorithm contain only indexical uncertainty.
On the upside, unlike "strong entanglement", this version won't trivially lose on the Prisoners' Dilemma.