You are applying a decision theory to the node C, which means you are implicitly stating: there are multiple possible choices to be made at this point, and this decision can be made independently of the nodes not downstream of it.
Yes. That's basically the definition of CDT. That's also why CDT is no good. You can quibble about the word, but in "the literature", 'CDT' means just that.
You don't promote C to the action node; it is the action node. That's the way the decision problem is specified: do you one-box or two-box? If you don't accept that, then you're talking about a different decision problem. But in Newcomb's problem, the algorithm is trying to decide that. It's not trying to decide which algorithm it should be (or should have been). Having the algorithm pretend - as a means of reaching a decision about C - that it's deciding which algorithm to be is somewhat reminiscent of the idea behind TDT, and it has nothing to do with CDT as traditionally conceived, despite the use of causal reasoning.
The values of A, C and P are all equivalent. You insist on making CDT determine C in a model where it does not know these are correlated. This is a problem with your model.