I don't know what it would mean for an act to be partition-dependent.
In case anyone hasn't seen this term before (like me half an hour ago), I'll quickly explain:
Let's begin with EDT. The minimum information required to specify an EDT problem is a probability space (whose points are 'histories of the world') and two random variables: A, the player's action, and U, the utility. EDT then tells us to choose the value of A such that E(U|A) is maximized.
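To make this concrete, here's a minimal Python sketch of Newcomb's problem set up as an EDT problem. All the specific numbers (99% predictor accuracy, a 50/50 prior over the agent's own action, the usual $1,000/$1,000,000 payoffs) are illustrative choices of mine, not anything forced by the formalism:

```python
from itertools import product

# A toy probability space for Newcomb's problem. A history w is a pair
# (prediction, action); the predictor is 99% accurate, and the agent has
# a 50/50 prior over its own action. All numbers are illustrative.
ACCURACY = 0.99
WORLDS = list(product((1, 2), (1, 2)))  # (boxes predicted, boxes taken)

def prob(w):
    pred, act = w
    return 0.5 * (ACCURACY if pred == act else 1 - ACCURACY)

def utility(w):
    pred, act = w
    opaque = 1_000_000 if pred == 1 else 0  # filled iff 1-boxing was predicted
    transparent = 1_000 if act == 2 else 0  # taken only by a 2-boxer
    return opaque + transparent

def cond_exp_u(event):
    """E(U | event), where event is a predicate on histories."""
    ws = [w for w in WORLDS if event(w)]
    total = sum(prob(w) for w in ws)
    return sum(prob(w) * utility(w) for w in ws) / total

# EDT: choose the value of A maximizing E(U | A = a).
for a in (1, 2):
    print(a, cond_exp_u(lambda w: w[1] == a))
# E(U|A=1) = 990,000 and E(U|A=2) = 11,000, so EDT 1-boxes.
```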
Now let's ask: what extra information do we need in order to apply CDT? There are lots of equivalent ways of representing it, but essentially we need to be able to define a variable X such that X(w) = "the state of the world w just prior to the agent's action". (More correctly, X(w) = "the values of all variables not causally affected by the agent's action".) Then, given a fixed action a, the conditional expectation E(U | X = x and A = a) is determined by x. CDT tells us to choose the value of a which maximizes Sum(over x) P(X = x) * E(U | X = x and A = a).
For instance, in Newcomb's problem we can take X = 1 or 2 according as the predictor has predicted 1 or 2 boxes. Then whatever the underlying probability distribution, CDT will recommend 2-boxing.
However, if you put X = "correct" if the prediction is correct and "wrong" if the prediction is wrong then CDT may recommend 1-boxing. It's intuitively obvious that we shouldn't define X this way because our action does causally affect whether the prediction is wrong, but that doesn't stop us from blindly plugging in this X and churning out expected utilities.
Hence, the naive CDT formula above fails to be 'partition invariant' because changing the definition of X changes the expected utilities (even reversing their order).
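Continuing the sketch above, here's that reversal computed directly: plugging both choices of X into the naive CDT formula gives opposite recommendations on the very same underlying distribution.

```python
# Continuing the sketch above (prob, utility, WORLDS, cond_exp_u).

def prediction(w):
    return w[0]           # X = the number of boxes the predictor predicted

def correctness(w):
    return w[0] == w[1]   # X = whether the prediction turned out correct

def cdt_eu(action, X):
    """Naive CDT: EU(a) = Sum(over x) P(X = x) * E(U | X = x and A = a)."""
    eu = 0.0
    for x in {X(w) for w in WORLDS}:
        p_x = sum(prob(w) for w in WORLDS if X(w) == x)
        eu += p_x * cond_exp_u(lambda w: X(w) == x and w[1] == action)
    return eu

for a in (1, 2):
    print(a, cdt_eu(a, prediction), cdt_eu(a, correctness))
# X = prediction:  EU(1) = 500,000, EU(2) = 501,000  -> 2-box
# X = correctness: EU(1) = 990,000, EU(2) =  11,000  -> 1-box
```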
In case the connection between X and "partitions" isn't clear: Given X, we can partition histories of the world according to whether their X-values are equal. Conversely, given a partition of histories of the world, we can define X(w) = the element of the partition containing w.
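In code the correspondence is just grouping in one direction and lookup in the other (a quick sketch; the function names are mine):

```python
def partition_from_variable(worlds, X):
    """Group histories into cells on which X takes a constant value."""
    cells = {}
    for w in worlds:
        cells.setdefault(X(w), []).append(w)
    return list(cells.values())

def variable_from_partition(partition):
    """X(w) = the cell of the partition that contains w."""
    def X(w):
        return next(tuple(cell) for cell in partition if w in cell)
    return X

# e.g. partition_from_variable(WORLDS, prediction)
#      == [[(1, 1), (1, 2)], [(2, 1), (2, 2)]]
```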
Again, I have no idea how actions, as opposed to expected utilities, can fail to be partition-invariant. Can anyone help?
Anyone interested can read the SEP page on causal decision theory (http://plato.stanford.edu/entries/decision-causal/#ParInv). The basic idea is that the expected utility of an action can differ under different partitions. So, for example:
Partition 1: act A could lead to 8 utilons and act B to 6.
Partition 2: act A could lead to 7 utilons and act B to 5.
This would be a case where the expected utility was partition dependent.
Now imagine Partition 3: act A could lead to 3 utilons and act B to 5.
In this case, the partition dependence of the expected utility is passed up to the act, which then becomes partition dependent.
Now, on that definition, it seems that the act in Newcomb's Problem is partition dependent (i.e. if 1-boxing is sometimes recommended and 2-boxing at other times, then the dependence is not just in the expected utility but also in which act is rational).
My confusion, however, was that this didn't seem to be what the SEP article was suggesting, as it runs a proof showing that even under this second partition CDT will recommend 2-boxing.
Now I wonder, however, whether I misunderstood what SEP was saying. Maybe the proof they use to show that CDT reaches the same answer under both partitions in Newcomb's is designed to show how a partition-invariant form of CDT works rather than how a partition-dependent form of CDT works.
The short version of everything above is a question: is it the case, then, that Newcomb's Problem is an example where some versions of CDT suffer from partition dependence of acts (as well as of expected utilities)?
And if so, how does this impact the other questions in the original post (i.e. does a partition theorem solve this problem and that's that, or are there still more issues)?
My confusion, however, was that this didn't seem to be what the SEP article was suggesting, as it runs a proof showing that even under this second partition CDT will recommend 2-boxing.
The way they manage that is by defining the expected utility of an action using the "probabilities of conditionals", which are written using the notation P(A > E) where A is an action and E is an event. These "probabilities of conditionals" encode the causal information that CDT relies on.
In my previous comment I described CDT as seeking the value of a which maximizes the expression EU(a) = Sum(over x) P(X = x) * E(U | X = x and A = a), and observed that replacing X with a different random variable may change the expected utility. However, if we change P(X = x) to P(a > (X = x)), so that EU(a) = Sum(over x) P(a > (X = x)) * E(U | X = x and A = a), then EU(a) remains unchanged if we replace X with another random variable X', as long as the pair (X', A) uniquely determines X. (One obvious choice of X' is to just take X'(w) = w. This corresponds to what SEP describes as Sobel's "basic formula".)
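Continuing the earlier sketch, this invariance can be checked numerically in the Newcomb model. To compute P(a > E) at all I have to pick a way of grounding it; here I use the prediction partition and the formula I give below, P(a > E) = Sum(over x) P(X = x) * P(E | X = x and A = a). With that in place, both choices of partition yield the same expected utilities:

```python
# Continuing the sketch above. P(a > E) is computed from the prediction
# partition via P(a > E) = Sum(over x) P(X = x) * P(E | X = x and A = a).

def prob_given(event, given):
    """P(event | given)."""
    ws = [w for w in WORLDS if given(w)]
    return sum(prob(w) for w in ws if event(w)) / sum(prob(w) for w in ws)

def prob_conditional(action, event):
    """P(a > E): the chance E would hold were the agent to do `action`."""
    total = 0.0
    for x in (1, 2):  # x ranges over the possible predictions
        p_x = sum(prob(w) for w in WORLDS if prediction(w) == x)
        total += p_x * prob_given(event,
                                  lambda w: prediction(w) == x and w[1] == action)
    return total

def cdt_eu_conditional(action, X):
    """EU(a) = Sum(over x) P(a > (X = x)) * E(U | X = x and A = a)."""
    return sum(
        prob_conditional(action, lambda w, x=x: X(w) == x)
        * cond_exp_u(lambda w, x=x: X(w) == x and w[1] == action)
        for x in {X(w) for w in WORLDS})

for a in (1, 2):
    print(a, cdt_eu_conditional(a, prediction), cdt_eu_conditional(a, correctness))
# Both partitions now give EU(1) = 500,000 and EU(2) = 501,000 -> 2-box either way.
```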
What's a bit frustrating about this is that the axioms for "probabilities of conditionals" are never spelled out. However, I suspect that defining P(A > E) for all A and E is equivalent to defining a random variable "X" as in my previous comment:
In one direction, if expressions of the form P(A > E) are well-defined then say that an event E is 'unaffected' if and only if P(A > E) = P(E) for all actions A. Then we can define X(w) = (E(w) : All unaffected events E), which is "the state of the world immediately prior to the action".
In the other direction, if we're given X then we can define P(a > E) as Sum(over x) P(X = x) * P(E | X = x and A = a). Then the meaning of the expression P(a > E) will be: "The probability of E turning out true if we were to perform the action a."
[Strictly speaking I need to prove that if you define X in terms of P(a > E) and then redefine P(a > E) in terms of X, then you get back what you started off with. I don't know how to do that in general, because I don't know what the axioms for 'probabilities of conditionals' are. But it works in the Newcomb example.]
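For what it's worth, the Newcomb case can be brute-forced by continuing the sketch above: compute P(a > E) for all sixteen events, pick out the unaffected ones, and check that they carve the four histories into exactly the prediction partition.

```python
# Continuing the sketch above: brute-force the round trip in the toy model.
from itertools import combinations

def p_event(E):
    return sum(prob(w) for w in WORLDS if w in E)

# All 16 events (subsets of the 4 histories).
events = [set(c) for r in range(len(WORLDS) + 1)
          for c in combinations(WORLDS, r)]

# An event is 'unaffected' iff P(a > E) = P(E) for every action a.
unaffected = [
    E for E in events
    if all(abs(prob_conditional(a, lambda w, E=E: w in E) - p_event(E)) < 1e-9
           for a in (1, 2))]

# Recover X'(w) = the tuple of truth values of all unaffected events at w,
# and check it induces exactly the same partition as X = prediction.
def X_recovered(w):
    return tuple(w in E for E in unaffected)

print(all((X_recovered(v) == X_recovered(w)) == (prediction(v) == prediction(w))
          for v in WORLDS for w in WORLDS))  # True
```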
Therefore, it seems slightly perverse for SEP to put such emphasis on achieving 'partition independence' when making CDT work at all requires choosing a partition, whether explicitly (e.g. by choosing "X") or implicitly (by defining probabilities of conditionals). It seems like it's just a cosmetic difference.
Maybe the proof they use to show that CDT reaches the same answer under both partitions in Newcomb's is designed to show how a partition-invariant form of CDT works rather than how a partition-dependent form of CDT works.
Yeah, that's what I think.
I'm trying to understand partition dependence in causal decision theories and I'm struggling to think of a case where an act (as opposed to simply the expected utility) is partition dependent. Some details (very much in order of what I'm trying to figure out):
1.) I know that Joyce's causal decision theory is partition-invariant but Sobel's and Lewis's theories aren't and require some specification of what counts as an adequate partition. What happens if such a specification isn't provided? More specifically, what's an example of a decision problem where the acts are partition dependent if you don't ensure you use only adequate partitions?
2.) Extending this: if you do make sure to use only adequate partitions, are there still problems with partition dependence (other than the small-world/grand-world problem that Joyce talks about)? In other words, do current definitions of adequate partitions:
i.) Ensure that no act will be partition dependent in decision problems that can be discussed.
ii.) Allow all decision problems to be discussed.
I guess what I'm trying to figure out is what the problem of partition dependence actually is. Is the problem that it means you require a definition of adequate partitions (but such a definition is easy to find and solves the problem)? Or, even with such a definition, does partition dependence still cause problems? Are these problems just about small-world/grand-world stuff, or are they about other partition-related issues as well?
I can't seem to get my head around it and was hoping some concrete answers to my questions would make it click. Is anyone able to help?