This is part of a sequence titled "An introduction to decision theory". The previous post was Newcomb's Problem: A problem for Causal Decision Theories
For various reasons I've decided to finish this sequence on a separate blog. This is principally because a number of people felt that this sequence either wasn't up to the Less Wrong standard or was simply covering ground that had already been covered on Less Wrong.
The decision to post it on another blog rather than simply discontinuing it came down to the fact that other people seemed to feel that the sequence had value. Those people can continue reading it at "The Smoking Lesion: A problem for evidential decision theory".
Alternatively, there is a sequence index available: Less Wrong and decision theory: sequence index
I find that the terms "cause" and "causality" can be very misleading in this situation.
As a matter of terminology, I actually agree with you: in lay speech, I see nothing wrong with saying that "One-boxing causes the sealed box to be filled", because this is exactly how we perceive causality in the world.
However, when discussing these problems, theorists nail down their terminology as best they can. In such problems, standard usage restricts causality to cases where an event changes things solely in the future[1], not to cases where an event merely reveals that you are in a situation in which some past event has already happened.
When speaking of decision-theoretic problems, it is important to stick to this definition of causality, counter-intuitive though it may be.
Another example of the distinction is in Drescher's Good and Real. Consider this: if you raise your hand (in a deterministic universe), you are setting the universe's state 1 billion years ago to be such that a chain of events will unfold in a way that, 1 billion years later, you will raise your hand. In a (lay) sense, raising your hand "caused" that state.
However, because that state is in the past, it violates decision-theoretic usage to say that you caused that state; instead, you should simply say that either:
a) there is an acausal relationship between your choice to raise your hand and that state of the universe, or
b) by choosing to raise your hand, you have learned about a past state of the universe. (Just as deciding whether to exit in the Absent-Minded Driver problem tells you something about which exit you are at.)
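To make option (b) concrete, here is a toy sketch of my own (not from Drescher; the two-state "universe" and the function names are invented purely for illustration): a deterministic toy universe in which the initial state fully determines whether you raise your hand, so observing the action only tells you which initial state obtained.

```python
# Toy illustration of point (b): in a deterministic toy universe, observing
# your own action reveals which past state you must have been in.
# No backwards causation is involved; this is learning, not causing.

def evolve(initial_state: int) -> str:
    """Deterministically map the universe's state 1 billion years ago
    to the action you end up taking today."""
    return "raise hand" if initial_state == 1 else "keep hand down"

def infer_past_state(observed_action: str) -> int:
    """Given the action you observe yourself taking, recover the only
    initial state consistent with it."""
    return 1 if observed_action == "raise hand" else 0

for past_state in (0, 1):
    action = evolve(past_state)
    assert infer_past_state(action) == past_state
    print(f"past state {past_state} -> you {action}; "
          f"observing '{action}' reveals past state {infer_past_state(action)}")
```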
[1] or, in timeless formalisms, where the cause screens off that which it causes.
I think you've misunderstood me. "What you will choose" is a fact that exists before Omega fills the boxes.
This fact determines how the boxes are filled.
"What you will choose" (some people seem to refer to this, or something similar, as your "disposition", but I find my terminology more immediately apparent) causes the future event "how the boxes are filled"