The subject has already been raised in this thread, but in a clumsy fashion. So here is a fresh new thread, where we can discuss, calmly and objectively, the pros and cons of the "Oxford" version of the Many Worlds interpretation of quantum mechanics.
This version of MWI is distinguished by two propositions. First, there is no definite number of "worlds" or "branches". They have a fuzzy, vague, approximate, definition-dependent existence. Second, the probability law of quantum mechanics (the Born rule) is to be obtained, not by counting the frequencies of events in the multiverse, but by an analysis of rational behavior in the multiverse. Normally, a prescription for rational behavior is obtained by maximizing expected utility, a quantity calculated by summing "probability x utility" over the possible outcomes of an action. In the Oxford school's "decision-theoretic" derivation of the Born rule, we somehow start with a ranking of actions that is deemed rational, then we "divide out" by the utilities, and obtain probabilities that were implicit in the original ranking.
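To make the "dividing out" idea concrete, here is a toy sketch of my own (the numbers and function names are illustrative, not part of any of the cited arguments). The forward direction is ordinary expected-utility calculation; the reverse direction recovers an implicit probability from a rationality judgment plus the utilities.

```python
def expected_utility(probs, utils):
    """Standard direction: probabilities + utilities -> a ranking value."""
    assert abs(sum(probs) - 1.0) < 1e-9
    return sum(p * u for p, u in zip(probs, utils))

# Forward direction: outcome probabilities 1/4 and 3/4, rewards worth 1 and 0.
eu = expected_utility([0.25, 0.75], [1.0, 0.0])  # -> 0.25

# Reverse direction ("dividing out" the utilities): suppose a rational agent
# is indifferent between this gamble and a sure payment c. Then c equals the
# expected utility, and with utilities 1 and 0 the probability implicit in
# her indifference point is c itself.
implicit_prob = eu / (1.0 - 0.0)
```

The decision-theoretic program runs only the second direction: the ranking (here, the indifference point) is taken as primitive, and the probabilities fall out.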
I reject both propositions. "Worlds" or "branches" can't be vague if they are to correspond to observed reality: vagueness arises when an object depends on observer-relative definitions, and the local portion of reality does not owe its existence to how we define anything. And the upside-down decision-theoretic derivation, if it ever works, must implicitly smuggle in the premises of probability theory in order to obtain its original rationality ranking.
Some references:
"Decoherence and Ontology: or, How I Learned to Stop Worrying and Love FAPP" by David Wallace. In this paper, Wallace says, for example, that the question "how many branches are there?" "does not... make sense", that the question "how many branches are there in which it is sunny?" is "a question which has no answer", "it is a non-question to ask how many [worlds]", etc.
"Quantum Probability from Decision Theory?" by Barnum et al. This is a rebuttal of the original argument (due to David Deutsch) that the Born rule can be justified by an analysis of multiverse rationality.
The latest attempt at a decision-theoretic account of QM probabilities is David Wallace's, here: http://arxiv.org/PS_cache/arxiv/pdf/0906/0906.2718v1.pdf . I mention it because this proof is not susceptible to the criticisms that Barnum et al. raise against Deutsch's proof.
If we're going to be talking about the approach, it's worth getting some sense of the argument. Below, I've reproduced a very non-technical summary. I describe the decision problem, the assumptions (which Wallace thinks are intuitive constraints on rational decision-making, although I'm not sure I agree), and the representation theorem itself. The assumptions seem fairly weak, yet the theorem is striking. To get the gist, skip ahead to the representation theorem below. If it seems that it couldn't possibly be true, look at the assumptions and think about which one you want to reject, because the theorem does follow from (appropriately formalized versions of) these assumptions.
The Decision Problem
The agent is choosing between different preparation-measurement-payment (or p-m-p) sequences (Wallace calls them acts, but this terminology is counter-intuitive, so I avoid it). In each sequence, some quantum state is prepared, then it is measured in some basis, and then rewards are doled out to the agent's future selves on the basis of the measurement outcomes in their respective branches. An example sequence: a state is prepared in the superposition 1/2 |up> + sqrt(3/4) |down>, a measurement is made in the up-down basis, then the future self of the agent in the |up> branch is given a reward and the future self in the |down> branch is not.
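As a quick sanity check on the example (my own illustration, not part of Wallace's setup), the amplitudes 1/2 and sqrt(3/4) are normalized, and the Born rule would assign weights 1/4 and 3/4 to the two branches:

```python
# The example state: 1/2 |up> + sqrt(3/4) |down>
amp_up, amp_down = 0.5, (3 / 4) ** 0.5

p_up = amp_up ** 2      # Born weight of the |up> branch: 1/4
p_down = amp_down ** 2  # Born weight of the |down> branch: 3/4

assert abs(p_up + p_down - 1.0) < 1e-12  # the state is normalized
```

These are the weights the decision-theoretic argument is supposed to recover without assuming them.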
The agent has a preference ordering over all possible p-m-p sequences. Of course, in any particular decision problem, only some of the possible sequences will be actual options. For example, if the agent is betting on outcomes of a pre-prepared and pre-measured state, then she is choosing between sequences that only differ in the "payment" part of "preparation-measurement-payment".
The Assumptions
One can always set up a p-m-p sequence where a state is prepared, measured, and then the agent is rewarded regardless of the measurement outcome.
Arbitrary quantum superpositions can be prepared.
After a p-m-p sequence is completed, any record of the measurement outcomes can always be erased. Two different p-m-p sequences can lead to the same macroscopic state after such an erasure, provided they differ only in the measurement outcomes and not in the quantum amplitudes and payments associated with those outcomes.
For a given initial macrostate, the agent's preferences define a total ordering over the set of possible p-m-p sequences.
The agent's preferences are diachronically consistent. Let's say a sequence U takes place between times t0 and t1. At t1, there will be branches corresponding to the different outcomes associated with U. Xi and Yi are different p-m-p sequences that could be performed at t1 in the i'th branch. If the agent in the i'th branch prefers Xi over Yi, then the pre-branching agent at time t0 prefers U followed by Xi over U followed by Yi.
The agent cares only about the macroscopic state of the world. She doesn't prefer one microscopic state over another if they correspond to the same macroscopic state.
The agent doesn't care about branching per se. She doesn't consider the mere multiplication of future selves in distinct macroscopic states valuable in itself.
In the Everettian framework, p-m-p sequences are implemented by unitary transformations. If there are two different unitary transformations that have the same effect on the agent's branch (but differ in their effect on other branches), the agent is indifferent between them.
The Representation Theorem
The preference ordering over sequences induces a preference ordering over rewards, because for any two rewards R1 and R2, there are p-m-p sequences which lead to R1 for all branches and R2 for all branches. If any sequence of the first kind is preferred over a sequence of the second kind, then reward R1 is preferred over reward R2.
Given a preference ordering over the rewards, there is a unique (up to affine transformations) utility function over the rewards. If the agent is to use standard decision theory to reason about which p-m-p sequences to choose in order to maximize her expectation of reward utility, and we want the expected utilities of the p-m-p sequences to reflect the agent's given preferences over those sequences, then the probability distribution over outcomes we use when calculating the expected utility of p-m-p sequences must be given by the Born rule.
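To illustrate what the theorem's conclusion amounts to, here is a toy sketch of my own (not Wallace's formalism): if branch weights are the squared amplitudes, then for the example state above, an agent who maximizes Born-weighted expected utility must prefer a bet on |down> over a bet on |up>. The theorem says this is the only probability assignment compatible with preferences satisfying the assumptions.

```python
def born_expected_utility(amplitudes, utilities):
    """Expected utility with branch weights |amplitude|^2 (the Born rule)."""
    weights = [abs(a) ** 2 for a in amplitudes]
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * u for w, u in zip(weights, utilities))

amps = [0.5, (3 / 4) ** 0.5]  # the state from the example above

# Sequence A rewards the |up> self; sequence B rewards the |down> self.
eu_A = born_expected_utility(amps, [1.0, 0.0])  # 1/4
eu_B = born_expected_utility(amps, [0.0, 1.0])  # 3/4

# A rational agent's ordering must agree with these numbers, so B over A.
assert eu_B > eu_A
```

Note the direction of the argument: the preference ordering is the input, and the weights 1/4 and 3/4 are supposed to be forced out of it, not assumed as above.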
Before I try to parse this argument: do you really think this line of reasoning can explain why there are dark regions in the double-slit experiment? Are you really going to explain that in terms of the utility function of a perceiving agent?!