In this post, I'd like to examine whether Updateless Decision Theory can provide any insights into anthropic reasoning. Puzzles and paradoxes in anthropic reasoning are what prompted me to consider UDT in the first place, and this post may be of interest to those who do not consider Counterfactual Mugging to provide sufficient motivation for UDT.
The Presumptuous Philosopher is a thought experiment that Nick Bostrom used to argue against the Self-Indication Assumption. (SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.)
It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry considerations seem to be roughly indifferent between these two theories. The physicists are planning on carrying out a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: "Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1 (whereupon the philosopher runs the God’s Coin Toss thought experiment and explains Model 3)!"
One suspects the Nobel Prize committee to be a bit hesitant about awarding the presumptuous philosopher the big one for this contribution.
To make this example clearer as a decision problem, let's say that the consequence of carrying out the "simple experiment" is a very small cost (one dollar), and the consequence of just assuming T2 is a disaster down the line if T1 turns out to be true (we create a power plant based on T2, and it blows up and kills someone).
In UDT, no Bayesian updating occurs, and in particular, you don't update on the fact that you exist. Suppose that in CDT you have a prior P(T1) = P(T2) = .5 before taking into account that you exist; translated into UDT, this becomes Σ P(Vi) = Σ P(Wi) = .5, where the Vi and Wi are world programs in which T1 and T2 hold, respectively. Anthropic reasoning occurs as a result of considering the consequences of your decisions, which are a trillion times greater in T2 worlds than in T1 worlds, since your decision algorithm S is called about a trillion times more often in the Wi programs than in the Vi programs.
Perhaps by now you've noticed the parallel between this decision problem and Eliezer's Torture vs. Dust Specks. The very small cost of the simple physics experiment is akin to getting a dust speck in the eye, and the disaster of wrongly assuming T2 is akin to being tortured. By not doing the experiment, we can save one dollar for a trillion individuals in exchange for every individual we kill.
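To make the arithmetic concrete, here is a minimal sketch of the UDT expected-utility comparison. The observer counts come from the thought experiment; the assumption of one call of S per observer, the per-call framing of both the $1 cost and the disaster, and the dollar value placed on a death (DEATH_COST) are illustrative placeholders, not part of the original problem.

```python
# A rough sketch of the UDT calculation described above, under illustrative assumptions:
# equal prior weight on T1-worlds and T2-worlds (no update on existence), one call of
# the decision algorithm S per observer, and a hypothetical dollar value for one death.

N_T1 = 10**24        # observers (calls of S) if T1 holds: a trillion trillion
N_T2 = 10**36        # observers (calls of S) if T2 holds: a trillion trillion trillion
PRIOR = 0.5          # prior weight on each class of world programs
DEATH_COST = 10**7   # placeholder dollar value of one death

def expected_utility(do_experiment: bool) -> float:
    if do_experiment:
        # every call of S pays the $1 experiment cost, in both classes of worlds
        return -(PRIOR * N_T1 * 1 + PRIOR * N_T2 * 1)
    else:
        # no dollar cost anywhere, but a disaster per call of S in T1-worlds
        return -(PRIOR * N_T1 * DEATH_COST)

print(expected_utility(True))   # ~ -5e35: cost of always doing the experiment
print(expected_utility(False))  # ~ -5e30: cost of always assuming T2
```

With these placeholder numbers, skipping the experiment wins, because the trillion-fold larger number of S-calls in T2-worlds dominates the calculation, and UDT reproduces the presumptuous philosopher's conclusion without ever updating on the fact that you exist. Whether that verdict survives depends entirely on how the disaster is valued against the dollars, which is the ethical question taken up next.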
In general, Updateless Decision Theory converts anthropic reasoning problems into ethical problems. I can see three approaches to taking advantage of this:
1. If you have strong epistemic intuitions but weak moral intuitions, then you can adjust your morality to fit your epistemology.
2. If you have strong moral intuitions but weak epistemic intuitions, then you can adjust your epistemology to fit your morality.
3. You might argue that epistemology shouldn't be linked so intimately with morality, and therefore this whole approach is on the wrong track.
Personally, I have vacillated between 1 and 2. I've argued, based on 1, that we should discount the values of individuals by using a complexity-based measure. And I've also argued, based on 2, that perhaps the choice of an epistemic prior is more or less arbitrary (since objective morality seems unlikely to me). So I'm not sure what the right answer is, but this seems to be the right track to me.
You don't "update" on your own mathematical computations either.
The data you construct or collect is about what you are, and by extension what your actions are and thus what their effects are, not about what is possible in the abstract (more precisely: what you could think possible in other situations). That's the trick with mathematical uncertainty: since you can plan for situations that turn out to be impossible, you need to take that planning into account in other situations. This is what you do by factoring the impossible situations into the decision-making: accounting for your own planning for those situations, in situations where you don't know them to be impossible.
I don't get this either, sorry. Can you give an example where "you don't 'update' on your own mathematical computations either" makes sense?
Here's how I see CM-with-math-coin going, in more detail. I think we should ask the question: supposing you think Omega may come in a moment to CM you using the n-th bit of pi, what would you prefer your future self to do, assuming that you can compute the n-th bit of pi, either now or later? If you can compute it now, clearly you'd prefer your future self not to give $100 to Omega if the bit is 0.
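For concreteness, here is a minimal sketch of the payoff structure being discussed, using the usual Counterfactual Mugging stakes ($100 asked for, $10,000 counterfactually rewarded) and a 50/50 credence over the unknown bit of pi as a stand-in for logical uncertainty; these specific numbers are illustrative assumptions, not taken from the comment.

```python
# A sketch of Counterfactual Mugging with a "math coin" (the n-th bit of pi), assuming
# Omega's usual deal: if the bit is 1, Omega pays $10,000 iff your policy would hand
# over $100 when the bit is 0; if the bit is 0, Omega asks you for the $100.

def payoff(policy_pays: bool, bit: int) -> int:
    """Payoff of a fixed policy in the world where the coin came up `bit`."""
    if bit == 1:
        return 10_000 if policy_pays else 0
    return -100 if policy_pays else 0

# Before you have computed the bit: average over both possibilities (50/50 credence).
ev_pay    = 0.5 * payoff(True, 0) + 0.5 * payoff(True, 1)    # 4950.0
ev_no_pay = 0.5 * payoff(False, 0) + 0.5 * payoff(False, 1)  # 0.0
print(ev_pay, ev_no_pay)   # under uncertainty, the paying policy looks better

# After you have computed the bit and found it to be 0, the same table says paying
# just loses $100 -- which is why, if you can already compute the bit now and it is 0,
# you would prefer your future self to follow the non-paying policy.
print(payoff(True, 0), payoff(False, 0))
```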