The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
I define the following structure: if you take action a, all logically possible consequences will follow, i.e. all computable sensory I/O functions, generated by all possible computable changes in the objective physical universe. This holds for all a. This is facilitated by the universe creating infinitely many copies of you every time you take an action, and there being literally no fact of the matter about which one is you.
Now if you have already extended your preferences over all possible mathematical structures, you presumably have a preferred action in this case. But the preferred action is really rather unrelated to your life before you made this unsettling discovery. Beings that had different evolved desires (such as seeking status versus maximizing offspring) wouldn't produce systematically different preferences; they'd essentially have to choose at random.
If Tegmark Level 4 is, in some sense, "true", this hypothetical example is not really so hypothetical - it is very similar to the situation that we are in, with the caveat that you can argue about weightings/priors over mathematical structures, so some consequences get a lower weighting than others, given the prior you chose.
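The weighting idea above can be sketched concretely. Here is a minimal toy model (all names and numbers are hypothetical, not from the original discussion), assuming a Solomonoff-style prior in which a structure described by K bits gets weight 2^-K, so consequences in simpler structures dominate the expected utility of an action:

```python
def prior_weight(description_length_bits):
    """Simpler structures (shorter descriptions) get exponentially more weight."""
    return 2.0 ** -description_length_bits

def expected_utility(action_outcomes):
    """action_outcomes: list of (description_length_bits, utility) pairs,
    one pair per mathematical structure in which the action's consequence occurs.
    Returns the prior-weighted average utility."""
    total_weight = sum(prior_weight(k) for k, _ in action_outcomes)
    return sum(prior_weight(k) * u for k, u in action_outcomes) / total_weight

# Two toy actions, each with consequences in three hypothetical structures
# of complexity 1, 5, and 10 bits:
action_a = [(1, 10.0), (5, -100.0), (10, 0.0)]  # great in the simple world, awful in a rarer one
action_b = [(1, 2.0), (5, 2.0), (10, 2.0)]      # mediocre everywhere

best = max([action_a, action_b], key=expected_utility)
```

The point of the toy model is only that *some* choice of prior breaks the symmetry: without a weighting, every action leads to every consequence and expected utility is undefined, which is exactly the problem the parent comment raises.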
My intuition tells me that Level 4 is a mistake, and that there is such a thing as *the* consequence of my actions. However, mere MW quantum mechanics casts doubt on the idea of anticipated subjective experience, so I am suspicious of my anti-multiverse intuition. Perhaps what we need is the equivalent of a theory of Born probabilities for Tegmark Level 4 - something in the region of what Nick Bostrom tried to do in his book on anthropic reasoning (though it looks like Nick simply added more arbitrariness into the mix in the form of reference classes).
I disagree on the first part, and agree on the second part.
Yes, and that's enough for rational decision making. I'm not really sure why you're not seeing that...