SilasBarta comments on The Contrarian Status Catch-22 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I haven't seen you take into account the relative costs of error of the two beliefs.
A few months ago, I asked:
I think that someone who believes in many-worlds will keep drawing cards until they die. Someone who believes in one world might not. An expected-utility maximizer would; but I'm uncomfortable about playing the lottery with the universe if it's the only one we've got.
If a rational, ethical one-worlds believer doesn't continue drawing cards as long as they can, in a situation where the many-worlds believer would, then we have an asymmetry in the cost of error. Building an FAI that believes in one world, when many worlds is true, causes (possibly very great) inefficiency and repression to delay the destruction of all life. Building an FAI that believes in many worlds, when one world is true, results in annihilating all life in short order. This large asymmetry in costs is enough to compensate for a large asymmetry in probabilities.
(My gut instinct is that there is no asymmetry, and that having a lot of worlds shouldn't make you any less careful with any of them. But that's just my gut instinct.)
I also think that you can't, at present, both be rational about updating in response to the beliefs of others, and dismiss one-world theory as dead.
You have to include the presumption that there is a quantum variable that conditions the skull card, and there is a question about whether a non-quantum event strongly conditioned on a quantum event counts for quantum immortality ... but assume Omega can do this.
The payoff, then, looks like it favors going to an arbitrarily high number of draws, given that quantum immortality is true. Honestly, my gut response is that I would go to either 3 draws, 9 draws, or 13 draws, depending on how risk-averse I felt and how much utility I expected as my baseline (a twice-as-high utility before doubling lets me go one doubling less).
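The arithmetic behind that tradeoff can be sketched. A minimal model, on my own assumptions rather than anything specified in the thread: each draw kills you with probability p and otherwise doubles your utility, and death is worth zero.

```python
# Sketch of the expected-utility arithmetic behind "keep drawing cards".
# Assumed model (not from the comment): each draw kills with probability
# p_death, otherwise doubles your utility; death yields utility 0.

def expected_utility(n_draws, p_death, base_utility=1.0):
    """Expected utility of committing in advance to n_draws draws."""
    p_survive_all = (1.0 - p_death) ** n_draws
    return p_survive_all * (2.0 ** n_draws) * base_utility

# With a 50% death chance per draw, every number of draws has the same
# expected utility, so a pure expected-utility maximizer is indifferent:
print(expected_utility(0, 0.5))   # 1.0
print(expected_utility(10, 0.5))  # 1.0

# With any death chance below 50%, expected utility grows without bound
# in the number of draws -- the maximizer keeps drawing until it dies:
print(expected_utility(10, 0.4))  # (0.6 * 2)^10 = 1.2^10 ≈ 6.19
```

This is the sense in which "an expected-utility maximizer would" keep drawing: the doubling payoff dominates the survival penalty whenever the per-draw death chance is under one half, regardless of how many worlds there are.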
I think this says that my understanding of utility falls prey to diminishing returns when it shouldn't (partially a problem with utility itself), and that I don't really believe in quantum immortality, because I am choosing a response that would be optimal in a scenario without quantum immortality.
But in any reasonable situation where I encounter this scenario, my response is accurate: it takes into account my uncertainty about immortality (which requires a few more assumptions than just the MWI), and it accounts for me updating my beliefs about quantum immortality based on evidence from the bet itself. That any agent, even an arbitrarily powerful one, is willing to bet an arbitrarily large number of doublings of my utility against quantum immortality is phenomenal evidence against it. Phenomenal. Utility is so complicated, and doubling just gets insane so quickly.
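One way to see how quickly doubling "gets insane" is to pick a concrete utility function. This is my own illustration, not the author's model: if utility is logarithmic in wealth, U = log(w), then doubling utility means squaring wealth, so repeated doublings demand hyper-exponentially growing wealth.

```python
# Illustration (an assumption, not the author's model): with log utility
# U = log(w), doubling U means squaring w, so n doublings of utility
# require tower-like growth in the underlying wealth.
import math

def wealth_for_doublings(start_wealth, n_doublings):
    """Wealth needed so log-utility is 2**n_doublings times the start."""
    w = start_wealth
    for _ in range(n_doublings):
        w = w ** 2  # doubling log(w) squares w
    return w

print(wealth_for_doublings(10, 1))  # 100
print(wealth_for_doublings(10, 3))  # 100000000 (10^8)
# By 9 doublings the required wealth is around 10^512 -- far beyond
# anything physically realizable:
print(math.log10(wealth_for_doublings(10, 9)))  # ~512
```

Under this toy model, 13 doublings of utility would require wealth with an exponent in the thousands, which is one concrete reason an offer of "arbitrarily many doublings" is such strong evidence that something other than a straight payoff is going on.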
Neither the problem itself nor this response need make any mention of quantum immortality. Given an understanding of many-worlds, 'belief in quantum immortality' is just a statement about preferences given a certain type of scenario. There isn't some kind of special phenomenon involved, just a matter of choosing what sort of preferences you have over future branches.
No, no, no! Apart from being completely capricious with essentially arbitrary motivations, they aren't betting against quantum immortality. They are betting a chance of killing someone against a chance of making ridiculous changes to the universe. QI just doesn't play a part in their payoffs at all.