Previous: Know What You Want
Ah wahned yah, ah wahned yah about the titles. </some enchanter named Tim>
(Oh, a note: the idea here is to establish general rules for what sorts of decisions one in principle ought to make, and how one in principle ought to know stuff, given that one wants to avoid Being Stupid (in the sense described in earlier posts). So I'm throwing some general and contrived hypothetical situations at the system to try to break it, to see what properties it would have to have in order not to automatically fail.)
Okay, so assuming you buy the argument in favor of ranked preferences, let's see what else we can learn by considering sources of, ahem, randomness:
Suppose that, whether via indexical uncertainty, or because it turns out there really is some nondeterminism in the universe, or for some other reason, there's a source of bits such that the only thing you're able to determine about it is that the ratio of 1s it puts out to total bits is p. You're not able to determine anything else about the pattern of bits; they seem unconnected to each other. In other words, you've got some source of uncertainty that leaves you knowing only that some outcomes happen more often than others, and potentially something about the precise relative rates of those outcomes.
I'm trying here to avoid actually assuming epistemic probabilities. (If I've slipped in an invisible assumption of them that I didn't notice, let me know.) Instead I'm trying to construct a specific situation that can be accepted as at least validly describable by something resembling probabilities (propensities or frequencies. Frequencies? Aieeee! Burn the heretic, or at least flame them without mercy! :)) So, for whatever reason, suppose the universe or your opponent or whatever has access to such a source of bits. Let's consider some of the implications of this.
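(Purely as a loose illustration, not part of the argument: here's a minimal sketch of such a source, faked with a pseudorandom generator. The only property we lean on is the long-run ratio of 1s.)

```python
import random

def bit_source(p):
    """A stand-in for the mystery source: the only property we rely on
    is that the long-run ratio of 1s to total bits comes out to p."""
    while True:
        yield 1 if random.random() < p else 0

# Sanity check: the observed ratio of 1s drifts toward p.
src = bit_source(0.3)
bits = [next(src) for _ in range(100_000)]
print(sum(bits) / len(bits))  # roughly 0.3
```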
For instance, suppose you prefer A > B.
Now, suppose you are somehow presented with the following choice: choose B, or choose a situation in which, if the source outputs a 1 at a specific instance, A will occur; otherwise, B occurs. We'll call this sort of situation a p*A + (1-p)*B lottery, or simply p*A + (1-p)*B.
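(In terms of the toy bit_source sketch above, "choosing the lottery" amounts to something like this; the specific values are arbitrary.)

```python
def run_lottery(a, b, source):
    """One instance of the p*A + (1-p)*B lottery: if the source emits
    a 1 (which it does a fraction p of the time), A occurs; else B."""
    return a if next(source) == 1 else b

# A single draw of 0.7*A + 0.3*B, reusing bit_source from the sketch above.
print(run_lottery("A", "B", bit_source(0.7)))
```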
So, which should you prefer? B or the above lottery? (Assume there's no cost other than declaring your choice, or just wanting the choice. It's not a "pay for a lottery ticket" scenario yet. Just an "assuming you simply choose one or the other... which do you choose?" scenario.)
Consider our holy law of "Don't Be Stupid", specifically in the manifestation of "Don't automatically lose when you could potentially do better without risking doing worse." It would seem the correct answer would be "choose the lottery, dangit!" The only possible outcomes of it are A or B. So it can't possibly be worse than B, since you actually prefer A to B. Further, choosing B is accepting an automatic loss compared to choosing the above lottery, which at least gives you a chance to do better. (Obviously we assume here that p is nonzero. In the degenerate case of p = 0, you'd presumably be indifferent between the lottery and B since, well... choosing the lottery actually is the same thing as choosing B.)
By an exactly analogous argument, you should prefer A to the lottery. Specifically, A is an automatic WIN compared to the lottery, which doesn't give you any hope of doing better than A, but does give you a chance of doing worse.
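(To make the "can't do worse / can't do better" structure concrete, here's a toy check reusing the sketches above; the numbers are arbitrary.)

```python
# Every possible outcome of the lottery is either A or B, so with
# 0 < p < 1 the lottery can never leave you worse off than B, and can
# never leave you better off than A.
rank = {"A": 2, "B": 1}                      # A is strictly preferred to B
src = bit_source(0.4)
draws = [run_lottery("A", "B", src) for _ in range(10_000)]

assert all(rank[x] >= rank["B"] for x in draws)   # never worse than B
assert all(rank[x] <= rank["A"] for x in draws)   # never better than A
```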
Example: Imagine you're dying horribly of some really nasty disease that you know isn't going to heal on its own, and you're offered a possible medication for it. Assume there's no other medication available, and assume that somehow you know as a fact that none of the ways it could fail could possibly be worse than the disease itself. Further, assume that you know as a fact that no one else on the planet has this disease, and that the medication is available to you for free and has already been prepared. (These last few assumptions are to remove any possible considerations like altruistically giving up your dose of the med to save someone else, or similar.)
Do you choose to take the medication or not? Well, by assumption, the outcome can't possibly be worse than what the disease will do to you, and there's the possibility that it will cure you. Further, there're no other options available that may potentially be better than taking this med. (Oh, assume for whatever reason that cryo is unavailable, so taking an ambulance ride to the future in hope of a better treatment is also not an option. Basically, assume your choices are "die really really horribly" or "some chance of that, and some chance of making a full recovery; no chance of partially surviving in a state worse than death.")
So the obviously obvious choice is "choose to take the medication."
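(If you like, this is just the earlier lottery comparison wearing a hospital gown; a tiny sketch, with the chance of a cure being unknown and the 0.1 below a made-up placeholder just so it runs.)

```python
# Refusing the med is the sure outcome "die horribly"; taking it is the
# lottery p*"full recovery" + (1-p)*"die horribly", for some unknown
# nonzero p.
sure_thing = "die horribly"
med_lottery = lambda: run_lottery("full recovery", "die horribly",
                                  bit_source(0.1))

# Since "full recovery" is preferred to "die horribly", the earlier
# argument says: prefer the lottery to the sure thing, i.e. take the med.
print(med_lottery())
```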
Next time: We actually do a bit more math based on what we've got so far and begin to actually construct utilities.
This might be a good argument for the general preferences shown by the Allais paradox. If you strictly prefer 2B to 2A, you might nonetheless have a reason to prefer 1A to 1B - you could leverage your certainty to perform actions contingent on actually having $24,000. This might only work if the payoff is not immediate - you can take a loan based on the $24,000 you'll get in a month, but probably not on the 34% chance of $24,000.
Fine, there could be a good reason to strictly prefer 1A to 1B, but then if you do, how do you justify preferring 2B to 2A?
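(For concreteness, taking the usual figures from the Allais post being discussed as an assumption here: 1A is $24,000 for sure, 1B a 33/34 chance of $27,000, 2A a 34% chance of $24,000, 2B a 33% chance of $27,000. A quick sketch of why that combination is hard to justify under any single expected utility: the second pair is just the first pair with all the probabilities scaled by 0.34.)

```python
import math

def eu(p, prize, u):
    """Expected utility of 'prize with probability p, else nothing'."""
    return p * u(prize) + (1 - p) * u(0)

u = lambda x: x ** 0.5       # any fixed utility function; this one is arbitrary

one_a, one_b = eu(1.0, 24_000, u), eu(33 / 34, 27_000, u)
two_a, two_b = eu(0.34, 24_000, u), eu(0.33, 27_000, u)

# Each second-pair gamble is the matching first-pair gamble scaled by
# 0.34, plus the same 0.66 * u(0) term on both sides...
assert math.isclose(two_a, 0.34 * one_a + 0.66 * u(0))
assert math.isclose(two_b, 0.34 * one_b + 0.66 * u(0))
# ...so EU(1A) > EU(1B) forces EU(2A) > EU(2B): preferring 1A to 1B and
# 2B to 2A can't both come from maximizing one expected utility.
```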