How Not to be Stupid: Brewing a Nice Cup of Utilitea

-1 Psy-Kosh 09 May 2009 08:14AM

Previously: "objective probabilities", but more importantly knowing what you want

Slight change of plans: the only reason I brought up the "objective" probabilities as early as I did was to help establish the idea of utilities. But with all the holes that seem to need to be patched to get from one to the other (continuity, etc), I decided to take a different route and define utilities more directly. So, for now, forget about "objective probabilities" and frequencies for a bit. I will get back to them a bit later on, but for now am leaving them aside.

So, we've got preference rankings, but not much of a sense of scale yet. We don't have much way of asking "how _much_ do you prefer this to that?" That's what I'm going to deal with in this post. There will be some slightly roundabout abstract bits here, but they'll let me establish utilities. And once I have that, more or less all I have to do is use utilities as a currency to apply dutch book arguments to. (That will more or less be the shape of the rest of the sequence.) The basic idea here is to work out a way of comparing the magnitudes of the differences of preferences. i.e., how much you would have preferred some A2 to A1 vs. how much you would have preferred some B2 to B1. But it seems difficult to define, no? "How much would you have wanted to switch reality to A2, if it was in state A1, vs. how much would you have wanted to switch reality to B2, given that it was in B1?"
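To preview where this is headed: once some utility scale exists, the comparison gestured at above becomes simple arithmetic on differences. A minimal sketch, where the numeric utility assignments are purely illustrative assumptions (nothing in the sequence has assigned actual numbers yet):

```python
# Hypothetical utility assignments, purely to illustrate what
# "how much more" would mean once a scale exists:
u = {"A1": 1.0, "A2": 4.0, "B1": 2.0, "B2": 3.0}

def preference_gap(worse, better):
    """Magnitude of the preference for `better` over `worse`,
    measured as a difference of (assumed) utilities."""
    return u[better] - u[worse]

# Under these made-up numbers, the switch from A1 to A2 is wanted
# three times as strongly as the switch from B1 to B2:
ratio = preference_gap("A1", "A2") / preference_gap("B1", "B2")
```

The point is only that differences (not raw utility values) are what answer the "how much would you have wanted to switch" question.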


How Not to be Stupid: Adorable Maybes

-2 Psy-Kosh 29 April 2009 07:15PM

Previous: Know What You Want

Ah wahned yah, ah wahned yah about the titles. </some enchanter named Tim>

(Oh, a note: the idea here is to establish general rules for what sorts of decisions one in principle ought to make, and how one in principle ought to know stuff, given that one wants to avoid Being Stupid. (in the sense described in earlier posts) So I'm giving some general and contrived hypothetical situations to throw at the system to try to break it, to see what properties it would have to have to not automatically fail.)

Okay, so assuming you buy the argument in favor of ranked preferences, let's see what else we can learn by considering sources of, ahem, randomness:

Suppose that either via indexical uncertainty, or it turns out there really is some nondeterminism in the universe, or there's some source of bits such that the only thing you're able to determine about it is that the ratio of 1s it puts out to total bits is p. You're not able to determine anything else about the pattern of bits, they seem unconnected to each other. In other words, you've got some source of uncertainty that leaves you only knowing that some outcomes happen more often than others, and potentially you know something about the precise relative rates of those outcomes.

I'm trying here to avoid actually assuming epistemic probabilities. (If I've slipped in such an assumption without noticing, let me know.) Instead I'm trying to construct a situation that can be accepted as at least validly describable by something resembling probabilities (propensity or frequencies. (Frequencies? Aieeee! Burn the heretic, or at least flame them without mercy! :))) So, for whatever reason, suppose the universe or your opponent or whatever has access to such a source of bits. Let's consider some of the implications of this.

For instance, suppose you prefer A > B.

Now, suppose you are somehow presented with the following choice: Choose B, or choose a situation in which if, at a specific instance, the source outputs a 1, A will occur. Otherwise, B occurs. We'll call this sort of situation a p*A + (1-p)*B lottery, or simply p*A + (1-p)*B.

So, which should you prefer? B or the above lottery? (Assume there's no cost other than declaring your choice. Or just wanting the choice. It's not a "pay for a lottery ticket" scenario yet. Just an "assuming you simply choose one or the other... which do you choose?")

Consider our holy law of "Don't Be Stupid", specifically in the manifestation of "Don't automatically lose when you could potentially do better without risking doing worse." It would seem the correct answer would be "choose the lottery, dangit!" The only possible outcomes of it are A or B. So it can't possibly be worse than B, since you actually prefer A. Further, choosing B is accepting an automatic loss compared to choosing the above lottery, which at least gives you a chance to do better. (Obviously we assume here that p is nonzero. In the degenerate case of p = 0, you'd presumably be indifferent between the lottery and B since, well... choosing that actually is the same thing as choosing B.)

By an exactly analogous argument, you should prefer A to the lottery. Specifically, A is an automatic WIN compared to the lottery, which doesn't give you any hope of doing better than A, but does give you a chance of doing worse.
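The dominance argument in the last few paragraphs can be checked mechanically. A small sketch, where the numeric payoffs standing in for "you prefer A to B" are purely illustrative assumptions (the argument itself needs only a preference ranking, not numbers):

```python
import random

def lottery(p, a, b, rng=random.Random(0)):
    """Sample the lottery p*A + (1-p)*B: with probability p the
    outcome is A, otherwise B. Seeded for reproducibility."""
    return a if rng.random() < p else b

# Hypothetical payoffs standing in for "you prefer A to B":
A, B, p = 10.0, 3.0, 0.4

outcomes = [lottery(p, A, B) for _ in range(10_000)]

# Every possible outcome of the lottery is at least as good as B,
# so the lottery can't do worse than choosing B outright...
assert all(x >= B for x in outcomes)
# ...and never does better than A, so A dominates the lottery:
assert all(x <= A for x in outcomes)
```

Nothing here depends on the specific values 10.0 and 3.0, only on the ordering A > B.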

Example: Imagine you're dying horribly of some really nasty disease that you know isn't going to heal on its own, and you're offered a possible medication for it. Assume there's no other medication available, and assume that somehow you know as a fact that none of the ways it could fail could possibly be worse than the disease itself. Further, assume that you know as a fact no one else on the planet has this disease, and the medication is available for free to you and has already been prepared. (These last few assumptions are to remove any possible considerations like altruistically giving up your dose of the med to save another or similar.)

Do you choose to take the medication or no? Well, by assumption, the outcome can't possibly be worse than what the disease will do to you, and there's the possibility that it will cure you. Further, there're no other options available that may potentially be better than taking this med. (Oh, and assume for whatever reason that cryo is not an option either, so taking an ambulance ride to the future in hope of a better treatment is out. Basically, assume your choices are "die really really horribly" or "some chance of that, and some chance of making a full recovery," with no chance of partially surviving in a state worse than death.)

So the obviously obvious choice is "choose to take the medication."

Next time: We actually do a bit more math based on what we've got so far and begin to actually construct utilities.

How Not to be Stupid: Know What You Want, What You Really Really Want

0 Psy-Kosh 28 April 2009 01:11AM

Previously: Starting Up

So, you want to be rational, huh? You want to be Less Wrong than you were before, hrmmm? First you must pass through the posting titles of a thousand groans. Muhahahahaha!

Let's start with the idea of preference rankings.  If you prefer A to B, well, given the choice between A and B, you'd choose A.

For example, if you face a choice between a random child being tortured to death vs them leading a happy and healthy life, all else being equal and the choice costing you nothing, which do you choose?

This isn't a trick question. If you're a perfectly ordinary human, you presumably prefer the latter to the former.

Therefore you choose it. That's what it means to prefer something: if you prefer A over B, you'd give up situation B to gain situation A. You want situation A more than you want situation B.

Now, if there're many possibilities, you may ask... "But, what if I prefer B to A, C to B, and A to C?"

The answer, of course, is that you're a bit confused about what you actually prefer. I mean, all that ranking would do is just keep you switching between those, looping around.


How Not to be Stupid: Starting Up

8 Psy-Kosh 28 April 2009 12:23AM

First, don't stand up. ;)

Okay. So what I'm hoping to do in this mini sequence is to introduce a basic argument for Bayesian Decision Theory and epistemic probabilities. I'm going to be basing it on dutch book arguments and Dr. Omohundro's vulnerability-based argument, however with various details filled in because, well... I myself had to sit and think about those things, so maybe it would be useful to others too. For that matter, actually writing this up will hopefully sort out my thoughts on this.

Also, I want to try to generalize it a bit to remove the explicit dependency of the arguments on resources. (Though I may include arguments from that to illustrate some of the ideas.)

Anyways, the spirit of the idea is "don't be stupid." "Don't AUTOMATICALLY lose when there's a better alternative that doesn't risk you losing even worse."

More to the point, repeated application of that idea is going to let us build up the mathematics of decision theory. My plan right now is for each of the posts in this sequence to be relatively short, discussing and deriving one principle (or a couple of related principles) of decision theory and Bayesian probability at a time from the above. The math should be pretty simple, with the very worst being potentially a tiny bit of linear algebra. I expect the nastiest bit of math will be one instance of matrix reduction down the line. Everything else ought to be rather straightforward, showing the mathematics of decision theory to be a matter of, as Mr. Smith would say, "inevitability."

Consider this whole sequence a work in progress. If anyone thinks any particular bits of it could be rewritten more clearly, please speak up! Or at least type up. (But of course, don't stand up. ;))