
Comment author: bill 05 January 2011 01:31:30AM 16 points [-]

From Spetzler and Stael von Holstein (1975), there is a variation of Bet On It that doesn't require risk neutrality.

Say we are going to flip a thumbtack, and it can land heads (so you can see the head of the tack), or tails (so that the point sticks up like a tail). If we want to assess your probability of heads, we can construct two deals.

Deal 1: You win $10,000 if we flip a thumbtack and it comes up heads ($0 otherwise; you won't lose anything).

Deal 2: You win $10,000 if we spin a roulette-like wheel labeled with the numbers 1, 2, 3, ..., 100, and the wheel comes up between 1 and 50 ($0 otherwise; you won't lose anything).

Which deal would you prefer? If you prefer deal 1, you are assessing a probability of heads greater than 50%; if you prefer deal 2, less than 50%.

Then repeat the question, using a different winning range for deal 2. For example, if you first prefer deal 2, change it to winning on 1-25 instead and see if you still prefer it. Keep adjusting until you are indifferent between the two deals. If you are indifferent when deal 2 wins on 1-37, you have assessed a probability of 37%.

The above describes one procedure used by professional decision analysts; in practice they usually use a physical wheel whose "winning area" is continuously adjustable, rather than discrete numbers as above.
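
For concreteness, here is a minimal sketch of that adjustment loop in Python (the function and callback names are mine, not from the paper): bisect the wheel's winning fraction until the subject can't tell the two deals apart.

```python
def elicit_probability(prefers_event_deal, lo=0.0, hi=1.0, tol=0.005):
    """Bisect on the wheel's winning fraction until indifference.

    prefers_event_deal(p) should return True if the subject prefers
    "win $10,000 if the tack lands heads" to "win $10,000 if a wheel
    with winning probability p comes up in the winning area".
    """
    while hi - lo > tol:
        p = (lo + hi) / 2
        if prefers_event_deal(p):
            lo = p  # heads judged more likely than p: raise the wheel odds
        else:
            hi = p  # heads judged less likely than p: lower the wheel odds
    return (lo + hi) / 2

# A subject who "feels" heads is 37% likely converges to that value:
print(round(elicit_probability(lambda p: p < 0.37), 2))  # 0.37
```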

Comment author: WrongBot 01 July 2010 09:50:22PM 2 points [-]

Do you have any examples of real economic circumstances under which a sane person (someone who isn't solely concerned with maximizing the number of Porsches they own, e.g.) would have a convex utility/money curve?

(If there is a way to phrase this question so that it seems more curious and less confrontational, please assume that I said that instead.)

Comment author: bill 02 July 2010 03:24:25PM 4 points [-]

I read somewhere that the reason we don't see these people is that they all immediately go to Vegas, where they can easily acquire as many positive value deals as they want.

Comment author: BenAlbahari 20 March 2010 01:08:53PM 2 points [-]

Out of curiosity, how far do you go consciously putting a price on things? Do you actually have a numerical figure you put on your own life? Would you feel comfortable putting a price on a friendship or a fetus? How much money is a point on Less Wrong worth to you?

Comment author: bill 21 March 2010 12:43:30AM 10 points [-]

Here is a simple way to assess your value-of-life (from an article by Howard).

Imagine you have a deadly disease, certain to kill you. The doctor tells you there is one cure; it works perfectly and costs you nothing. However, it is very painful, like having wisdom teeth pulled continuously for 24 hours without anesthetic.

The doctor adds that there is one other possible solution. It is experimental but also certain to work; however, it isn’t free. “How much is it?” you ask. “I forgot,” says the doctor. “So, you write down the most you would pay, I’ll find out the cost, and if the cost is less than you are willing to pay, I’ll sign you up for the treatment. Otherwise, I’ll sign you up for the painful procedure.” What do you write down? Call that dollar amount X. For example, you might decide that you wouldn’t pay more than $50,000.

Now scratch the above paragraph; actually, the treatment is free. However, it isn’t risk-free: it cures the disease, but there is a small chance that it will kill you. “What is the chance?” you ask. “I forgot,” says the doctor. “So, you write down the largest risk of death you are willing to take, I’ll find out the risk, and if the risk is less than you are willing to take, I’ll sign you up for the treatment. Otherwise, I’ll sign you up for the painful procedure.” What do you write down? Call that probability Y. For example, you might decide that you aren’t willing to take more than a half-percent chance of death to avoid the pain.

Now you’ve established that the pain is equivalent to a loss of $X, and also to a Y probability of death. Transitivity implies that a loss of $X is equivalent to a Y probability of death. Divide X by Y and you have your value-of-life. Above, $50K / 0.5% = $10M value-of-life.

If you want, you can divide by one million and get a dollar cost for a one-in-a-million chance of death (called a micromort). For example, my micromort value is $12 for small risks (larger risks are of course different; you can’t kill me for $12M). I use this value to make health and safety decisions.
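
The arithmetic, as a quick Python check (the numbers are the example's, not anyone's actual values):

```python
x = 50_000   # most you'd pay to avoid the pain (dollars)
y = 0.005    # largest probability of death you'd accept to avoid the pain

value_of_life = x / y                    # 10,000,000.0
dollars_per_micromort = value_of_life / 1_000_000
print(f"value of life: ${value_of_life:,.0f}")         # $10,000,000
print(f"per micromort: ${dollars_per_micromort:.2f}")  # $10.00
```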

In response to What is Bayesianism?
Comment author: nazgulnarsil 26 February 2010 12:32:18PM 18 points [-]

Is there a simple explanation of the conflict between Bayesianism and frequentism? I have sort of a feel for it from reading background materials, but a specific example where they yield different predictions would be awesome. Has such an example been posted before?

Comment author: bill 28 February 2010 01:25:08AM *  5 points [-]

If it helps, I think this is an example where they give different answers to the same problem. From Jaynes; see http://bayes.wustl.edu/etj/articles/confidence.pdf , page 22, for the details, and please let me know if I've erred or misinterpreted the example.

Three identical components. You run them through a reliability test and they fail at times 12, 14, and 16 hours. You know that these components fail in a particular way: they last at least X hours, then have a lifetime that you assess as an exponential distribution with an average of 1 hour. What is the shortest 90% confidence interval / probability interval for X, the time of guaranteed safe operation?

Frequentist 90% confidence interval: 12.1 hours - 13.8 hours

Bayesian 90% probability interval: 11.2 hours - 12.0 hours

Note: the frequentist interval has the strange property that we know for sure that the 90% confidence interval does not contain X (from the data we know that X <= 12). The Bayesian interval seems to match our common sense better.
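
For what it's worth, the Bayesian interval is easy to reproduce. Assuming a flat prior on X, the likelihood is proportional to exp(n·X) for X <= min(data) and zero otherwise, so the shortest 90% region hugs the upper endpoint:

```python
import math

data = [12, 14, 16]             # observed failure times (hours)
n, t_min = len(data), min(data)

# Posterior density is proportional to exp(n*X) on X <= t_min, so the
# shortest 90% interval ends at t_min and starts where only 10% of the
# posterior mass lies below:
lower = t_min + math.log(0.1) / n
print(f"Bayesian 90% interval: ({lower:.2f}, {t_min:.2f})")  # (11.23, 12.00)
```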

Comment author: Douglas_Knight 21 January 2010 05:58:20AM 6 points [-]

It's a standard result that people actually treat the utility of wealth roughly logarithmically

or is it just a standard assumption? I've never heard anything more precise than declining marginal utility.

Comment author: bill 21 January 2010 02:25:34PM 6 points [-]

Logarithmic u-functions have an uncomfortable requirement: you must be indifferent between your current wealth and a 50-50 shot at doubling or halving it (e.g. doubling or halving every paycheck/payment you get for the rest of your life). Most people I know don't like that deal.
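
The claim is easy to verify: under u(w) = log(w), the expected utility of the double-or-halve gamble equals the utility of standing pat at every wealth level, because the log(2) terms cancel. A quick check:

```python
import math

w = 100_000.0  # any positive wealth level; the choice doesn't matter
eu_gamble = 0.5 * math.log(2 * w) + 0.5 * math.log(w / 2)
print(math.isclose(eu_gamble, math.log(w)))  # True
```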

Comment author: bill 14 October 2009 12:53:17AM 5 points [-]

A similar but different method is calculating your "perfect life probability" (from Howard).

Let A be a "perfect" life in terms of health and wealth. Say $2M per year, living to 120 and being a perfectly healthy 120-year-old when you instantly and painlessly die.

Let B be your current life.

Let C be instant, painless death right now.

What probability of A versus C makes you indifferent between that deal and B for sure? That is your "perfect life probability" or "PLP." This is a numerical answer to the question "How are you doing today?" For example, mine is 93% right now, as I would be indifferent between B for sure and a deal with a 93% chance of A and 7% chance of C.

Note that almost anything that happens to you on any particular day would not change your PLP that much. Specifically, adding a small risk to your life certainly won't make that much of a difference.
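
A sketch of that last point: normalizing u(A) = 1 and u(C) = 0 makes u(B) equal to your PLP, and taking an extra small risk r of immediate death just scales it by (1 - r):

```python
plp = 0.93    # indifference probability from the example above
risk = 1e-6   # one micromort: a one-in-a-million chance of death

new_plp = plp * (1 - risk)  # survive the risk, then face the A-vs-C deal
print(f"{new_plp:.8f}")     # 0.92999907, essentially unchanged
```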

(I'm not sure how immortality or other extreme versions of "perfect health" would change this story.)

In response to comment by bill on Shut Up And Guess
Comment author: Eliezer_Yudkowsky 21 July 2009 06:51:09PM 11 points [-]

minus infinity meant that on one question you could fail the class

...wow. Well, I guess that's one way to teach people to avoid infinite certainty. Reminiscent of Jeffreyssai. Did that happen to a lot of students?

Comment author: bill 22 July 2009 09:22:43PM 11 points [-]

Some students started putting zeros on the first assignment or two. However, all they needed was to see a few people get nailed putting 0.001 on the right answer (usually on the famous boy-girl probability problem) and people tended to start spreading their probability assignments. Some people never learn, though, so once in a while people would fail. I can only remember three in eight years.

My professor ran a professional course like this. One year, one of the attendees put 100% on every question on every assignment, and got every single answer correct. The next year, someone attended from the same company, and decided he was going to do the same thing. Quite early, he got minus infinity. My professor's response? "They both should be fired."

In response to comment by dclayh on Shut Up And Guess
Comment author: SoullessAutomaton 21 July 2009 11:15:28AM 13 points [-]

I vaguely recall reading an anecdote about a similar testing scheme where you had to give an actual numerical confidence value for each answer. Saying you were 100% confident of an answer that was wrong would give you minus infinity points.

I bet that would be even less popular with students.

Comment author: bill 21 July 2009 02:48:57PM *  20 points [-]

I've given those kinds of tests in my decision analysis and my probabilistic analysis courses (for the multiple choice questions). Four choices, logarithmic scoring rule, 100% on the correct answer gives 1 point, 25% on the correct answer gives zero points, and 0% on the correct answer gives negative infinity.

Some students loved it. Some hated it. Many hated it until they realized that e.g. they didn't need 90% of the points to get an A (I was generous on the points-to-grades part of grading).

I did have to be careful: minus infinity meant that one question could fail you for the class, so when someone put a zero on the correct answer I made sure it was deliberate and not a mistake.

If you want to try, you might want to try the Brier scoring rule instead of the logarithmic; it has a similar flavor without the minus infinity hassle.
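
Here is a sketch of both rules as I read the description (the log rule's scaling matches the 1 / 0 / minus-infinity points above; the Brier variant is the standard quadratic rule, sign-flipped so that higher is better):

```python
import math

def log_score(p_correct, n_choices=4):
    """Log rule scaled so that 1.0 -> 1 point, 1/n -> 0, and 0.0 -> -inf."""
    if p_correct == 0.0:
        return float("-inf")
    return 1 + math.log(p_correct) / math.log(n_choices)

def brier_score(probs, correct_index):
    """Quadratic (Brier) rule: bounded below, so no minus-infinity hassle."""
    return -sum((p - (1.0 if i == correct_index else 0.0)) ** 2
                for i, p in enumerate(probs))

print(log_score(1.00))   # 1.0
print(log_score(0.25))   # 0.0
print(log_score(0.001))  # ~ -3.98: why 0.001 on the right answer hurts
print(brier_score([0.7, 0.1, 0.1, 0.1], correct_index=0))  # ~ -0.12
```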

Comment author: conchis 04 June 2009 11:13:31AM *  2 points [-]

What counts as a "successful" utility function?

In general terms there are two, conflicting, ways to come up with utility functions, and these seem to imply different metrics of success.

  1. The first assumes that "utility" corresponds to something real in the world, such as some sort of emotional or cognitive state. On this view, the goal, when specifying your utility function, is to get numbers that reflect this reality as closely as possible. You say "I think x will give me 2 emotilons" and "I think y will give me 3 emotilons"; you test this by giving yourself x and y; success is when the results match up.

  2. The second assumes that we already have a set of preferences, and "utility" is just a number we use to represent these, such that xPy <=> u(x)>u(y), where xPy means "x is preferred to y". (More generally, when x and y may be gambles, we want: xPy <=> E[u(x)]>E[u(y)]).

It's less clear what the point of specifying a utility function is supposed to be in the second case. Once you have preferences, specifying the utility function has no additional information content: it's just a way of representing them with a real number. I guess "success" in this case simply consists in coming up with a utility function at all: if your preferences are inconsistent (e.g. incomplete, intransitive, ...) then you won't be able to do it, so being able to do it is a good sign.
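
As a trivial sketch of sense 2 with finitely many outcomes (the outcome names are mine): once a complete, transitive ranking exists, any order-preserving numbering works as a utility function, which is why writing one down adds no information.

```python
# A complete, transitive ranking over outcomes, worst to best:
ranking = ["instant death", "status quo", "perfect life"]

# Any strictly increasing numbering represents the same preferences:
u = {outcome: rank for rank, outcome in enumerate(ranking)}

# x is preferred to y  <=>  u[x] > u[y]
print(u["perfect life"] > u["status quo"])  # True
```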

Much of the discussion about utility functions on this site seems to me to conflate these two distinct senses of "utility", with the result that it's often difficult to tell what people really mean.

Comment author: bill 07 June 2009 04:24:55PM 1 point [-]

When I teach decision analysis, I don't use the word "utility" for exactly this reason. I separate the "value model" from the "u-curve."

The value model is what translates all the possible outcomes of the world into a number representing value. For example, a business decision analysis might have inputs like volume, price, margin, development costs, etc., and the value model would translate all of those into NPV.

You only use the u-curve when uncertainty is involved. For example, distributions on the inputs lead to a distribution on NPV, and the u-curve would determine how to assign a value that represents the distribution. Some companies are more risk averse than others, so they would value the same distribution on NPV differently.

Without a u-curve, you can't make decisions under uncertainty. If all you have is a value model, then you can't decide, e.g., whether you would like a deal with a 50-50 shot at winning $100 versus losing $50. That depends on risk aversion, which is encoded in a u-curve, not a value model.
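
As a concrete sketch (assuming the exponential u-curve family u(x) = 1 - exp(-x/R) with risk tolerance R, a standard choice in decision analysis though not the only one): the same 50-50 deal has a different certain equivalent at different risk tolerances, which is exactly the information the value model alone lacks.

```python
import math

def certain_equivalent(outcomes, probs, risk_tolerance):
    """Certain equivalent of a gamble under u(x) = 1 - exp(-x/R)."""
    eu = sum(p * (1 - math.exp(-x / risk_tolerance))
             for p, x in zip(probs, outcomes))
    return -risk_tolerance * math.log(1 - eu)

deal = dict(outcomes=[100, -50], probs=[0.5, 0.5])
for r in (100, 500, 5000):
    print(r, round(certain_equivalent(risk_tolerance=r, **deal), 2))
# 100 -0.83    (risk-averse enough to decline)
# 500 19.4
# 5000 24.43   (approaching the risk-neutral value of +25)
```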

Does this make sense?

Comment author: AndrewKemendo 07 June 2009 04:27:58AM 0 points [-]

Unfortunately the better parts of my post were lost - or rather more of the main point.

I posit that the utility valuation is an impossibility currently. I was not really challenging whether your function was exponential or logarithmic, but questioning how you came to the conclusion; how you decide, for instance, where exactly the function changes, especially having not experienced the second state. The "logarithmic" point I was making was designed to demonstrate that true utility may differ significantly from expected utility once you are actually at point 2, and thus may not be truly representative.

Mainly I am curious as to what value you place on "intuition" and why.

Testing it against my intuitions

Comment author: bill 07 June 2009 04:12:29PM 1 point [-]

If you wanted to, we could assess at least a part of your u-curve. That might show you why it isn't an impossibility, and show what it means to test it by intuitions.

Would you, right now, accept a deal with a 50-50 chance of winning $100 versus losing $50?

If you answer yes, then we know something about your u-curve. For example, over a range at least as large as (-50, 100), it can be approximated by an exponential curve with a risk tolerance parameter greater than 100 (if it were less than 100, then you wouldn't accept the above deal).
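
Assuming that exponential form, the threshold can be computed exactly: setting 0.5·u(100) + 0.5·u(-50) = u(0) and substituting z = exp(50/R) reduces to z^2 - z - 1 = 0, whose positive root is the golden ratio.

```python
import math

z = (1 + math.sqrt(5)) / 2  # positive root of z^2 - z - 1 = 0
R = 50 / math.log(z)
print(round(R, 1))  # 103.9: accepting the deal implies a risk tolerance
                    # above roughly 104, i.e. "greater than 100" as stated
```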

Here, I have assessed something about your u-curve by asking you a question that it seems fairly easy to answer. That's all I mean by "testing against intuitions." By asking a series of similar questions I can assess your u-curve over whatever range you would like.

You also might want to do calculations: for example, $10K per year forever is worth around $300K or so. Thinking about losing or gaining $10K per year for the rest of your life might be easier than thinking about gaining or losing $200-300K.
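
That $300K figure is consistent with a standard perpetuity calculation, present value = payment / discount rate; the 3.5% rate below is my assumption, not the comment's:

```python
annual_payment = 10_000
discount_rate = 0.035            # assumed; substitute your own rate
present_value = annual_payment / discount_rate
print(f"${present_value:,.0f}")  # $285,714: "around $300K or so"
```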
