bill

Spetzler and Staël von Holstein (1975) describe a variation of Bet On It that doesn't require risk neutrality.

Say we are going to flip a thumbtack, and it can land heads (so you can see the head of the tack), or tails (so that the point sticks up like a tail). If we want to assess your probability of heads, we can construct two deals.

Deal 1: You win $10,000 if we flip the thumbtack and it comes up heads ($0 otherwise; you won't lose anything).

Deal 2: You win $10,000 if we spin a roulette-like wheel labeled with the numbers 1, 2, 3, ..., 100 and it comes up between 1 and 50 ($0 otherwise; you won't lose anything).

Which deal would you prefer? If you prefer deal 1, you are assessing a probability of heads greater than 50%; if you prefer deal 2, less than 50%; if you are indifferent, exactly 50%.

Then ask the question repeatedly, using a different number than 50 for deal 2 each time. For example, if you first say you would prefer deal 2, change deal 2 to winning on 1-25 and see if you still prefer it. Keep adjusting until you are indifferent between deals 1 and 2. If you are indifferent when deal 2 wins on 1-37, then you have assessed a probability of heads of 37%.

The above describes one procedure used by professional decision analysts; in practice they usually use a physical wheel with a continuously adjustable "winning area" rather than discrete numbers like the above.
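
A minimal sketch of this adjust-until-indifferent loop as a bisection, assuming a hypothetical `prefers_deal_1(p)` oracle standing in for the person being assessed:

```python
def assess_probability(prefers_deal_1, tol=0.005):
    """Binary-search for the indifference point between the two deals.

    prefers_deal_1(p) returns True if the subject prefers Deal 1 (win on
    heads) to Deal 2 (win if the wheel lands in a region of probability p).
    The indifference point is the subject's assessed P(heads).
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = (lo + hi) / 2
        if prefers_deal_1(p):
            lo = p  # subject acts as if P(heads) > p: raise the wheel's odds
        else:
            hi = p  # subject acts as if P(heads) < p: lower the wheel's odds
    return (lo + hi) / 2

# Example: a subject whose belief is P(heads) = 0.37
print(assess_probability(lambda p: 0.37 > p))  # converges to ~0.37
```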

bill

I read somewhere that the reason we don't see these people is that they all immediately go to Vegas, where they can easily acquire as many positive-value deals as they want.

bill

Here is a simple way to assess your value-of-life (from an article by Howard).

Imagine you have a deadly disease, certain to kill you. The doctor tells you there is one cure; it works perfectly and costs you nothing. However, it is very painful, like having wisdom teeth pulled continuously for 24 hours without anesthetic.

However, the doctor says there is one other possible solution: an experimental treatment, also certain to work, but not free. “How much is it?” you ask. “I forgot,” says the doctor. “So, you write down the most you would pay, I’ll find out the cost, and if the cost is less than you are willing to pay, I’ll sign you up for the treatment. Otherwise, I’ll sign you up for the painful procedure.” What do you write down? Call that dollar amount X. For example, you might decide that you wouldn’t pay more than $50,000.

Now scratch the above paragraph; actually the treatment is free. However, it isn’t perfectly effective. It always cures the disease, but there is a small chance that it will kill you. “What is the chance?” you ask. “I forgot,” says the doctor. “So, you write down the largest risk of death you are willing to take, I’ll find out the risk, and if the risk is less than you are willing to take, I’ll sign you up for the treatment. Otherwise, I’ll sign you up for the painful procedure.” What do you write down? Call that probability Y. For example, you might decide that you aren’t willing to take more than a half-percent chance of death to avoid the pain.

Now you’ve established that the pain is equivalent to a loss of $X, and that the pain is also equivalent to a Y probability of death. Transitivity implies that a loss of $X is equivalent to a Y probability of death. Divide X by Y and you have your value-of-life. In the example above, $50K / 0.5% = $10M.

If you want, you can divide by one million to get a dollar cost for a one-in-a-million chance of death (called a micromort). For example, my micromort value is $12 for small risks (larger risks are of course different; you can’t kill me for $12M). I use this value to make health and safety decisions.
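
The arithmetic, written out (a sketch using the illustrative numbers above, not anyone's actual values):

```python
X = 50_000  # most you would pay to avoid the pain (dollars)
Y = 0.005   # largest probability of death you would accept to avoid the pain

value_of_life = X / Y                  # 50_000 / 0.005 = $10,000,000
micromort = value_of_life / 1_000_000  # dollar value of a 1-in-a-million death risk
print(value_of_life, micromort)        # 10000000.0 10.0
```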

bill

If it helps, I think this is an example of a problem where frequentist and Bayesian methods give different answers to the same question. It's from Jaynes; see http://bayes.wustl.edu/etj/articles/confidence.pdf, page 22, for the details, and please let me know if I've erred or misinterpreted the example.

Three identical components. You run them through a reliability test and they fail at 12, 14, and 16 hours. You know that these components fail in a particular way: each is guaranteed to last at least X hours, after which its remaining lifetime is exponentially distributed with a mean of 1 hour. What is the shortest 90% confidence interval / probability interval for X, the guaranteed safe operating time?

Frequentist 90% confidence interval: 12.1 to 13.8 hours

Bayesian 90% probability interval: 11.2 to 12.0 hours

Note: the frequentist interval has the strange property that we know for certain it does not contain X, since the data already tell us X <= 12 (no component can fail before X). The Bayesian interval seems to match our common sense better.
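
A sketch of the Bayesian side of the calculation, assuming a flat prior on X (see the linked paper for Jaynes's full treatment): each lifetime is X plus an exponential with mean 1, so the likelihood is proportional to e^(nX) for X <= min(t), and the shortest 90% interval sits against the hard upper bound at min(t).

```python
import math

t = [12.0, 14.0, 16.0]  # observed failure times (hours)
n = len(t)
t_min = min(t)          # no component can fail before X, so X <= 12

# Posterior (flat prior) is proportional to exp(n * X) on X <= min(t).
# The density increases in X, so the shortest 90% interval ends at min(t);
# its lower end solves exp(n * (lower - t_min)) = 0.10.
lower = t_min + math.log(0.10) / n
print(f"{lower:.1f} to {t_min:.1f} hours")  # 11.2 to 12.0 hours
```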

bill

Logarithmic u-functions carry the uncomfortable implication that you must be indifferent between your current wealth and a 50-50 shot at doubling or halving it (e.g., doubling or halving every paycheck/payment you get for the rest of your life). Most people I know don't like that deal.
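
That indifference is exact algebra, not an approximation: with u(w) = ln(w), the expected utility of the double-or-halve gamble equals the utility of staying put, as this quick check shows.

```python
import math

w = 100_000.0  # any wealth level works; the result is wealth-independent
u_stay = math.log(w)
u_gamble = 0.5 * math.log(2 * w) + 0.5 * math.log(w / 2)
print(u_stay, u_gamble)  # equal (up to float noise): the +-ln(2)/2 terms cancel
```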

bill

A similar but different method is calculating your "perfect life probability" (from Howard).

Let A be a "perfect" life in terms of health and wealth: say, $2M per year, living to 120, and being a perfectly healthy 120-year-old when you instantly and painlessly die.

Let B be your current life.

Let C be instant, painless death right now.

What probability of A versus C makes you indifferent between that deal and B for sure? That is your "perfect life probability" or "PLP." This is a numerical answer to the question "How are you doing today?" For example, mine is 93% right now, as I would be indifferent between B for sure and a deal with a 93% chance of A and 7% chance of C.

Note that almost nothing that happens to you on any particular day will change your PLP very much. In particular, adding a small risk to your life makes almost no difference, as the sketch below shows.
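
To make that concrete (a sketch, assuming the added risk is independent and immediate): if your PLP is p and you accept an extra risk r of death, your position becomes r of C plus (1 - r) of B, which is equivalent to a (1 - r)·p chance of A. So your PLP simply scales by (1 - r), and a micromort-sized risk moves it imperceptibly.

```python
p = 0.93   # current PLP
r = 1e-6   # one micromort of added risk of death
new_plp = (1 - r) * p
print(p - new_plp)  # ~9.3e-07, far below any precision you could assess
```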

(I'm not sure how immortality or other extreme versions of "perfect health" would change this story.)

bill

Some students started out putting zeros on the first assignment or two. However, all it took was seeing a few people get nailed for putting 0.001 on the right answer (usually on the famous boy-girl probability problem), and most started spreading out their probability assignments. Some people never learn, though, so once in a while someone would fail. I can only remember three failures in eight years.

My professor ran a professional course like this. One year, one of the attendees put 100% on every question on every assignment, and got every single answer correct. The next year, someone attended from the same company, and decided he was going to do the same thing. Quite early, he got minus infinity. My professor's response? "They both should be fired."

bill

I've given those kinds of tests in my decision analysis and probabilistic analysis courses (for the multiple-choice questions): four choices, logarithmic scoring rule; 100% on the correct answer gives 1 point, 25% gives zero points, and 0% gives negative infinity.

Some students loved it. Some hated it. Many hated it until they realized that e.g. they didn't need 90% of the points to get an A (I was generous on the points-to-grades part of grading).

I did have to be careful: minus infinity on a single question meant failing the class, so I made sure it wasn't a mistake and that the student really did mean to put a zero on the correct answer.

If you want to try this, consider the Brier scoring rule instead of the logarithmic one; it has a similar flavor without the minus-infinity hassle.
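
A sketch of both rules for a four-choice question, under the normalization described above (1 point for certainty on the right answer, 0 points for a uniform 25%); the Brier variant shown is the standard quadratic rule, one reasonable bounded substitute rather than necessarily the exact form the author had in mind:

```python
import math

def log_score(probs, correct):
    """1 + log base 4 of the probability placed on the correct answer:
    1 point at 100%, 0 points at 25%, minus infinity at 0%."""
    p = probs[correct]
    return 1 + math.log(p, 4) if p > 0 else -math.inf

def brier_score(probs, correct):
    """Quadratic (Brier) proper scoring rule, 2*p_correct - sum(p_i^2).
    Same truth-telling incentives, but bounded below by -1."""
    return 2 * probs[correct] - sum(p * p for p in probs)

guess = [0.70, 0.10, 0.10, 0.10]  # a hypothetical answer; index 0 is correct
print(log_score(guess, 0))    # ~0.74 points
print(brier_score(guess, 0))  # 0.88
```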

bill

When I teach decision analysis, I don't use the word "utility" for exactly this reason. I separate the "value model" from the "u-curve."

The value model is what translates all the possible outcomes of the world into a number representing value. For example, a business decision analysis might have inputs like volume, price, margin, development costs, etc., and the value model would translate all of those into NPV.

You only use the u-curve when uncertainty is involved. For example, distributions on the inputs lead to a distribution on NPV, and the u-curve would determine how to assign a value that represents the distribution. Some companies are more risk averse than others, so they would value the same distribution on NPV differently.

Without a u-curve, you can't make decisions under uncertainty. If all you have is a value model, then you can't decide, for example, whether you would like a deal with a 50-50 shot at winning $100 versus losing $50. That depends on risk aversion, which is encoded in a u-curve, not a value model.
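
As an illustration (the comment doesn't commit to a functional form; the exponential u-curve below is the standard decision-analysis choice, consistent with the follow-up comment further down), a distribution is valued by its certain equivalent, CE = -rho * ln E[exp(-x/rho)], where rho is the risk tolerance:

```python
import math

def certain_equivalent(outcomes, probs, rho):
    """Value of a gamble under an exponential u-curve with risk tolerance rho:
    CE = -rho * ln(E[exp(-x / rho)]). Approaches the mean as rho grows."""
    eu = sum(p * math.exp(-x / rho) for x, p in zip(outcomes, probs))
    return -rho * math.log(eu)

deal = ([100.0, -50.0], [0.5, 0.5])  # the 50-50 win $100 / lose $50 deal
for rho in (50, 104, 1000):
    print(rho, round(certain_equivalent(*deal, rho), 2))
# rho=50   -> about -$17.8: a very risk-averse decision maker declines
# rho=104  -> about  $0.02: right at the edge of indifference
# rho=1000 -> about $22.2: nearly risk-neutral, close to the $25 expected value
```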

Does this make sense?

bill

If you want, we could assess at least part of your u-curve. That might show you why it isn't an impossibility, and what it means to test it against intuitions.

Would you, right now, accept a deal with a 50-50 chance of winning $100 versus losing $50?

If you answer yes, then we know something about your u-curve. For example, over a range at least as large as (-$50, $100), it can be approximated by an exponential curve with a risk tolerance parameter greater than 100 (if it were less than 100, you wouldn't accept the above deal).
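
You can check that parenthetical numerically (a sketch, not the author's procedure): with u(x) = 1 - exp(-x/rho), indifference to the deal means exp(-100/rho) + exp(50/rho) = 2, and bisection puts the exact threshold near rho = 104, so "greater than 100" is a good rule of thumb.

```python
import math

def accepts(rho):
    """Expected exponential utility of the 50-50 +$100/-$50 deal is positive
    exactly when exp(-100/rho) + exp(50/rho) < 2."""
    return math.exp(-100 / rho) + math.exp(50 / rho) < 2

lo, hi = 50.0, 500.0  # accepts(50) is False, accepts(500) is True
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if accepts(mid) else (mid, hi)
print(round(hi, 1))  # ~103.9: you accept the deal iff rho exceeds this
```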

Here, I have assessed something about your u-curve by asking a question that is fairly easy to answer. That's all I mean by "testing against intuitions." By asking a series of similar questions, I can assess your u-curve over whatever range you would like.

You also might want to do calculations: for example, $10K per year forever is worth around $300K (treating it as a perpetuity discounted at a bit over 3%: $10K / 0.033 ≈ $300K). Thinking about losing or gaining $10K per year for the rest of your life might be easier than thinking about gaining or losing $200-300K.
