bill
I read somewhere that the reason we don't see these people is that they all immediately go to Vegas, where they can easily acquire as many positive-value deals as they want.
Here is a simple way to assess your value-of-life (from an article by Howard).
Imagine you have a deadly disease, certain to kill you. The doctor tells you that there is one cure, it works perfectly, and costs you nothing. However, it is very painful, like having wisdom teeth pulled continuously for 24 hours without anesthetic.
However, the doctor says there is one other possible solution. It is experimental, but also certain to work. However, it isn’t free. “How much is it?” you ask. “I forgot,” says the doctor. “So, you write down the most you would pay, ...
If it helps, I think this is an example of a problem where confidence intervals and Bayesian posterior intervals give different answers to the same problem. From Jaynes; see http://bayes.wustl.edu/etj/articles/confidence.pdf , page 22 for the details, and please let me know if I've erred or misinterpreted the example.
Three identical components. You run them through a reliability test and they fail at times 12, 14, and 16 hours. You know that these components fail in a particular way: they last at least X hours, then have a lifetime that you assess as an exponential distribution with an average of 1 hour....
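A minimal sketch of the Bayesian side of this, assuming a flat prior on the threshold X (the prior choice and the code are my own, not necessarily Jaynes's exact treatment): with lifetimes X plus an exponential of mean 1, the likelihood is proportional to exp(n·X) for X ≤ min(data) and zero above it, so the posterior piles up just below the earliest failure.

```python
import math

data = [12.0, 14.0, 16.0]   # observed failure times
n = len(data)
t_min = min(data)           # likelihood is zero for X > min(data)

# Posterior for the threshold X with a flat prior:
# p(X | data) ∝ exp(n * X) for X <= t_min, which normalizes to
# p(X | data) = n * exp(n * (X - t_min)) on (-inf, t_min].

def posterior_cdf(x):
    """P(X <= x | data) under the posterior above."""
    if x >= t_min:
        return 1.0
    return math.exp(n * (x - t_min))

# Shortest 90% credible interval: (t_min + ln(0.1)/n, t_min]
lower = t_min + math.log(0.10) / n
print(f"90% credible interval for X: ({lower:.3f}, {t_min:.3f}]")

# Compare to the naive point estimate mean(data) - 1 = 13, which
# already exceeds the largest value of X the data allow (12).
print("mean - 1 =", sum(data) / n - 1.0)
```

The striking part of the example is that an interval built around mean − 1 can sit entirely above 12, a region the data have already ruled out.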
Logarithmic u-functions have an uncomfortable requirement that you must be indifferent to your current wealth and a 50-50 shot at doubling or halving it (e.g. doubling or halving every paycheck/payment you get for the rest of your life). Most people I know don't like that deal.
A similar but different method is calculating your "perfect life probability" (from Howard).
Let A be a "perfect" life in terms of health and wealth. Say $2M per year, living to 120 years and being a perfectly healthy 120-year-old when you instantly and painlessly die.
Let B be your current life.
Let C be instant, painless death right now.
What probability of A versus C makes you indifferent between that deal and B for sure? That is your "perfect life probability" or "PLP." This is a numerical answer to the question "...
Some students started putting zeros on the first assignment or two. However, all they needed was to see a few people get nailed putting 0.001 on the right answer (usually on the famous boy-girl probability problem) and people tended to start spreading their probability assignments. Some people never learn, though, so once in a while people would fail. I can only remember three in eight years.
My professor ran a professional course like this. One year, one of the attendees put 100% on every question on every assignment, and got every single answer correct. ...
I've given those kinds of tests in my decision analysis and my probabilistic analysis courses (for the multiple choice questions). Four choices, logarithmic scoring rule, 100% on the correct answer gives 1 point, 25% on the correct answer gives zero points, and 0% on the correct answer gives negative infinity.
Some students loved it. Some hated it. Many hated it until they realized that e.g. they didn't need 90% of the points to get an A (I was generous on the points-to-grades part of grading).
I did have to be careful; minus infinity meant that on one quest...
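For the curious, the scoring rule described above is just 1 plus the log, base 4, of the probability placed on the correct answer; a quick sketch:

```python
import math

def log_score(p_correct):
    """Points for a 4-choice question under this logarithmic rule:
    1 point at p = 1.0, 0 points at p = 0.25 (uniform), -inf at p = 0."""
    if p_correct == 0:
        return float("-inf")
    return 1.0 + math.log(p_correct) / math.log(4)

print(log_score(1.0))    # 1.0
print(log_score(0.25))   # 0.0
print(log_score(0.5))    # 0.5
```

A proper scoring rule like this rewards honest probabilities: shading toward certainty you don't have risks the unbounded downside at 0%.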
When I teach decision analysis, I don't use the word "utility" for exactly this reason. I separate the "value model" from the "u-curve."
The value model is what translates all the possible outcomes of the world into a number representing value. For example, a business decision analysis might have inputs like volume, price, margin, development costs, etc., and the value model would translate all of those into NPV.
You only use the u-curve when uncertainty is involved. For example, distributions on the inputs lead to a distribu...
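Here is a toy sketch of that separation (all the numbers, the input distribution, and the exponential u-curve parameter are made up for illustration): the value model deterministically maps inputs to NPV, and the u-curve only enters when we summarize the resulting NPV distribution as a certain equivalent.

```python
import math
import random

random.seed(0)

RISK_TOLERANCE = 5.0  # $M, illustrative only

def value_model(volume_m_units, margin_per_unit, dev_cost_m):
    """Value model: deterministic translation of inputs into NPV ($M)."""
    return volume_m_units * margin_per_unit - dev_cost_m

def u(npv):
    """u-curve: applied only once uncertainty is involved."""
    return 1.0 - math.exp(-npv / RISK_TOLERANCE)

def certain_equivalent(npvs):
    """Invert the expected utility back onto the dollar scale."""
    eu = sum(u(x) for x in npvs) / len(npvs)
    return -RISK_TOLERANCE * math.log(1.0 - eu)

# A distribution on one input (volume) induces a distribution on NPV:
npvs = [value_model(random.gauss(2.0, 0.5), 3.0, 4.0) for _ in range(100_000)]
mean_npv = sum(npvs) / len(npvs)
ce = certain_equivalent(npvs)
print(f"mean NPV ≈ {mean_npv:.2f}  certain equivalent ≈ {ce:.2f}")
```

The certain equivalent lands below the mean NPV, which is exactly the risk aversion that belongs in the u-curve, not in the value model.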
If you wanted to, we could assess at least a part of your u-curve. That might show you why it isn't an impossibility, and show what it means to test it against your intuitions.
Would you, right now, accept a deal with a 50-50 chance of winning $100 versus losing $50?
If you answer yes, then we know something about your u-curve. For example, over a range at least as large as (-50, 100), it can be approximated by an exponential curve with a risk tolerance parameter of greater than 100 (if it were less than 100, then you wouldn't accept the above deal).
Here, I have asse...
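The indifference point can be found numerically. Under u(x) = 1 - exp(-x/ρ), a quick bisection (my sketch, not part of the original assessment procedure) shows the threshold risk tolerance for the +$100/-$50 deal sits just above 100, consistent with the rule of thumb:

```python
import math

def expected_utility(rho, win=100.0, lose=-50.0):
    """EU of a 50-50 win/lose deal under u(x) = 1 - exp(-x / rho)."""
    u = lambda x: 1.0 - math.exp(-x / rho)
    return 0.5 * u(win) + 0.5 * u(lose)

# Bisect for the risk tolerance at which you are exactly indifferent
lo_rho, hi_rho = 50.0, 500.0
for _ in range(100):
    mid = (lo_rho + hi_rho) / 2
    if expected_utility(mid) < 0:
        lo_rho = mid      # too risk-averse at this rho: need a larger one
    else:
        hi_rho = mid
print(f"indifference risk tolerance ≈ {lo_rho:.1f}")
```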
Example of the "unappealingness" of constant absolute risk aversion. Say my u-curve were u(x) = 1-exp(-x/400K) over all ranges. What is my value for a 50-50 shot at $10M?
Answer: around $277K. (Note that it is the same for a 50-50 shot at $100M)
Given the choice, I would certainly choose a 50-50 shot at $10M over $277K. This is why over larger ranges, I don't use an exponential u-curve.
However, it is a good approximation over a range that contains almost all the decisions I have to make. Only for huge decisions do I need to drag out a more complicated u-curve, and those are rare.
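Checking the $277K figure under the assumed u-curve (the certain equivalent is -ρ·ln(1 - EU), which for any huge prize collapses to ρ·ln 2 ≈ $277K):

```python
import math

RHO = 400_000.0  # risk tolerance in dollars, as assumed above

def ce_of_5050(prize):
    """Certain equivalent of a 50-50 shot at `prize` (vs $0)
    under u(x) = 1 - exp(-x / RHO)."""
    eu = 0.5 * (1.0 - math.exp(-prize / RHO))  # u(0) = 0
    return -RHO * math.log(1.0 - eu)

print(round(ce_of_5050(10_000_000)))   # ≈ 277,259
print(round(ce_of_5050(100_000_000)))  # ≈ 277,259 as well
```

Once the prize dwarfs the risk tolerance, the exponential curve simply stops caring how big the prize is, which is the unappealing part.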
As I said in my original post, for larger ranges, I like logarithmic-type u-curves better than exponential, esp. for gains. The problem with e.g. u(x)=ln(x) where x is your total wealth is that you must be indifferent between your current wealth and a 50-50 shot of doubling vs. halving your wealth. I don't like that deal, so I must not have that curve.
Note that a logarithmic curve can be approximated by a straight line for some small range around your current wealth. It can also be approximated by an exponential for a larger range. So even if I were purely...
For the specific quote: I know that, for a small enough change in wealth, I don't need to re-evaluate all the deals I own. They all remain pretty much the same. For example, if you told me I had $100 more in my bank account, I would be happy, but it wouldn't significantly change any of my decisions involving risk. For a utility curve over money, you can prove that that implies an exponential curve. Intuitively, some range of my utility curve can be approximated by an exponential curve.
Now that I know it is exponential over some range, I needed to figure o...
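The doubling/halving indifference is a one-line consequence of u(x) = ln(x), since 0.5·ln(2w) + 0.5·ln(w/2) = ln(w) for any wealth w:

```python
import math

w = 250_000.0  # current total wealth; any positive number works

# Under u(x) = ln(x), the expected utility of a 50-50 double-or-halve
# gamble on total wealth equals the utility of current wealth exactly:
eu_gamble = 0.5 * math.log(2 * w) + 0.5 * math.log(w / 2)
print(eu_gamble, math.log(w))  # identical, since ln(2w) + ln(w/2) = 2 ln(w)
```

So if you are not indifferent to that gamble, your u-curve over total wealth is not ln(x), whatever else it may be.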
Here's one data point. Some guidelines have been helpful for me when thinking about my utility curve over dollars. This has been helpful to me in business and medical decisions. It would also work, I think, for things that you can treat as equivalent to money (e.g. willingness-to-pay or willingness-to-be-paid).
Over a small range, I am approximately risk neutral. For example, a 50-50 shot at $1 is worth just about $0.50, since the range we are talking about is only between $0 and $1. One way to think about this is that, over a small enough range, there is ...
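A quick check of the small-range claim, reusing the exponential form with an illustrative risk tolerance of $400K (my stand-in numbers): the certain equivalent of the $1 gamble is indistinguishable from $0.50, and risk aversion only shows up when the stakes are comparable to the risk tolerance.

```python
import math

def ce(deal, rho):
    """Certain equivalent of a list of (prob, payoff) pairs
    under the exponential u-curve u(x) = 1 - exp(-x / rho)."""
    eu = sum(p * (1.0 - math.exp(-x / rho)) for p, x in deal)
    return -rho * math.log(1.0 - eu)

deal = [(0.5, 1.0), (0.5, 0.0)]  # a 50-50 shot at $1
print(ce(deal, rho=400_000.0))   # ≈ 0.4999997: risk-neutral in practice
print(ce(deal, rho=2.0))         # noticeably below 0.50 when rho is tiny
```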
When I've taught ethics in the past, we always discuss the Nazi era. Not because the Nazis acted unethically, but because of how everyone else acted.
For example, we read about the vans that carried Jewish prisoners that had the exhaust system designed to empty into the van. The point is not how awful that is, but that there must have been an engineer somewhere who figured out the best way to design and build such a thing. And that engineer wasn't a Nazi soldier, he or she was probably no different from anyone else at that time, with kids and a family and f...
Interesting illustration of mental imagery (from Dennett):
Picture a 3 by 3 grid. Then picture the words "gas", "oil", and "dry" spelled downwards in the columns left to right in that order. Looking at the picture in your mind, read the words across on the grid.
I can figure out what the words are of course, but it is very hard for me to read them off the grid. I should be able to if I could actually picture it. It was fascinating for me to think that this isn't true for everyone.
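For anyone who wants to check their mental image against the answer, the words read across are just the transpose of the columns:

```python
# The three words spelled downward in the columns of the 3x3 grid:
cols = ["gas", "oil", "dry"]

# Reading across the rows is transposing the grid:
rows = ["".join(col[i] for col in cols) for i in range(3)]
print(rows)  # the three words you should "see" across the grid
```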
Intelligent theists who commit to rationality also seem to say that their "revelatory experience" is less robust than scientific, historical, or logical knowledge/experience.
For example, if they interpret their revelation to say that God created all animal species separately, and scientific evidence then proves beyond reasonable doubt that that is untrue, they conclude that they must have misinterpreted their revelatory experience (I believe this is the Catholic Church's current position, for example). Similarly if their interpretation of their revelation contradi...
I am struggling with the general point, but I think in some situations it is clear that one is in a "bad" state and needs improvement. Here is an example (similar to Chris Argyris's XY case).
A: "I don't think I'm being effective. How can I be of more help to X?"
B: "Well, just stop being so negative and pointing out others' faults. That just doesn't work and tends to make you look bad."
Here, B is giving advice on how to act, while at the same time acting contrary to that advice. The values B wants to follow are clearly not the ...
When dealing with health and safety decisions, people often need to deal with one-in-a-million types of risks.
In nuclear safety, I hear, they use a measure called "nanomelts," a one-in-a-billion risk of a meltdown. They can then rank risks based on cost-to-fix per nanomelt, for example.
In both of these, though, the numbers are probably based on data and then scaled to different timescales (e.g. if there were 250 deaths per day in the US from car accidents, that would be about a one-in-a-million per-day risk of death from driving for a population of roughly 250 million; use statistical techniques to adjust this number for age, drunkenness, etc.)
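The scaling arithmetic, with deliberately round hypothetical numbers (not real statistics):

```python
US_POPULATION = 250_000_000   # illustrative round number, not a real figure
DEATHS_PER_DAY = 250          # hypothetical car-accident deaths per day

# Per-person daily risk: one in a million, i.e. one "micromort" per day
risk_per_person_per_day = DEATHS_PER_DAY / US_POPULATION
print(risk_per_person_per_day)        # 1e-06
print(risk_per_person_per_day * 1e6)  # 1.0 micromort per day
```

The same division works for nanomelts or any other tiny-risk unit; only the reference population and timescale change.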
I've used that as a numerical answer to the question "How are you doing today?"
A: Perfect life (health and wealth).
B: Instant, painless death.
C: Current life.
What probability p of A (and 1-p of B) makes you indifferent between that deal (p of A, 1-p of B) and C? That probability p represents an answer to the question "How are you doing?"
Almost nothing that happens to me changes that probability by much, so I've learned not to sweat most ups and downs in life. Things that change that probability (disabling injury or other tragedy) are what to worry about.
From Spetzler and Stael von Holstein (1975), there is a variation of Bet On It that doesn't require risk neutrality.
Say we are going to flip a thumbtack, and it can land heads (so you can see the head of the tack), or tails (so that the point sticks up like a tail). If we want to assess your probability of heads, we can construct two deals.
Deal 1: You win $10,000 if we flip a thumbtack and it comes up heads ($0 otherwise, you won't lose anything). Deal 2: You win $10,000 if we spin a roulette-like wheel labeled with numbers 1,2,3, ..., 100, and the wheel c...
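Why this works without risk neutrality: both deals pay either $10,000 or $0, so whatever your u-curve, you prefer whichever deal gives the higher probability of winning, and the wheel setting at which you become indifferent pins down your probability of heads. A sketch of the adjustment procedure, with a simulated subject standing in for the real back-and-forth questioning:

```python
def assess_probability(prefers_thumbtack, tol=1e-4):
    """Probability-wheel assessment sketch: adjust the wheel's winning
    fraction until the subject is indifferent between Deal 1 (win on
    heads) and Deal 2 (win if the wheel lands in the winning fraction).
    `prefers_thumbtack(q)` stands in for asking the subject whether they
    prefer Deal 1 when the wheel's winning fraction is q."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        q = (lo + hi) / 2
        if prefers_thumbtack(q):
            lo = q   # wheel fraction too small to tempt them; raise it
        else:
            hi = q   # wheel fraction too generous; lower it
    return (lo + hi) / 2

# A simulated subject who believes P(heads) = 0.37:
subject = lambda q: q < 0.37
print(round(assess_probability(subject), 3))  # ≈ 0.37
```

In practice the assessor varies the wheel by hand rather than bisecting, but the logic is the same: the crossover point is the assessed probability.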