Comment author: conchis 06 June 2009 11:05:55PM *  0 points [-]

Just to be clear, you know that an exponential utility function (the name is somewhat misleading) doesn't actually imply that utility is exponential in wealth, right? Bill's claimed utility function doesn't exhibit increasing marginal utility, if that's what you're intuitively objecting to. It's 1-exp(-x), not exp(x).

Many people do find the constant absolute risk aversion implied by exponential utility functions unappealing, and prefer isoelastic utility functions, which exhibit constant relative risk aversion; but the exponential form has the advantage of tractability, and may be reasonable over some ranges.

Comment author: bill 06 June 2009 11:53:15PM 0 points [-]

Example of the "unappealingness" of constant absolute risk aversion. Say my u-curve were u(x) = 1-exp(-x/400K) over all ranges. What is my value for a 50-50 shot at $10M?

Answer: around $277K. (Note that it is the same for a 50-50 shot at $100M)

Given the choice, I would certainly choose a 50-50 shot at $10M over $277K. This is why over larger ranges, I don't use an exponential u-curve.
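The certainty equivalent quoted above can be checked numerically. A minimal sketch, assuming the $400K risk tolerance from the thread:

```python
import math

R = 400_000  # risk tolerance in dollars, from u(x) = 1 - exp(-x/R)

def u(x):
    """Exponential utility with constant absolute risk aversion."""
    return 1 - math.exp(-x / R)

def certainty_equivalent(outcomes):
    """Invert u at the expected utility of (probability, payoff) pairs."""
    eu = sum(p * u(x) for p, x in outcomes)
    return -R * math.log(1 - eu)

print(round(certainty_equivalent([(0.5, 0), (0.5, 10_000_000)])))   # 277259
print(round(certainty_equivalent([(0.5, 0), (0.5, 100_000_000)])))  # 277259
```

Both gambles are worth about $277K: once the upside dwarfs the risk tolerance, exp(-x/R) is effectively zero and the size of the prize stops mattering, which is exactly the unappealing behavior being described.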

However, it is a good approximation over a range that contains almost all the decisions I have to make. Only for huge decisions do I need to drag out a more complicated u-curve, and those are rare.

Comment author: AndrewKemendo 06 June 2009 10:29:42PM 0 points [-]

It makes sense; however, you mention that you test it against your intuitions. My first reaction would be to say that this introduces a biased variable that is not based on a reasonable calculation.

That may not be the case, as you may have done so many complicated calculations that your unconscious "intuitions" give your conscious mind the right answer. However, from the millionaires' biographies I have read and the rich people I have talked to, a better representation of money and utility, according to them, is logarithmic rather than exponential. This would indicate to me that the relationship between utility and money is counter-intuitive for those who have not experienced the wealth levels being compared.

I have not had the fortune to experience anything more than a 5-figure income, so I cannot reasonably say how my preferences would be modeled. I can reasonably believe that I would be better off at $500K than $50K through simple comparison of lifestyle between myself and a millionaire. But I cannot estimate my own utility accurately enough to say what model would best represent it; the probability of such a guess being accurate is likely no better than a coin flip.

Ed: I had a much better written post but an errant click lost the whole thing - time didn't allow the repetition of the better post.

Comment author: bill 06 June 2009 11:41:47PM 0 points [-]

As I said in my original post, for larger ranges I like logarithmic-type u-curves better than exponential ones, especially for gains. The problem with e.g. u(x)=ln(x), where x is your total wealth, is that you must then be indifferent between your current wealth and a 50-50 shot at doubling vs. halving it. I don't like that deal, so I must not have that curve.

Note that a logarithmic curve can be approximated by a straight line for some small range around your current wealth. It can also be approximated by an exponential for a larger range. So even if I were purely logarithmic, I would still act risk neutral for small deals and would act exponential for somewhat larger deals. Only for very large deals indeed would you be able to identify that I was really logarithmic.
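The double-or-halve indifference follows directly from the algebra of logarithms; a quick sanity check, assuming pure u(x) = ln(x) and an arbitrary wealth level:

```python
import math

W = 100_000  # current wealth; any positive value gives the same result

u = math.log
eu_gamble = 0.5 * u(2 * W) + 0.5 * u(W / 2)

# 0.5*ln(2W) + 0.5*ln(W/2) = ln(W) + 0.5*ln(2) - 0.5*ln(2) = ln(W),
# so a pure log-utility maximizer is exactly indifferent to the gamble.
print(math.isclose(eu_gamble, u(W)))  # True
```

The half-chance of doubling adds exactly the log-utility that the half-chance of halving removes, regardless of W.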

Comment author: AndrewKemendo 06 June 2009 06:40:44PM 1 point [-]

How have you come to these conclusions?

For example:

The reason is that, for me, changing my wealth by a relatively small amount won't radically change my risk preference, and that implies an exponential curve

Is that because there have been points in time when you made $200K and $400K respectively and found that your preferences didn't change much? Or is that simply expected utility?

Comment author: bill 06 June 2009 07:13:45PM 0 points [-]

For the specific quote: I know that, for a small enough change in wealth, I don't need to re-evaluate all the deals I own; they all remain pretty much the same. For example, if you told me I had $100 more in my bank account, I would be happy, but it wouldn't significantly change any of my decisions involving risk. For a utility curve over money, you can prove that this property implies an exponential curve. Intuitively, some range of my utility curve can be approximated by an exponential curve.

Now that I know it is exponential over some range, I needed to figure out which exponential, and over what range it applies. I assessed for myself that I am indifferent between having and not having a deal with a 50-50 chance of winning $400K versus losing $200K. The way I thought about that was through decisions around job hunting and whether or not to take job offers with different salaries.

If that is true, you can combine it with the above and show that the exponential curve should look like u(x) = 1 - exp(-x/400K). Testing it against my intuitions, I find it an okay approximation between minus $200K and $400K. Outside that range, I need better approximations (e.g. if you try it out on a 50-50 shot at $10M, it gives ridiculous answers).
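That the stated indifference point pins down the curve can be checked by solving for the R in u(x) = 1 - exp(-x/R) that makes the 50-50 win-$400K/lose-$200K deal worth exactly zero. A bisection sketch (the exact root lands a few percent above $400K, consistent with "approximately"):

```python
import math

def gamble_utility(R, win=400_000, lose=200_000):
    """Expected utility of a 50-50 win/lose deal under u(x) = 1 - exp(-x/R)."""
    return 0.5 * (1 - math.exp(-win / R)) + 0.5 * (1 - math.exp(lose / R))

# Expected utility rises with R (more risk tolerance makes the deal more
# attractive), so bisect for the R at which the deal is worth exactly zero.
lo, hi = 100_000, 2_000_000
for _ in range(100):
    mid = (lo + hi) / 2
    if gamble_utility(mid) > 0:
        hi = mid
    else:
        lo = mid

print(round(lo))  # roughly 416,000 -- a few percent above the $400K rule of thumb
```

So taking the indifference point itself as the risk tolerance is a convenient approximation rather than the exact solution.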

Does this make sense?

Comment author: bill 04 June 2009 05:58:44AM *  5 points [-]

Here's one data point. Some guidelines have been helpful for me when thinking about my utility curve over dollars. This has been helpful to me in business and medical decisions. It would also work, I think, for things that you can treat as equivalent to money (e.g. willingness-to-pay or willingness-to-be-paid).

  1. Over a small range, I am approximately risk neutral. For example, a 50-50 shot at $1 is worth just about $0.50, since the range we are talking about is only between $0 and $1. One way to think about this is that, over a small enough range, there isn't much practical difference between a curve and a straight line approximating that curve. Over the range -$10K to +$20K, I am risk neutral.

  2. Over a larger range, my utility curve is approximately exponential. For me, between -$200K and +$400K, my utility curve is fairly close to u(x) = 1 - exp(-x/400K). The reason is that, for me, changing my wealth by a relatively small amount won't radically change my risk preference, and that implies an exponential curve. Give me $1M and my risk preferences might change, but within the above range, I pretty much would make the same decisions.

  3. Outside that range, it gets more complicated than I think I should go into here. In short, I am close to logarithmic for gains and exponential for losses, with many caveats and concerns (e.g. avoiding the zero illusion: my utility curve should not have any sort of "inflection point" around my current wealth, since there's nothing special about that particular wealth level).

(1) and (2) can be summarized with one number, my risk tolerance of $400K. One way to assess this for yourself is to ask "Would I like a deal with a 50-50 shot at winning $X versus losing $X/2?" The X that makes you indifferent between having the deal and not having the deal is approximately your risk tolerance. I recommend acting risk neutral for deals between $X/20 and minus $X/40, and using an exponential utility function between $X and minus $X/2. If the numbers get too large, thinking about them in dollars per year instead of total dollars sometimes helps. For example, $400K seems large, but $20K per year forever may be easier to think about.
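The range guidance above can be illustrated with the same u(x) = 1 - exp(-x/R): within roughly ±R/20 the certainty equivalent barely differs from the expected value, while far outside the range the two diverge wildly. A sketch, assuming the $400K risk tolerance from the text:

```python
import math

R = 400_000  # risk tolerance

def ce(outcomes):
    """Certainty equivalent under u(x) = 1 - exp(-x/R)."""
    eu = sum(p * (1 - math.exp(-x / R)) for p, x in outcomes)
    return -R * math.log(1 - eu)

# Small deal (within about R/20): risk-neutral is a fine approximation.
small = [(0.5, 20_000), (0.5, -10_000)]
print(sum(p * x for p, x in small))  # expected value: 5000
print(round(ce(small)))              # certainty equivalent: about 4700

# Huge deal: the exponential answer is nowhere near the expected value.
big = [(0.5, 10_000_000), (0.5, 0)]
print(sum(p * x for p, x in big))    # expected value: 5000000
print(round(ce(big)))                # certainty equivalent: about 277000
```

One way to read the dollars-per-year suggestion: at a 5% discount rate, for instance, a $400K lump sum corresponds to a perpetuity of 0.05 × $400K = $20K per year (the rate is an illustrative assumption, not something the text specifies).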

Long, incomplete answer, but I hope it helps.

Comment author: scientism 29 April 2009 03:15:19AM *  48 points [-]

Maybe I'm just cynical but I think people vastly overestimate their own goodness. Often "goodness" is just a way to dress up powerlessness. Like an overweight man might say he's "stocky" or an overweight woman might say she's "curvy," so an undesirable or shy man or woman might emphasize the upside: "I would never cheat." There's a version of the typical mind fallacy in there: a person might genuinely think they would never cheat but be extrapolating from a position where the opportunity rarely presents itself. We can all talk about how, if we were in a position of political power, we'd never succumb to bribes or cronyism because we don't have any political power. It both makes us look good and, as far as we know, it's true. I think testimony, especially when it comes to one's moral worth, is the least valuable form of data available.

Comment author: bill 29 April 2009 03:11:58PM 28 points [-]

When I've taught ethics in the past, we always discuss the Nazi era. Not because the Nazis acted unethically, but because of how everyone else acted.

For example, we read about the vans that carried Jewish prisoners that had the exhaust system designed to empty into the van. The point is not how awful that is, but that there must have been an engineer somewhere who figured out the best way to design and build such a thing. And that engineer wasn't a Nazi soldier, he or she was probably no different from anyone else at that time, with kids and a family and friends and so on. Not an evil scientist in a lab, but just a design engineer in a corporation.

One point of the discussion is that "normal" people have acted quite unethically in the past, and how can we prevent that happening to us.

Comment author: bill 29 April 2009 03:20:47AM *  48 points [-]

Interesting illustration of mental imagery (from Dennett):

Picture a 3 by 3 grid. Then picture the words "gas", "oil", and "dry" spelled downwards in the columns left to right in that order. Looking at the picture in your mind, read the words across on the grid.

I can figure out what the words are of course, but it is very hard for me to read them off the grid. I should be able to if I could actually picture it. It was fascinating for me to think that this isn't true for everyone.
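For anyone who wants to check the rows without managing to form the image, the transposition is mechanical:

```python
# The words are written downwards as columns; reading across the rows
# of the 3x3 grid is just a transpose of the column words.
cols = ["gas", "oil", "dry"]
rows = ["".join(col[i] for col in cols) for i in range(3)]
print(rows)  # ['god', 'air', 'sly']
```

The interesting question is whether you could read those rows off your mental picture, not whether you could derive them.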

Comment author: Alicorn 10 April 2009 02:27:51AM 5 points [-]

In my (admittedly not immense) experience, intelligent theists who commit to rationality (and stay theists indefinitely) either engage in some heavy-duty partitioning of their beliefs - that is, they commit only partially to rationality, and consider part of their belief network exempt - or cover the gaps in their communicable rationality with incommunicable religious experience. In the first case, it's a clear case of not being wholly rational; if we can talk about those people as a convenient, accessible example of not-wholly-rational individuals with an obvious area of non-rationality, and happen not to severely offend anyone here, there seems no harm.

The latter case, however, makes me nervous, perhaps because I have a lot of Mormon friends and they seem to have a lot of incommunicable religious experiences as a group. From talking to my smart, generally rational Mormon friends - at least those of them who will let me interrogate them about this sort of thing - I find that they act and speak exactly like they're applying rational principles to experiences that they have had, which I just have not happened to have.

Since theists include both the partitioners and the experiencers (and probably some overlap and some categories I haven't thought of or met), perhaps we should stop talking about theists in general as our target group and start speaking of some narrower collection of people, if we want to stay with the example of religion for whatever reason. "Fundamentalists", perhaps - anyone who has met an intelligent, rational, non-partitioning fundamentalist will surprise me, but is of course welcome to shoot down this suggestion.

Comment author: bill 10 April 2009 04:20:37PM 3 points [-]

Intelligent theists who commit to rationality also seem to say that their "revelatory experience" is less robust than scientific, historical, or logical knowledge/experience.

For example, if they interpret their revelation to say that God created all animal species separately, and scientific evidence then proves beyond reasonable doubt that this is untrue, they must have misinterpreted their revelatory experience (I believe this is the Catholic Church's current position, for example). Similarly, if their interpretation of their revelation contradicts logical arguments, logic wins over revelation.

This seems consistent with the idea that they have had a strange experience that they are trying to incorporate into their other experience.

For me personally, I have a hard time imagining a private experience that would convince me that God has revealed something to me. I would think it far more likely that I had simply gone temporarily crazy (or at least as crazy as other people who have had other, contradictory revelations). So I don't think that such "experiences" should update my state of information, and I don't update based on others' claims of those experiences either.

Comment author: bill 10 April 2009 03:45:35AM *  0 points [-]

I am struggling with the general point, but I think in some situations it is clear that one is in a "bad" state and needs improvement. Here is an example (similar to Chris Argyris's XY case).

A: "I don't think I'm being effective. How can I be of more help to X?"

B: "Well, just stop being so negative and pointing out others' faults. That just doesn't work and tends to make you look bad."

Here, B is giving advice on how to act, while at the same time acting contrary to that advice. The values B wants to follow are clearly not the values he is actually following; furthermore, B doesn't realize that this is happening (or he wouldn't act that way).

This seems to be a state that is clearly "bad", and shouldn't be seen as just different. If I am demonstrably and obliviously acting against my values as I would express them at the time, then I clearly need help. Note that this is different from saying that I am acting against some set of values I would consider good if I were in a different/better state of mind. The values I am unknowingly transgressing are the ones I think I'm currently trying to fulfill.

Does this make sense? What are your reactions?

By the way, this is a common situation; people feeling stress, threat, or embarrassment often start acting in this way.

Comment author: AnnaSalamon 09 April 2009 08:15:37PM 0 points [-]

He seems to be using 1/1000 as the cutoff for where human estimates of probability stop being accurate enough to base decisions on.

I doubt this is what Roko means. Probabilities are "in the mind"; they're our best subjective estimates of what will happen, given our incomplete knowledge and calculating abilities. In some sense it doesn't make sense to talk about our best-guess probabilities being (externally) "accurate" or "inaccurate". We can just make the best estimates we can make.

What can it mean for probabilities to "not be accurate enough to base decisions on"? We have to decide, one way or another, with the best probabilities we can build or with some other decision procedure. Is zero an accurate enough probability (of cryonics success, or of a given Pascal's wager-like situation) to base decisions on, if an estimated 1 in ten thousand or whatever is not?

Comment author: bill 09 April 2009 08:32:55PM *  1 point [-]

When dealing with health and safety decisions, people often need to deal with one-in-a-million types of risks.

In nuclear safety, I hear, they use a measure called "nanomelts", i.e. a one-in-a-billion risk of a meltdown. They can then rank risks by cost-to-fix per nanomelt, for example.

In both of these cases, though, the numbers might be based on data and then scaled down to different timescales (e.g. annual US car-accident deaths work out to very roughly a one-in-a-million daily risk of death from driving; statistical techniques can then adjust that number for age, drunkenness, etc.).

Comment author: MrHen 09 April 2009 06:16:26PM 1 point [-]

That being said, if someone sat me down and offered me the life of my dreams if I rolled 4 sixes but would shoot me if I failed, I would pass. Expected payout be darned, I want to live.

I think my thinking about cryonics does boil down to this: living now is of significantly more value (to me) than potentially living more or better later.

A more concrete example: if a being showed up and told me I had 10 years left to live but gave me the option of "dying" now, being reanimated 10 years later, and then living for 11 years instead, I would probably still pass. I have no idea how much extra time would tip the scales, but even at 100% confidence it is more than "any".

Comment author: bill 09 April 2009 08:22:40PM 2 points [-]

I've used that as a numerical answer to the question "How are you doing today?"

A: Perfect life (health and wealth)
B: Instant painless death
C: Current life

What probability p of A (and 1-p of B) makes you indifferent between that deal and C? That probability p represents an answer to the question "How are you doing?"

Almost nothing that happens to me changes that probability by much, so I've learned not to sweat most ups and downs in life. Things that change that probability (disabling injury or other tragedy) are what to worry about.
