In the comments to this post, several people independently stated that being risk-averse is the same as having a concave utility function. There is, however, a subtle difference here. Consider the example proposed by one of the commenters: an agent with a utility function

u = sqrt(p) utilons for p paperclips.

The agent is offered a choice between a bet with a 50/50 chance of a payoff of 9 or 25 paperclips, and simply receiving 16.5 paperclips. The expected payoff of the bet is a full 9/2 + 25/2 = 17 paperclips, yet its expected utility is only 3/2 + 5/2 = 4 = sqrt(16) utilons, which is less than the sqrt(16.5) ≈ 4.06 utilons of the guaranteed deal, so our agent goes for the latter, losing 0.5 expected paperclips in the process. Thus, it is claimed that our agent is risk-averse in that it sacrifices 0.5 expected paperclips to get a guaranteed payoff.
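
For readers who want to check the arithmetic, here is a minimal sketch in Python (the code and names are mine, not part of the original example):

```python
from math import sqrt

def u(paperclips):
    # Utility function from the example: u(p) = sqrt(p) utilons for p paperclips.
    return sqrt(paperclips)

bet = [(0.5, 9), (0.5, 25)]   # 50/50 bet over paperclip payoffs
sure_deal = 16.5              # guaranteed paperclips

expected_paperclips = sum(prob * payoff for prob, payoff in bet)   # 17.0
expected_utilons = sum(prob * u(payoff) for prob, payoff in bet)   # 4.0
sure_deal_utilons = u(sure_deal)                                   # ~4.06

# The agent maximizes expected utilons, so it takes the sure deal even though
# the bet has 0.5 more expected paperclips.
assert sure_deal_utilons > expected_utilons
assert expected_paperclips > sure_deal
```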

Is this a good model for the cognitive bias of risk aversion? I would argue that it's not. Our agent ultimately cares about utilons, not paperclips, and in the present case it does perfectly fine at rationally maximizing expected utilons. A cognitive bias should instead be some irrational behavior pattern that can be exploited to take utility (rather than paperclips) away from the agent. Consider now another agent, with the same utility function as before, but with one small additional trait: it strictly prefers a sure payoff of 16 paperclips to the above bet. Given our agent's utility function, 16 is the point of indifference, so could there be any problem with its behavior? It turns out there is. For example, we can follow the post on Savage's theorem (see Postulate #4). If the sure payoff of

16 paperclips = 4 utilons

is strictly preferred to the bet

{P(9 paperclips) = 0.5; P(25 paperclips) = 0.5} = 4 utilons,

then there must also exist some finite δ > 0 such that the agent strictly prefers a guaranteed 4 utilons to betting on

{P(9) = 0.5 - δ; P(25) = 0.5 + δ} = 4 + 2δ utilons

- all at a loss of 2δ expected utilons! Equivalently, our agent is willing to pay a finite number of paperclips to replace the bet with a sure deal of the same expected utility.
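
To make the 2δ loss concrete, here is a hedged numerical sketch (δ = 0.05 is an assumed value for illustration; the argument only needs some finite δ > 0):

```python
from math import sqrt

def u(p):
    return sqrt(p)

delta = 0.05  # assumed for illustration; the argument only needs some finite delta > 0

# Shifted bet: {P(9) = 0.5 - delta; P(25) = 0.5 + delta}
shifted_bet_utilons = (0.5 - delta) * u(9) + (0.5 + delta) * u(25)   # 4 + 2*delta = 4.1
sure_utilons = u(16)                                                 # 4.0

# A "risk-averse" agent (in the sense above) still takes the sure 16 paperclips,
# forgoing 2*delta = 0.1 expected utilons.
print(shifted_bet_utilons - sure_utilons)   # ~0.1
```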

What we have just seen falls pretty nicely within the concept of a bias. Our agent has a perfectly fine utility function, but it also has this other thing - let's name it "risk aversion" - that makes the agent's behavior fall short of being perfectly rational, and is independent of its concave utility function for paperclips. (Note that our agent has linear utility for utilons, but is still willing to pay some amount of those to achieve certainty.) Can we somehow fix our agent? Let's see if we can redefine our utility function u'(p) in some way so that it gives us a consistent preference for

guaranteed 16 paperclips

over the

 {P(9) = 0.5; P(25) = 0.5}

bet, but we would also like to require that the agent still strictly prefers the bet

{P(9 + δ) = 0.5; P(25 + δ) = 0.5}

to {P(16) = 1} for some finite δ > 0, so that our agent is not infinitely risk-averse. Can we say anything about this situation? Well, if u'(p) is continuous, then by the intermediate value theorem there must exist some number δ' with 0 < δ' < δ such that our agent is indifferent between {P(16) = 1} and

{P(9 + δ') = 0.5; P(25 + δ') = 0.5}.

And, of course, being risk-averse (in the above-defined sense), our supposedly rational agent will prefer - no harm done - the guaranteed payoff to the bet of the same expected utility u'... Sounds familiar, doesn't it?
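
To make the continuity argument concrete, here is a small sketch assuming a hypothetical u'(p) = p^0.4 (an assumption purely for illustration; any continuous, increasing u' that strictly prefers the sure 16 would do), using bisection to locate the indifference point δ':

```python
def u_prime(p):
    # Hypothetical candidate utility function u'(p) = p**0.4 (assumed for illustration).
    return p ** 0.4

def bet_utility(delta):
    # Expected u' of the shifted bet {P(9 + delta) = 0.5; P(25 + delta) = 0.5}.
    return 0.5 * u_prime(9 + delta) + 0.5 * u_prime(25 + delta)

target = u_prime(16)              # u' of the sure deal
assert bet_utility(0.0) < target  # sure 16 strictly preferred to the original bet
assert bet_utility(1.0) > target  # a large enough shift is strictly preferred to sure 16

# Bisection: continuity of u' guarantees an indifference point delta' in (0, 1).
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if bet_utility(mid) < target:
        lo = mid
    else:
        hi = mid

print(round(lo, 3))  # delta' ~ 0.19 for this particular u'
# At exactly this delta' the agent faces a bet with the same expected u' as the
# sure deal, and "risk aversion" again makes it pay to avoid the bet.
```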

I would like to stress again that, although our first agent does have a concave utility function for paperclips, which causes it to reject bets in favor of guaranteed payoffs with fewer expected paperclips, it still maximizes its expected utilons, for which it has linear utility. Our second agent, however, has an extra property that causes it to sacrifice expected utilons to achieve certainty. And it turns out that with this property it is impossible to define a well-behaved utility function at all! Therefore it seems natural to distinguish being rational with a concave utility function, on the one hand, from being risk-averse and unable to have a well-behaved utility function, on the other. The latter case seems more subtle at first sight, but it causes a more fundamental kind of problem. This is why I feel that a clear, even if minor, distinction between the two situations is worth making explicit.

A rational agent can have a concave utility function. A risk-averse agent cannot be rational.

(Of course, even in the first case the question of whether we want a concave utility function is still open.)

35 comments

We ought to have two different terms for 'concavity of utility function' and 'Allais-paradox-like behaviour'; having 'risk-averse' mean both is too likely to lead to confusion.

Concavity of utility function = diminishing marginal utility.

Edit: That should probably be convexity, but you should also have said convexity.

(I usually specify whether I mean concave upwards or concave downwards because I can never remember the standard meaning of concave by itself...)

Is your agent a human being (or some other animal), as opposed to some artificial creature that has been created specifically to be rational? If it is, then you should distinguish between two different utilities of the same lottery when the drawing is in the future:

1) The expected utility after the drawing

2) The utility (actual, not expected) of having the drawing in your future

The second is influenced by the first, but also by the emotions and any other experiences that are caused by beliefs about the lottery. This post deals very well with the first, but ignores the second.


Man, I chose risk aversion as an example I thought would be uncontroversially accepted as a bias. Oh well...

Man, I chose risk aversion as an example I thought would be uncontroversially accepted as a bias. Oh well...

It is uncontroversially a bias away from expected utility maximisation. (I have a post in mind exploring why the 'expected' part of utility maximisation is not obviously a correct state of being for related reasons.)

It is uncontroversially a bias away from expected utility maximisation.

No it's not; risk aversion is a property of utility functions. You're talking about the certainty effect.

No it's not; risk aversion is a property of utility functions. You're talking about the certainty effect.

No I'm not. I'm talking about the same thing Nyan is talking about. That is, risk aversion when it comes to actual utility - which is itself a general bias of humans. He isn't talking about diminishing marginal utility, which is a property of utility functions. Once you start being risk averse with respect to actual utility you stop being an expected utility maximiser and become a different kind of utility maximiser that isn't obsessed with the mean over the probability distribution.

No I'm not. I'm talking about the same thing Nyan is talking about.

nyan_sandwich mislabeled their discussion, which appears to be the source of much of the controversy. If you want to talk about minimax, talk about minimax, don't use another term that has an established meaning.

That is, risk aversion when it comes to actual utility - which is itself a general bias of humans.

The only general bias I've heard of that's close to this is the certainty effect. If there's another one I haven't heard of, I would greatly appreciate hearing about it.


nyan_sandwich mislabeled their discussion

Sorry guys.

The only general bias I've heard of that's close to this is the certainty effect. If there's another one I haven't heard of, I would greatly appreciate hearing about it.

I don't think it's all the certainty effect. The bias that people seem to have can usually be modeled by a nonlinear utility function, but isn't it still there in cases where it's understood that utility is linear (lives saved, charity dollars, etc)?

but isn't it still there in cases where it's understood that utility is linear (lives saved, charity dollars, etc)?

Why would those be linear? (i.e. who understands that?)

Utility functions are descriptive; they map from expected outcomes to actions. You measure them by determining what actions people take in particular situations.

Consider scope insensitivity. It doesn't make sense if you measure utility as linear in the number of birds- aren't 200,000 birds 100 times more valuable than 2,000 birds? It's certainly 100 times more birds, but that doesn't tell us anything about value. What it tells you is that the action "donate to save birds in response to prompt" provides $80 worth of utility, and the number of birds doesn't look like an input to the function.

And while scope insensitivity reflects a pitfall in human cognition, it's not clear it doesn't serve goals. If the primary benefit for a college freshman of, say, opposing genocide in Darfur is that they signal their compassion, it doesn't really matter what the scale of the genocide in Darfur is. Multiply or divide the number of victims by ten, and they're still going to slap on a "save Darfur" t-shirt, get the positive reaction from that, and then move on with their lives.

Now, you may argue that your utility function should be linear with respect to some feature of reality- but that's like saying your BMI should be 20. It is whatever it is, and will take effort to change. Whether or not it's worth the effort is, again, a question of revealed preferences.

Why would those be linear?

Given that the scope of the problem is so much larger than the influence that we usually have when making the calculations here, the gradient at the margin is essentially linear.
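
A rough sketch of this point (the concave utility function and numbers are assumed purely for illustration): when the stake is tiny relative to the scale of the problem, a concave u is locally indistinguishable from a linear one.

```python
from math import sqrt

# With u(x) = sqrt(x), marginal utility is effectively constant when the stake
# is small compared to the overall scale of the problem (numbers are assumed).
base = 1_000_000
for extra in (100, 200, 400):
    gain = sqrt(base + extra) - sqrt(base)
    print(extra, round(gain / extra, 7))   # ~0.0005 utilons per unit in every case
```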

(i.e. who understands that?)

Most people who have read Eliezer's posts. He has made at least one on this subject.


Given that the scope of the problem is so much larger than the influence that we usually have when making the calculations here, the gradient at the margin is essentially linear.

That's exactly what I would say, in way fewer words. Well said.

nyan_sandwich mislabeled their discussion, which appears to be the source of much of the controversy. If you want to talk about minimax, talk about minimax, don't use another term that has an established meaning.

In the specific case of risk aversion he is using the term correctly, and your substitution with the meaning behind "diminishing marginal utility" is not a helpful correction, it is an error. Minimax is again related but also not the correct word. (I speak because in Nyan's situation I would be frustrated by being falsely corrected.)

In the specific case of risk aversion he is using the term correctly

If you could provide examples of this sort of usage in the utility theory literature or textbooks, I will gladly retract my corrections. I don't recall seeing "risk aversion" used this way before.

Minimax is again related but also not the correct word.

nyan_sandwich has edited their post to reflect that minimax was their intention.

If you could provide examples of this sort of usage in the utility theory literature or textbooks, I will gladly retract my corrections. I don't recall seeing "risk aversion" used this way before.

It is just the standard usage if applied appropriately to utility. Even the 'certainty effect' that you mention is an example of being risk averse with respect to utility, albeit one highly limited to a specific subset of cases - again, when the thing at risk is evaluated in terms of utility.

nyan_sandwich has edited their post to reflect that minimax was their intention.

Which may apply somewhere in the post, but in this specific context it just wouldn't have made sense in the sentence.


Oh cool, can't wait.

Your claim that a risk-averse agent cannot be rational is trivially true because it is purely circular.

You've defined a risk-averse agent as someone who does not maximize their expected utilons. The meaning of "rational" around these parts is, "maximizes expected utilons." The fact that you took a circuitous route to make this point does not change the fact that it is trivial.

I'll break down that point in case it's non-obvious. Utilons do not exist in the real world - there is no method of measuring utilons. Rather, they are a theoretical construct you are employing. You've defined a rational agent as the one who maximizes the amount of utilons he acquires. You've specified a function as to how he calculates these, but the specifics of that function are immaterial. You've then shown that someone who does not rationally maximize these utilons is not a rational utilon maximizer.

Risk aversion with respect to paper clips or dollars is an empirical claim about the world. Risk aversion with respect to utilons is a claim about preference with respect to a theoretical construct that is defined by those preferences. It is not meaningful to discuss it, because the answer follows logically from the definition you have chosen.


I'll break down that point in case it's non-obvious. Utilons do not exist in the real world - there is no method of measuring utilons.

(There is no method in the context of this discussion, but figuring out how to "measure utilons" (with respect to humans) is part of the FAI problem. If an agent doesn't maximize utility suggested by that agent's construction (in the same sense as human preference can hopefully be defined based on humans), that would count as a failure of that agent's rationality.)

[This comment is no longer endorsed by its author]

Risk aversion with respect to paper clips or dollars is an empirical claim about the world. Risk aversion with respect to utilons is a claim about preference with respect to a theoretical construct that is defined by those preferences. It is not meaningful to discuss it, because the answer follows logically from the definition you have chosen.

And yet this was still disputed. Perhaps the point being made is less obvious to some others than it is to you. The same applies to many posts.

Perhaps the point being made is less obvious to some others than it is to you. The same applies to many posts.

This is like a dismissive... compliment? I'm not sure how to feel!

Seriously, though, it doesn't undermine my point. This article ultimately gets to the same basic conclusion, but does it in a very roundabout way. The definition of "utilons" - converting outcomes into utilons - eliminates risk aversion. This extensive discussion ultimately makes the point that it's irrational to be utilon risk averse, but it doesn't really hit the bigger point that utilon risk aversion is fundamentally nonsensical. The fact that people don't realize that there's circular reasoning going on is all the more reason to point out that it is happening.

I disagree with your connotations. While the point is obvious and even follows logically from the premises it is not 'circular' in any meaningful sense. People are still getting confused on the issue so explaining it is fine.

I don't mean obvious in the, "Why didn't I think of that?" sense. I mean obvious in the trivial sense. When I say that it is circular, I don't mean simply that the conclusion follows logically from the premises. That is the ultimate virtue of an argument. What I mean is that the conclusion is one of the premises. The definition of a rational person is one who maximizes their expected utility. Therefore, someone who is risk-averse with respect to utility is irrational; our definition of rational guarantees that this be so.

I certainly see why the overall issue leads to confusion and why people don't see the problem instantly - the language is complex, and the concept of "utilons" folds a lot of concepts into itself so that it's easy to lose track of what it really means. I don't think this post really appreciates this issue, and it seems to me to be the deepest problem with this discussion. It reads like it is analyzing an actual problem, rather than unpacking an argument to show how it is circular, and I think the latter is the best description of the actual problem.

In other words, the article makes it easy to walk away without realizing that it is impossible for a rational person to be risk averse towards utility because it contradicts what we mean by "rational person." That seems like the key issue here to me.

I don't mean obvious in the, "Why didn't I think of that?" sense. I mean obvious in the trivial sense. When I say that it is circular, I don't mean simply that the conclusion follows logically from the premises.

And, for the sake of clarity, I have expressed disagreement with this position.

For what it's worth I don't necessarily agree with the post in full - I just don't apply this particular rejection.

Ok, I finally get it: nyan_sandwich and you are using risk aversion in the common way used to describe why someone is unwilling to risk $50 and/or certainty effects, not in the way standard to economists. If someone takes an irrational action and tries to justify it by citing risk aversion, should we adopt that as the name of the bias or say that was a bad justification?

People do exhibit inconsistent amounts of risk aversion over small and large risks, but calling that "risk aversion" seems misplaced. We know it's inconsistent to be scared to fly and feel fine riding in a car, but we wouldn't call that "bias against death" or a "cautious bias". I feel you are doing something analogous here.

See also: "Diminishing marginal utility of wealth cannot explain risk aversion," which I found in the comment here: http://lesswrong.com/lw/15f/misleading_the_witness/11ad, but which I think I originally read in another thread on LessWrong that I can't find at the moment.

  1. As for me, one of the main reasons I wouldn't take a bet winning $110 or losing $100 is that I would take the existence of someone willing to offer such a bet as evidence that there's something about the coin to be flipped that they know and I don't; if such a bet was implemented in a way that's very hard for either partner to game (e.g. getting one random bit from random.org with both of us looking at the computer) I'd likely take it, but I don't anticipate being offered such a bet in the foreseeable future.

  2. I think some of the refused bets on the right-hand column of the table on Page 3 of that paper are not as absurd as Rabin thinks -- Eliezer (IIRC) pointed out that there are quite a few people who would choose a 100% chance of receiving $500 over a 10% chance of receiving $1 million. (I'm not sure whether I'd accept some of those bets myself.)

This is not to say that human preferences can always be described by a utility function (see the Allais paradox), but I don't think Rabin's argument is sufficient evidence that they can't.

As for me, one of the main reasons I wouldn't take a bet winning $110 or losing $100 is that I would take the existence of someone willing to offer such a bet as evidence that there's something about the coin to be flipped that they know and I don't

This seems to follow the no-trade theorem for zero-sum games.

If the sure payoff of

16 paperclips = 4 utilons

is strictly preferred to the bet

{P(9 paperclips) = 0.5; P(25 paperclips) = 0.5} = 4 utilons

Then you have a contradiction in terms, because you shouldn't have a strict preference for outcomes with the same number of utilons.

The sqrt(paperclips) agent should be indifferent between 16 paperclips and {.5: 9; .5: 25} paperclips. It has a strict preference for 16.5 paperclips to either 16 paperclips or {.5: 9; .5: 25} paperclips.

Savage's 4th axiom - the strict preference - says that in order for you to strictly prefer 16.5 paperclips to 16 paperclips, there has to be a difference in the utilon values. There is - 16.5 paperclips represents 4.06 utilons vs. only 4 for 16 paperclips.

By the 4th axiom, we can construct other bets: say, {.5: 9.4; .5: 25.4}. The agent strictly prefers 16.5 paperclips to that deal (which has 4.05 utilons).
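
A quick check of these numbers (a sketch of mine, not part of the original comment):

```python
from math import sqrt

def eu(lottery):
    # Expected sqrt-utility of a lottery given as [(probability, paperclips), ...].
    return sum(p * sqrt(x) for p, x in lottery)

print(eu([(1.0, 16)]))                  # 4.00 utilons
print(eu([(0.5, 9), (0.5, 25)]))        # 4.00 utilons -- indifference
print(eu([(1.0, 16.5)]))                # ~4.06 utilons
print(eu([(0.5, 9.4), (0.5, 25.4)]))    # ~4.05 utilons
```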

Upvoted. In my opinion, the literature on risk-averse agents is logically consistent, and being risk-averse does not imply irrationality. I agree with Vaniver's comments. Also, humans are, on average*, risk averse.

*For example, with respect to markets, 'market clearing' average in a Walrasian auction sense.

Won't you get behavior practically indistinguishable from the 'slightly risk averse agent with sqrt(x) utility function' by simply using x^0.499 as utility function?
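
A quick numerical check of this (a sketch; the lotteries are the ones from the post, the code is an assumption of mine): an expected-utility maximizer with u(x) = x^0.499 already strictly prefers the guaranteed 16 paperclips to the 9/25 bet, with no extra risk-aversion machinery.

```python
def eu(lottery, exponent):
    # Expected utility of [(probability, paperclips), ...] under u(x) = x**exponent.
    return sum(p * x ** exponent for p, x in lottery)

bet = [(0.5, 9), (0.5, 25)]
sure_16 = [(1.0, 16)]

print(eu(sure_16, 0.5), eu(bet, 0.5))        # 4.0 vs 4.0 -- exact indifference under sqrt
print(eu(sure_16, 0.499) > eu(bet, 0.499))   # True -- strict preference for the sure deal
```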

Also, by the way: the resulting final utility function for any sort of variable need not be smooth, monotonically increasing, or inexpensive to calculate.

Consider my utility function for food obtained by me right now. Slightly more than is optimal for me to eat before it spoils (in the summer, without a fridge) would give no extra utility whatsoever over the right amount, or even results in disutility (more trash). A lot more may make it worth inviting a friend for dinner, and utility starts growing again.

Essentially, the utility peaks, then starts going down, then at some not-very-well-defined point suddenly starts growing again.

There can be all sorts of really odd-looking 'irrational' heuristics that work as a better substitute for the true utility function (which is expensive to calculate, but is known to follow a certain broken-line pattern) than some practical-to-compute utility function.

WRT the utility of extra money... money itself is worth nothing; it's the changes to your life that you can make with it that matter. As it is, I would take a 10% shot at $10 million over $100,000 for certain; 15 years ago I would have taken $10,000 for certain over a 10% shot at $10 million (of course, in the latter case it ought to be possible to partner up with someone who has big capital to get, say, $800,000 for certain).
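
A sketch of how such a flip can fall out of an ordinary concave utility function over total wealth (log utility and the specific wealth levels are assumptions of mine, not the commenter's model):

```python
from math import log

def eu_wealth(current_wealth, lottery):
    # Expected log-utility over total wealth for [(probability, prize), ...].
    return sum(p * log(current_wealth + prize) for p, prize in lottery)

ten_percent_shot = [(0.1, 10_000_000), (0.9, 0)]

# With little existing wealth, the sure $100,000 wins...
print(eu_wealth(10_000, [(1.0, 100_000)]) > eu_wealth(10_000, ten_percent_shot))        # True

# ...but with substantial existing wealth, the 10% shot at $10M wins.
print(eu_wealth(1_000_000, [(1.0, 100_000)]) < eu_wealth(1_000_000, ten_percent_shot))  # True
```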

Ultimately, attaching utility functions to stuff is like considering a fairly bad chess AI that just sums the values of pieces and perhaps positional features. That sort of AI, running on the same hardware, is going to lose big time to AIs with more clever board evaluation.

Upvoted since it's (to me) a very interesting topic, even if I disagree with your conclusion.

In short, my thesis is: taking a risk decreases your knowledge of the world, and therefore your ability to optimize, until you know whether you won or lost your bet. But explaining it in detail grew so much that I made a new article about it.


Your article is so long I haven't read it yet. This summary is enough for me tho.

taking a risk decreases your knowledge of the world, and therefore your ability to optimize, until you know whether you won or lost your bet.

This is a very good point.

Upvoted. This makes your comment on the other thread much clearer to me, and I appreciate it.


I haven't read all of this post or the one it's a response to, but it looks like one could resolve the confusion here by talking explicitly about either "risk aversion with respect to outcome measures" or "risk aversion with respect to utility itself".

[This comment is no longer endorsed by its author]