In the comments to this post, several people independently stated that being risk-averse is the same as having a concave utility function. There is, however, a subtle difference here. Consider the example proposed by one of the commenters: an agent with a utility function
u = sqrt(p) utilons for p paperclips.
The agent is offered a choice between a bet with a 50/50 chance of receiving 9 or 25 paperclips, or simply receiving 16.5 paperclips. The expected payoff of the bet is a full 9/2 + 25/2 = 17 paperclips, yet its expected utility is only 3/2 + 5/2 = 4 = sqrt(16) utilons, which is less than the sqrt(16.5) ≈ 4.06 utilons of the guaranteed deal, so our agent goes for the latter, losing 0.5 expected paperclips in the process. Thus, it is claimed, our agent is risk-averse in that it sacrifices 0.5 expected paperclips to get a guaranteed payoff.
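For concreteness, here is a minimal Python sketch of the arithmetic above; the square-root utility and the 9-or-25 bet are taken straight from the example, and everything else is just bookkeeping:

```python
from math import sqrt

def utility(paperclips):
    # u(p) = sqrt(p): the concave utility function from the example
    return sqrt(paperclips)

# The bet: 50/50 chance of 9 or 25 paperclips
bet = [(0.5, 9), (0.5, 25)]

expected_paperclips = sum(prob * payoff for prob, payoff in bet)
expected_utilons = sum(prob * utility(payoff) for prob, payoff in bet)

print(expected_paperclips)   # 17.0
print(expected_utilons)      # 4.0
print(utility(16.5))         # ~4.062 > 4, so the sure 16.5 paperclips win
```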
Is this a good model for the cognitive bias of risk aversion? I would argue that it's not. Our agent ultimately cares about utilons, not paperclips, and in the present case it does perfectly fine at rationally maximizing expected utilons. A cognitive bias should instead be some irrational behavior pattern that can be exploited to take utility (rather than paperclips) away from the agent. Consider now another agent with the same utility function as before, but with one small additional trait: it strictly prefers a sure payoff of 16 paperclips to the above bet. Given our agent's utility function, 16 is the point of indifference, so could there be any problem with its behavior? It turns out there is. For example, we can follow the post on Savage's theorem (see Postulate #4). If the sure payoff of
16 paperclips = 4 utilons
is strictly preferred to the bet
{P(9 paperclips) = 0.5; P(25 paperclips) = 0.5} = 4 utilons,
then there must also exist some finite δ > 0 such that the agent still strictly prefers the guaranteed 4 utilons to betting on
{P(9) = 0.5 - δ; P(25) = 0.5 + δ} = 4 + 2δ utilons
- all at the loss of 2δ expected utilons! Equivalently, our agent is willing to pay a finite number of paperclips to replace the bet with a sure deal of the same expected utility.
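A small sketch of that calculation, with the probabilities and payoffs taken from the argument above (the particular value of δ is only illustrative):

```python
from math import sqrt

def expected_utilons(delta):
    # Expected utility of the bet {P(9) = 0.5 - delta; P(25) = 0.5 + delta}
    # under u(p) = sqrt(p)
    return (0.5 - delta) * sqrt(9) + (0.5 + delta) * sqrt(25)

delta = 0.01                      # any small positive delta will do
print(expected_utilons(0.0))      # 4.0, exactly the utility of a sure 16 paperclips
print(expected_utilons(delta))    # 4.02 = 4 + 2*delta
# An agent that still takes the sure 16 paperclips here
# gives up 2*delta expected utilons.
```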
What we have just seen falls pretty nicely within the concept of a bias. Our agent has a perfectly fine utility function, but it also has this other thing - let's name it "risk aversion" - that makes the agent's behavior fall short of being perfectly rational, and that is independent of its concave utility function for paperclips. (Note that our agent has linear utility for utilons, yet is still willing to pay some of them to achieve certainty.) Can we somehow fix our agent? Let's see if we can redefine its utility function as some u'(p) that gives a consistent preference for
guaranteed 16 paperclips
over the
{P(9) = 0.5; P(25) = 0.5}
bet, while also requiring that the agent still strictly prefer the bet
{P(9 + δ) = 0.5; P(25 + δ) = 0.5}
to {P(16) = 1} for some finite δ > 0, so that our agent is not infinitely risk-averse. Can we say anything about this situation? Well, if u'(p) is continuous, then by the intermediate value theorem there must exist some number δ' with 0 < δ' < δ such that our agent is indifferent between {P(16) = 1} and
{P(9 + δ') = 0.5; P(25 + δ') = 0.5}.
And, of course, being risk-averse (in the above-defined sense), our supposedly rational agent will prefer - no harm done - the guaranteed payoff to this bet of the same expected utility u', and the whole argument above applies again... Sounds familiar, doesn't it?
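To make the δ' argument concrete, here is a sketch under an assumed candidate utility function u'(p) = log(1 + p) (any continuous, strictly increasing u' that prefers a sure 16 to the original bet would do): the intermediate value theorem lets us find δ' by bisection, and at that δ' the risk-averse agent again strictly prefers a sure payoff to a bet of equal expected u'.

```python
from math import log

def u_prime(p):
    # An assumed candidate utility function: continuous, strictly increasing,
    # and preferring a sure 16 to the original 9/25 bet.
    return log(1.0 + p)

def bet_value(delta):
    # Expected u' of the bet {P(9 + delta) = 0.5; P(25 + delta) = 0.5}
    return 0.5 * u_prime(9 + delta) + 0.5 * u_prime(25 + delta)

target = u_prime(16)          # u' of the guaranteed 16 paperclips

# bet_value(0) < target, and bet_value grows without bound,
# so bisection finds the indifference point delta'.
low, high = 0.0, 100.0
for _ in range(60):
    mid = (low + high) / 2
    if bet_value(mid) < target:
        low = mid
    else:
        high = mid

delta_prime = (low + high) / 2
print(delta_prime)                      # ~0.79 for this particular u'
print(bet_value(delta_prime), target)   # equal expected u', yet the risk-averse
                                        # agent still takes the sure 16
```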
I would like to stress again that, although our first agent does have a concave utility function for paperclips, which causes it to reject bets with a higher expected payoff in paperclips in favor of guaranteed payoffs of fewer paperclips, it still maximizes its expected utilons, for which it has linear utility. Our second agent, however, has an extra property that causes it to sacrifice expected utilons to achieve certainty, and with this property it turns out to be impossible to define a well-behaved utility function at all! It therefore seems natural to distinguish being rational with a concave utility function, on the one hand, from being risk-averse and unable to have a well-behaved utility function at all, on the other. The latter case seems more subtle at first sight, but it causes a more fundamental kind of problem. This is why I feel that a clear, even if minor, distinction between the two situations is worth making explicit.
A rational agent can have a concave utility function. A risk-averse agent cannot be rational.
(Of course, even in the first case the question of whether we want a concave utility function is still open.)
See also: Diminishing marginal utility of wealth cannot explain risk aversion, which I found in a comment here: http://lesswrong.com/lw/15f/misleading_the_witness/11ad (though I think I first read it in another LessWrong thread that I can't find at the moment).
As for me, one of the main reasons I wouldn't take a bet winning $110 or losing $100 is that I would take the existence of someone willing to offer such a bet as evidence that there's something about the coin to be flipped that they know and I don't. If such a bet were implemented in a way that's very hard for either party to game (e.g. getting one random bit from random.org with both of us looking at the computer), I'd likely take it, but I don't anticipate being offered such a bet in the foreseeable future.