In the comments to this post, several people independently stated that being risk-averse is the same as having a concave utility function. There is, however, a subtle difference here. Consider the example proposed by one of the commenters: an agent with a utility function
u = sqrt(p) utilons for p paperclips.
The agent is being offered a choice between taking a bet with a 50/50 chance of a payoff of 9 or 25 paperclips, or simply receiving 16.5 paperclips. The expected payoff of the bet is a full 9/2 + 25/2 = 17 paperclips, yet its expected utility is only 3/2 + 5/2 = 4 = sqrt(16) utilons, which is less than the sqrt(16.5) utilons for the guaranteed deal, so our agent goes for the latter, losing 0.5 expected paperclips in the process. Thus, it is claimed that our agent is risk-averse in that it sacrifices 0.5 expected paperclips to get a guaranteed payoff.
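As a quick numerical check, here is a minimal sketch that just plugs the numbers from the example into the sqrt utility function:

```python
import math

def u(paperclips):
    return math.sqrt(paperclips)  # the agent's utility function from the example

# The 50/50 bet between 9 and 25 paperclips vs. a sure 16.5 paperclips.
expected_paperclips = 0.5 * 9 + 0.5 * 25          # 17.0
expected_utilons    = 0.5 * u(9) + 0.5 * u(25)    # 0.5*3 + 0.5*5 = 4.0
sure_deal_utilons   = u(16.5)                     # ~4.062

print(expected_paperclips, expected_utilons, sure_deal_utilons)
# The sure deal wins in utilons (4.062 > 4.0) even though it loses in paperclips (16.5 < 17).
```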
Is this a good model for the cognitive bias of risk aversion? I would argue that it's not. Our agent ultimately cares about utilons, not paperclips, and in the current case it does perfectly fine at rationally maximizing expected utilons. A cognitive bias should instead be some irrational behavior pattern that can be exploited to take utility (rather than paperclips) away from the agent. Consider now another agent, with the same utility function as before, but which has this small additional trait: it strictly prefers a sure payoff of 16 paperclips to the above bet. Given our agent's utility function, 16 is the point of indifference, so could there be any problem with its behavior? It turns out there is. For example, we could follow the post on Savage's theorem (see Postulate #4). If the sure payoff of
16 paperclips = 4 utilons
is strictly preferred to the bet
{P(9 paperclips) = 0.5; P(25 paperclips) = 0.5} = 4 utilons,
then there must also exist some finite δ > 0 such that the agent still strictly prefers a guaranteed 4 utilons to betting on
{P(9) = 0.5 - δ; P(25) = 0.5 + δ} = 4 + 2δ utilons
- all at the loss of 2δ expected utilons! This is also equivalent to our agent being willing to pay a finite number of paperclips to replace the bet with a sure deal of the same expected utility.
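Here is a minimal sketch of that loss, with δ chosen arbitrarily (any positive value exposes it):

```python
import math

def u(p):
    return math.sqrt(p)

delta = 0.01  # any finite delta > 0 works

sure_utilons = u(16)                                          # 4.0
bet_utilons  = (0.5 - delta) * u(9) + (0.5 + delta) * u(25)   # 4 + 2*delta

print(sure_utilons, bet_utilons, bet_utilons - sure_utilons)
# An agent that still takes the sure 16 paperclips gives up 2*delta = 0.02 expected utilons.
```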
What we have just seen falls pretty nicely within the concept of a bias. Our agent has a perfectly fine utility function, but it also has this other thing - let's name it "risk aversion" - that makes the agent's behavior fall short of being perfectly rational, and that is independent of its concave utility function for paperclips. (Note that our agent has linear utility for utilons, but is still willing to pay some amount of those to achieve certainty.) Can we somehow fix our agent? Let's see if we can define a new utility function u'(p) in some way that gives us a consistent preference for
guaranteed 16 paperclips
over the
{P(9) = 0.5; P(25) = 0.5}
bet, but we would also like the agent to still strictly prefer the bet
{P(9 + δ) = 0.5; P(25 + δ) = 0.5}
to {P(16) = 1} for some finite δ > 0, so that our agent is not infinitely risk-averse. Can we say anything about this situation? Well, if u'(p) is continuous, there must also exist some number δ' such that 0 < δ' < δ and our agent will be indifferent between {P(16) = 1} and
{P(9 + δ') = 0.5; P(25 + δ') = 0.5}.
And, of course, being risk-averse (in the above-defined sense), our supposedly rational agent will prefer - no harm done - the guaranteed payoff to the bet of the same expected utility u'... Sounds familiar, doesn't it?
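To make the continuity argument concrete, here is a sketch with an arbitrary example choice of u'(p) = p^0.4 (concave enough that a sure 16 paperclips is strictly preferred to the {9, 25} bet); the bisection just hunts for the indifference point δ':

```python
def u_prime(p):
    return p ** 0.4  # an arbitrary candidate u'; any choice making 16 beat the {9,25} bet works

def bet_utility(shift):
    return 0.5 * u_prime(9 + shift) + 0.5 * u_prime(25 + shift)

assert u_prime(16) > bet_utility(0)  # 16 paperclips strictly preferred to {P(9)=0.5; P(25)=0.5}

# Since u' is continuous and bet_utility grows with the shift, bisection finds the
# indifference point delta' where u'(16) equals the expected utility of {9+delta', 25+delta'}.
lo, hi = 0.0, 50.0
for _ in range(60):
    mid = (lo + hi) / 2
    if bet_utility(mid) < u_prime(16):
        lo = mid
    else:
        hi = mid

delta_prime = (lo + hi) / 2
print(delta_prime, u_prime(16), bet_utility(delta_prime))
# At delta', the bet and the sure 16 paperclips have the same expected u'-utility,
# so a "risk-averse" strict preference for the sure deal here again sacrifices utilons.
```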
I would like to stress again that, although our first agent does have a concave utility function for paperclips, which causes it to reject bets with some expected payoff in paperclips in favor of guaranteed payoffs of fewer paperclips, it still maximizes its expected utilons, for which it has linear utility. Our second agent, however, has this extra property that causes it to sacrifice expected utilons to achieve certainty. And it turns out that with this property it is impossible to define a well-behaved utility function! Therefore it seems natural to distinguish being rational with a concave utility function, on the one hand, from being risk-averse and not being able to have a well-behaved utility function at all, on the other. The latter case seems much more subtle at first sight, but causes a more fundamental kind of problem. This is why I feel that a clear, even if minor, distinction between the two situations is worth making explicit.
A rational agent can have a concave utility function. A risk-averse agent cannot be rational.
(Of course, even in the first case the question of whether we want a concave utility function is still open.)
Won't you get behavior practically indistinguishable from the 'slightly risk-averse agent with a sqrt(x) utility function' by simply using x^0.499 as the utility function?
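A quick check of that suggestion on the bet from the post (just plugging the numbers in):

```python
def certainty_equivalent(exponent, low=9.0, high=25.0):
    """Sure payoff giving the same expected utility as the 50/50 {low, high} bet
    under u(x) = x ** exponent."""
    eu = 0.5 * low ** exponent + 0.5 * high ** exponent
    return eu ** (1.0 / exponent)

print(certainty_equivalent(0.5))    # 16.0    -> indifferent between a sure 16 and the bet
print(certainty_equivalent(0.499))  # ~15.998 -> strictly prefers a sure 16 to the bet
# On this particular choice the x**0.499 agent behaves like the "slightly risk-averse"
# sqrt agent, but it does so while consistently maximizing its own expected utility.
```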
Also, by the way: the resulting final utility function for any sort of variable need not be smooth, monotonically increasing, or inexpensive to calculate.
Consider my utility function for food obtained right now. Slightly more than is optimal for me to eat before it spoils (in the summer, without a fridge) gives no extra utility whatsoever over the right amount, or even results in disutility (more trash). A lot more may make it worth inviting a friend for dinner, and utility starts growing again.
Essentially, the utility peaks, then starts going down, then at some not-very-well-defined point suddenly starts growing again.
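A toy piecewise sketch of that shape (all the breakpoints and slopes are made up for illustration):

```python
def food_utility(amount):
    # Toy broken-line utility for perishable food:
    # worth eating up to ~3 units, mild disutility (spoilage, trash) up to ~6 units,
    # then rising again once there is enough to host a dinner.
    if amount <= 3:
        return amount                     # every unit up to 3 is worth eating
    elif amount < 6:
        return 3 - 0.1 * (amount - 3)     # spoilage and trash: mild disutility
    else:
        return 2.7 + 0.8 * (amount - 6)   # enough to invite a friend: utility grows again

print([round(food_utility(a), 2) for a in range(0, 9)])
```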
There can be all sorts of really odd-looking 'irrational' heuristics that work as a better substitute for the true utility function - which is expensive to calculate but is known to follow a certain broken-line pattern - than some practical-to-compute utility function.
WRT the utility of extra money... money itself is worth nothing; it's the changes to your life you can make with it that matter. As it is, I would take a 10% shot at $10 million over $100,000 for certain; 15 years ago I would have taken $10,000 for certain over a 10% shot at $10 million (of course, in the latter case it ought to be possible to partner up with someone who has big capital to get, say, $800,000 for certain).
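One way to make that dependence on circumstances concrete - a sketch assuming a log utility over total wealth, with made-up baseline wealth figures:

```python
import math

def prefers_gamble(baseline_wealth, sure_amount, prize=10_000_000, p_win=0.1):
    """Compare a sure payment to a p_win shot at prize, under u(w) = log(w)."""
    u_sure   = math.log(baseline_wealth + sure_amount)
    u_gamble = p_win * math.log(baseline_wealth + prize) + (1 - p_win) * math.log(baseline_wealth)
    return u_gamble > u_sure

print(prefers_gamble(baseline_wealth=2_000, sure_amount=10_000))     # False: broke me takes the sure $10,000
print(prefers_gamble(baseline_wealth=500_000, sure_amount=100_000))  # True: comfortable me takes the 10% shot at $10M
```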
Ultimately, attaching utility functions to stuff is like a fairly bad chess AI that just sums the values of pieces and perhaps some positional features. That sort of AI, running on the same hardware, is going to lose big time to AIs with more clever board evaluation.