I prefer the movie Twelve Monkeys to Akira. I prefer Akira to David Attenborough's Life in the Undergrowth. And I prefer David Attenborough's Life in the Undergrowth to Twelve Monkeys.
I have intransitive preferences. But I don't suffer from this intransitivity - not until the moment I'm confronted by an avatar of the money pump, juggling the three DVD boxes in front of me with a greedy gleam in his eye. He'll arbitrage me to death unless I snap out of my intransitive preferences and banish him by putting my options in order.
Arbitrage, in the broadest sense, means picking up free money - money that is free because of other people's preferences. Money pumps are a form of arbitrage, exploiting the lack of consistency, transitivity or independence in people's preferences. In most cases, arbitrage ultimately destroys itself: people either wise up to the exploitation and get rid of their vulnerabilities, or lose all their money, leaving only players who are not vulnerable to arbitrage. The crash and burn of the Long-Term Capital Management hedge fund was due in part to the diminishing returns of their arbitrage strategies.
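To make the pump concrete, here is a minimal sketch of the DVD-juggling avatar at work; the ¥100 swap fee and the number of swaps are illustrative assumptions, not anything the avatar quoted me:

```python
# Sketch of the money pump. The agent holds one DVD and will pay a small
# fee (assumed here to be ¥100) to swap it for any DVD it strictly prefers.
# With a preference cycle, the avatar can collect that fee forever.

prefers = {  # (preferred, over): Twelve Monkeys > Akira > Undergrowth > Twelve Monkeys
    ("Twelve Monkeys", "Akira"),
    ("Akira", "Life in the Undergrowth"),
    ("Life in the Undergrowth", "Twelve Monkeys"),
}

def pump(holding, swaps=9, fee=100):
    """Walk the agent around its preference cycle, charging a fee per swap."""
    extracted = 0
    for _ in range(swaps):
        # Offer the title the agent prefers to the one it currently holds.
        holding = next(better for better, worse in prefers if worse == holding)
        extracted += fee
    return holding, extracted

# After nine swaps the agent is holding the same DVD it started with,
# ¥900 poorer, and the avatar is still grinning.
print(pump("Akira"))  # -> ('Akira', 900)
```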
Most humans do not react to the possibility of being arbitraged by changing their whole preference systems. Instead they cling to their old preferences as much as possible, while keeping a keen eye out to avoid being taken advantage of. They keep their inconsistent, intransitive, dependent systems, but end up behaving consistently, transitively and independently in their most common transactions.
The weaknesses of this approach are manifest. Having one system of preferences but acting as if we had another is a great strain on our poor overloaded brains. To avoid the arbitrage, we need to scan present and future deals with great keenness and insight, always on the lookout for traps. Since transaction costs shield us from most of the negative consequences of imperfect decision theories, we have to be especially vigilant as transaction costs continue to drop, meaning that opportunities to be arbitraged will continue to rise in the future. Finally, how we exit the trap of arbitrage depends on how we entered it: if my juggling avatar had started me on Life in the Undergrowth, I'd have ended up with Twelve Monkeys, and refused the next trade. If he'd started me on Twelve Monkeys, I'd have ended up with Akira. These may not have been the options I'd have settled on if I'd taken the time to sort out my preferences ahead of time.
For these reasons, it is much wiser to change our decision theory ahead of time to something that doesn't leave us vulnerable to arbitrage, rather than clinging nominally to our old preferences.
Inconsistency or intransitivity leaves us vulnerable to a strong money pump, so these we should avoid. Violating independence leaves us vulnerable to a weak money pump, which also means giving up free money, so this should be avoided too. Along with completeness (meaning you can actually decide between options) and the technical assumption of continuity, these make up the von Neumann-Morgenstern axioms of expected utility. Thus if we want to avoid being arbitraged, we should cleave to expected utility.
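One way to see why following expected utility closes the loophole: it collapses every option into a single real number, and real numbers cannot form a preference cycle. A minimal sketch, with an illustrative square-root utility function and made-up lotteries:

```python
import math

# A lottery is a list of (probability, payoff in yen) pairs.
# The utility function here (sqrt) is purely illustrative.
def expected_utility(lottery, u=math.sqrt):
    return sum(p * u(x) for p, x in lottery)

A = [(1.0, 9_000)]
B = [(0.5, 25_000), (0.5, 1_600)]
C = [(0.2, 40_000), (0.8, 4_000)]

# Each option reduces to one number, so the induced ranking is complete and
# transitive -- there is no cycle left for a money pump to walk around.
ranked = sorted([("A", A), ("B", B), ("C", C)],
                key=lambda kv: expected_utility(kv[1]), reverse=True)
print([name for name, _ in ranked])  # ['B', 'A', 'C'] under sqrt utility
```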
But the consequences of arbitrage do not stop there.
Quick, which would you prefer: ¥10 000 with certainty, or a 50% chance of getting ¥20 000? Well, it depends on how your utility scales with cash. If it scales concavely, then you are risk averse, while if it scales convexly, then... Stop. Minus the transaction costs, those two options are worth exactly the same thing. If they are freely tradable, then you can exchange them one for one on the world market. Hence if you price the 50% contract at any value other than ¥10 000, you can be arbitraged if you act on your preferences (neglecting transaction costs). People selling contracts to you, or buying them from you, will make instant free money on the trade. Money that would be yours instead if your preferences were different.
Of course, you could keep your non-linear utility and just behave as if it were linear, because of the market price, while being risk-averse in secret... But just as before, this is cumbersome, complicated and unnecessary. Just as arbitrage makes you cleave to independence, it will make your utility linear in money - at least for small, freely tradable amounts.
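As a toy illustration of where the free money goes, suppose a hypothetical risk-averse trader insists on selling the 50% contract for ¥9 000 even though the market prices it at ¥10 000 (both figures are assumptions for the sketch):

```python
MARKET_PRICE = 10_000     # assumed free-market price of the 50% contract
RISK_AVERSE_ASK = 9_000   # assumed asking price of a risk-averse seller

# The arbitrageur buys from the seller and immediately resells on the market,
# never holding any coin-flip risk. Every trade transfers ¥1 000 that would
# have stayed with the seller had they priced the contract at ¥10 000.
trades = 1_000
print((MARKET_PRICE - RISK_AVERSE_ASK) * trades)  # ¥1 000 000 of free money
```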
In conclusion:
- Avoiding arbitrage forces your decision theory to follow the axioms of expected utility. It further forces your utility to be linear for any small quantity of money (or any other fungible asset). Thus you will follow expected cash.
Addendum: If contracts such as L = {¥20 000 if a certain coin comes up heads/tails} were freely tradable, they would cost ¥10 000.
Proof: Let LH be the contract that pays out ¥20 000 if that coin comes up heads; LT the contract that pays out if that same coin comes up tails. LH and LT together are exactly the same as a guaranteed ¥20 000. However, individually, LH and LT are the same contract - a 50% chance of ¥20 000 - thus by the Law of One Price, they must have the same price (you can get the same result by symmetry). Two contracts with the same price, totalling ¥20 000 together: they must individually be worth ¥10 000.
The phrasing, incidentally, is still a bit off. LH and LT are not indistinguishable contracts, since the contingencies in which they pay out are different. The thing you should apply the law of one price to is the portfolio consisting of two units of "always pay 10,000" versus the portfolio consisting of one unit of LH and one unit of LT. Those two portfolios behave the same in all possible worlds, and therefore must have the same price.
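A quick payoff-table check of that portfolio argument (the function names are just for the sketch):

```python
def LH(world):         # pays ¥20 000 if the coin comes up heads
    return 20_000 if world == "heads" else 0

def LT(world):         # pays ¥20 000 if the coin comes up tails
    return 20_000 if world == "tails" else 0

def sure_10k(world):   # pays ¥10 000 no matter what
    return 10_000

# The two portfolios pay the same amount in every possible world, so by the
# Law of One Price they must cost the same; symmetry between LH and LT then
# splits that common cost into ¥10 000 for each contract.
for world in ("heads", "tails"):
    assert LH(world) + LT(world) == sure_10k(world) + sure_10k(world) == 20_000
print("identical payoffs in every possible world")
```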
Whether a risk can be hedged against or not is kind of the ultimate question of all financial markets -- almost all interesting instruments (futures, options, CDSs, etc.) are designed specifically to make hedging easier. Clearly some risk can't be hedged -- if Omega drops by and says "I'll give you 10,000 iff my quantum coin comes up tails", then that introduces some irreducible uncertainty into the system, and some speculator somewhere has to be compensated for taking it on. Of course, you can always buy insurance against the event that the coin does not come up tails, but then the person selling the insurance is taking on the risk and will want to be compensated according to their risk preferences.
On the other hand, surprisingly many risks can be hedged against. Figuring out how to hedge a risk that other people had not seen how to hedge is the basis of all clever arbitrage trades.
A particularly interesting example of this is option pricing. A put option is essentially a tool for reducing variance (by eliminating cases where you lose a lot of money because your stock decreases in value), so the price of the put option should be a direct indication of how much a risk-averse investor values the resulting decrease in variance. However, what Black and Scholes noticed was that, actually, provided the underlying stock price changes smoothly enough (follows a geometric Brownian motion, so prices are log-normally distributed), the same risk that the option allows you to eliminate can already be hedged away by shorting the right amount of stock. So the risk is hedgeable, writers of the option should not be compensated for taking it on, and option prices are exactly the same as if everyone were risk-neutral.
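For concreteness, here is a minimal sketch of the resulting risk-neutral put price under the Black-Scholes assumptions; the parameters at the bottom are illustrative, not taken from any real market:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_put(S, K, T, r, sigma):
    """European put price when the stock follows geometric Brownian motion.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility.
    Note that no risk premium appears anywhere: the hedging argument makes
    the option writer's risk aversion irrelevant to the price.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# Illustrative parameters only.
print(round(black_scholes_put(S=100, K=100, T=1.0, r=0.02, sigma=0.25), 2))
```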
On the other hand, if the price of the underlying stock does not change smoothly -- if it has random "crashes" where it suddenly jumps a lot -- then the risk mitigated by the option is not hedgeable, and we can no longer price the option without knowing what the risk preferences of the investors are. Real-life option prices do not exactly follow the Black-Scholes model (they have so-called "volatility smiles"), which indicates that in the real world, for whatever reason, the corresponding risks are actually not completely hedgeable.
Interesting. I'm sure the extra risk can still be hedged or reduced (as long as each contract has an "anti-contract" that pays out exactly the reverse), but it seems this is not exactly how the market operates in practice.