If you have a consistent utility function over outcomes, you cannot be money-pumped. This is not a utility function over changes in money; it is a utility function over total money.
This actually struck me as a problem with your argument from earlier, though I didn't point it out at that time. I think you plain don't understand expected utility, actually.
In the above, the question is the preference between a lottery of (.5(B + 1) + .5(B + 2)) and a certainty of (1(B + 1.49)); a consistent version of a human (as opposed to an actual human) would prefer the former lottery given at least a hundred bucks in bank B. After that, of course, the amount in B changes. But if you start by putting consistent utilities over the total amount of money, you cannot be money-pumped.
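For concreteness, here is a quick numerical check of that preference, assuming log utility over total money (any concave function applied to the total would do; the choice is mine, not anything specified above):

    # Check that a consistent agent prefers the lottery to the certainty.
    # Log utility over TOTAL money is just one concave example.
    import math

    def U(total_money):
        return math.log(total_money)

    B = 100  # "at least a hundred bucks in bank B"

    eu_lottery = 0.5 * U(B + 1) + 0.5 * U(B + 2)   # .5(B + 1) + .5(B + 2)
    eu_certain = U(B + 1.49)                       # 1(B + 1.49)

    print(eu_lottery > eu_certain)  # True: the lottery is preferred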
You were correct. I think that now I understand expected utility; I was arrogant enough to follow my mathematical intuitions, assuming the details would fix themselves later, rather than working it all through. What I would never have done in a published paper, I did in a blog post.
I apologise. The post has been retracted.
Can you please write a post on what your old incorrect understanding of expected utility was, and why it was wrong (before it fades away completely)? I suspect your confusion to be a common one, and writing it down would help others. Think of it as payback for those who tried (unsuccessfully, until Eliezer's attempt) to point out that perhaps you didn't understand expected utility correctly.
I've added that to the post now - a sketch of the original, and what went wrong (simple version: I applied financial/arbitrage insights to utility, without realising that the mere existence of investors and arbitrageurs in the world would change the price you put on something).
Think of it as payback for those who tried (unsuccessfully, until Eliezer's attempt) to point out that perhaps you didn't understand expected utility correctly.
Oh, it wasn't Eliezer pointing it out that made me realise it; it was me trying to prove Eliezer wrong that did the trick.
If you have a consistent utility function over outcomes, you cannot be money-pumped.
If your utility is convex in money and you follow independence, I can money-pump you no matter what the situation is, as L will always be worth more to you than £1.50. I will continue offering you that contract until you have no cash left, an event that is certain to eventually happen. So your statement is incorrect.
If your utility function is concave in money, it's a little harder, but I can use options. Contract A will give out £1 if a coin comes up heads; contract B will give you £1 if that same coin comes up tails. I offer you cash for the possibility of buying these contracts from you for free (should you ever get your hands on them), as long as your capital is within £2 of your current amount. You should name a price less than £0.50 for these options, including a small utility profit for you; I take one option out on each of A and B. I then sell you A and B together, for £1 (since together they are exactly the same as a certain £1). I then exercise both my options and get A back, then B.
Of course, you would never do anything as stupid as accepting the contracts I've just described; but the fact remains that if your utility is not linear in money, you cannot put consistent prices on contracts and their combinations, so you will end up losing if ever you blindly follow your utility function.
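To make that pricing inconsistency concrete, here is a rough sketch under one assumed concave utility (log10 of total wealth, with a £10 bankroll; both choices are only illustrative):

    # The prices quoted for A and B separately do not add up to the price
    # quoted for (A + B), even though A + B together is a certain £1.
    import math

    def U(x):
        return math.log10(x)

    def max_buy_price(wealth, payoffs, probs, lo=0.0, hi=2.0):
        """Largest price at which buying the contract doesn't lower expected utility."""
        def eu_after(p):
            return sum(q * U(wealth - p + z) for q, z in zip(probs, payoffs))
        for _ in range(60):  # bisection on the indifference point
            mid = (lo + hi) / 2
            if eu_after(mid) >= U(wealth):
                lo = mid
            else:
                hi = mid
        return lo

    w = 10.0
    price_A = max_buy_price(w, payoffs=[1, 0], probs=[0.5, 0.5])   # £1 on heads
    price_B = max_buy_price(w, payoffs=[0, 1], probs=[0.5, 0.5])   # £1 on tails
    price_AB = max_buy_price(w, payoffs=[1], probs=[1.0])          # a certain £1

    print(round(price_A, 4), round(price_B, 4), round(price_AB, 4))
    # ~0.4875, ~0.4875, 1.0 -- the parts are quoted below the whole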
I will continue offering you that contract until you have no cash left, an event that is certain to eventually happen.
Only if you have an infinite bankroll. Otherwise, there is some tiny but nonzero chance that you lose all your money and the player makes a huge profit. And for the player with the convex utility function, the utility of that outcome is enough to make the whole ensemble of gambles worthwhile.
Then if you extend that to the infinite case by putting the limit outside the expected utility calculation, you will find that the limit is nonnegative too. Or if you don't assume that the result in the infinite case is the limit of finite results, then you have different problems, but then who says the strategy in the infinite case is the same as the limit of finite strategies?
You should name a price less than £0.50 for these options, including a small utility profit for you; I take one option out on each of A and B.
To pick a concave function at random, let U(x£) = log10(x) utilons. And let my bank account contain 10£ at the beginning of the experiment.
U(10£) = EU(9£+A+B) = 1u, so I pay 1£ for options A+B.
Assume WLOG that I'm considering option A first. EU(y+B) = .5*U(y) + .5*U(y+1£). Set that equal to 1u and solve for y: y=9.51249£. Thus I'm indifferent to selling option A for 0.51249£.
After doing so, I am then indifferent to selling option B for 0.48751£.
So I'm back to exactly 10£. No money pump.
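A short script checking those numbers (log10 utility, £10 to start, each option sold at its indifference price):

    # Verify the sequence of trades above leaves the bank account at exactly £10.
    import math

    def U(x):
        return math.log10(x)

    w = 10.0
    w -= 1.0   # pay £1 for A + B (together a certain £1); EU is still U(10) = 1

    # Sell A (pays £1 on heads): find p with 0.5*U(w + p) + 0.5*U(w + p + 1) = 1,
    # i.e. (w + p)(w + p + 1) = 100.
    y = (-1 + math.sqrt(1 + 4 * 100)) / 2   # y = w + p = 9.51249...
    p_A = y - w
    w += p_A

    # Sell B (pays £1 on tails): indifference needs the resulting certain wealth
    # to have utility 1, i.e. to be exactly £10.
    p_B = 10.0 - w
    w += p_B

    print(round(p_A, 5), round(p_B, 5), round(w, 5))   # 0.51249 0.48751 10.0 -- no pump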
The outcomes of a utility function over the whole thing can't be repeated, since, roughly speaking, a whole history of transactions counts as a single outcome.
The nature of the pump is not clear to me. What is the repeated action referred to by the ambiguous "this"? If it involves buying L from me at prices lower than the expected value £1.50, it runs into the difficulty of where I'm getting this infinite supply of lotteries to sell you.
Upvoted, because that is indeed a problem. With a utility concave in cash, unless you happen to have an infinity of lottery tickets to hand, you cannot be money pumped in this way.
You can, however, be exploited because of your inability to correctly price dependent contracts.
This post has been retracted because it is in error.
Thank you Stuart. I enjoy reversing my downvotes.
Some other possibilities:
I am risk averse
I don't want to spend time on a lottery for one or two dollars
I expect that such an offer must be a trick, so I refuse the offer even if you'd offer a better than fair gamble
Accepting an offer is submissive behaviour, so I refuse the offer even if you'd offer a better than fair gamble, unless we are in a situation where you could get a few dollars just because I am afraid or because it is expected of me
Retributive morality: The utility of punching you in the face is very small or negative at the start of our interaction. When I notice that you are taking advantage of me, the utility of punching you in the face becomes much bigger than a few dollars. This is pretty much natural human behaviour, so it is what I think of every time people start to use Dutch books as arguments.
This post has been retracted because it is in error. Trying to shore it up just involved a variant of the St Petersburg Paradox and a small point on pricing contracts that is not enough to make a proper blog post.
I apologise.
Edit: Some people have asked that I keep the original up to illustrate the confusion I was under. I unfortunately don't have a copy, but I'll try and recreate the idea, and illustrate where I went wrong.
The original idea was that if I were to offer you a contract L that paid £1 with 50% probability or £2 with 50% probability, then, if your utility function wasn't linear in money, you would generally value L at something other than £1.50. Then I could sell or buy large amounts of these contracts from you at your stated price, and use the law of large numbers to ensure that each contract was worth £1.50 to me, thus making a certain profit.
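To illustrate, here is a small sketch with two example utility functions of my own choosing (square root for the concave case, squaring for the convex case); the indifference price for buying L comes out below £1.50 in the first case and above it in the second:

    # Price at which buying L (£1 or £2, 50/50) leaves expected utility unchanged,
    # for an example concave utility and an example convex utility.
    import math

    def reservation_price(U, wealth):
        lo, hi = 0.0, 2.0
        for _ in range(60):  # bisection on the indifference point
            p = (lo + hi) / 2
            eu = 0.5 * U(wealth - p + 1) + 0.5 * U(wealth - p + 2)
            if eu >= U(wealth):
                lo = p
            else:
                hi = p
        return lo

    w = 10.0
    print(round(reservation_price(math.sqrt, w), 4))        # just under 1.50 (concave)
    print(round(reservation_price(lambda x: x * x, w), 4))  # just over 1.50 (convex)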
The first flaw showed up in the case where your utility is concave in cash ("risk averse"). In that case, I can't buy L from you unless you already have L. And each time I buy it from you, the mean quantity of cash you have goes down, but your utility goes up, since you do not like the uncertainty inherent in L. So I get richer, but you get more utility, and once you've sold all the L's you have, I cannot make anything more out of you.
If your utility is convex in cash ("risk loving"), then I can sell you L forever, at more than £1.50. And your money will generally go down, as I drain it from you. However, though the median amount of cash you have goes down, your utility goes up, since you get a chance - however tiny - of huge amounts of cash, and the utility generated by this sum swamps the fact that you will most likely end up with nothing. If I could go on forever, then I could drain you entirely, as this is a biased random walk on a one-dimensional axis. But I would need infinite resources to do this.
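Here is a rough simulation of that drain, under assumptions of my own: the buyer's utility is U(x) = x^2, the bankroll starts at £10, and each round I quote a price just inside the buyer's indifference point for L, so every accepted trade raises the buyer's expected utility while still costing more than £1.50:

    # Rough Monte Carlo of repeatedly selling L to a risk-loving buyer.
    import numpy as np

    rng = np.random.default_rng(0)
    paths, rounds = 20_000, 3_000
    w = np.full(paths, 10.0)   # buyer's bankroll on each simulated path

    for _ in range(rounds):
        # Buyer's indifference price for L under U(x) = x^2:
        # 0.5*(w - p + 1)^2 + 0.5*(w - p + 2)^2 = w^2  gives  p = w - (-3 + sqrt(4*w^2 - 1)) / 2
        p_indiff = w - (-3.0 + np.sqrt(np.maximum(4.0 * w * w - 1.0, 0.0))) / 2.0
        price = 1.5 + 0.9 * (p_indiff - 1.5)    # just inside indifference, still above £1.50
        trade = w >= price                      # buyer accepts whenever the price is affordable
        payout = np.where(rng.random(paths) < 0.5, 1.0, 2.0)   # L pays £1 or £2
        w = np.where(trade, w - price + payout, w)

    print("median final bankroll:", np.median(w))        # far below the starting £10
    print("fraction left under £2:", np.mean(w < 2.0))   # a large majority of paths
    print("mean final utility:", np.mean(w ** 2))        # at or above the starting 100

The rare paths that happen to run up a large bankroll contribute enormously to U(x) = x^2, which is what props up the average utility even as the median bankroll collapses.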
The major error was to reason like an investor, rather than a utility maximiser. Investors are very interested in putting prices on objects. And if you assign the wrong price to L while investing, someone will take advantage of you and arbitrage you. I might return to this in a subsequent post; but the issue is that even if your utility is concave or convex in money, you would put a price of £1.50 on L if L were an easily traded commodity with a lot of investors also pricing it at £1.50.