In a previous article, I demonstrated that you can only avoid money pumps and arbitrage by following the von Neumann-Morgenstern axioms of expected utility. I also argued that even if you're unlikely to face a money pump on any one particular decision, you should still use expected utility (and sometimes expected money), because of the difficulty of juggling two decision theories and constantly being on the lookout for which one to apply.

Even if you don't care about (weak) money pumps, expected utility sneaks in under much milder conditions. If you have a quasi-utility function (i.e., you have an underlying utility function, but you also care about the shape of the probability distribution), then this post demonstrates that, just by aggregating all your decisions, you should generally stick with expected utility anyway.

So the moral of looking at money pumps, arbitrage and aggregation is that you should use expected utility for nearly all your decisions.

But the moral says exactly what it says, and nothing more. There are situations where there is not the slightest chance of your being money-pumped, or of aggregating enough of your decisions to achieve a narrow distribution: one-shot versions of Pascal's mugging, the Lifespan Dilemma, utility versions of the St Petersburg paradox, the risk to humanity of a rogue God-AI... Your behaviour on these issues is not constrained by money-pump considerations, nor should you act as if it were, or as if expected utility had some magical claim to validity here. If you expect to meet Pascal's mugger 3^^^3 times, then you have to use expected utility; but if you don't, you don't.

In my estimation, the expected utility of the Singularity Institute's budget grows much faster than linearly with cash. But I would be most disappointed if the institute sank all its income into triple-rollover lottery tickets. Expected utility is ultimately the correct decision theory; but if you most likely won't live to see that 'ultimately', then it isn't relevant.
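
For concreteness, here is a minimal sketch of that tension; the superlinear utility function and every number in it are invented purely for illustration.

```python
# A minimal sketch with an invented superlinear utility function and invented numbers:
# under a convex enough utility of money, a near-certain-loss lottery can have higher
# expected utility than keeping the budget.

def utility(cash):
    # Hypothetical superlinear utility: doubling the cash more than doubles the utility.
    return cash ** 1.5

budget = 1_000_000          # current funds (illustrative)
win_prob = 1e-7             # chance the triple-rollover strategy pays off (illustrative)
jackpot = 100_000_000_000   # payout if it does (illustrative)

eu_keep = utility(budget)
eu_gamble = win_prob * utility(jackpot) + (1 - win_prob) * utility(0)

print(f"EU of keeping the budget: {eu_keep:.3e}")    # 1.000e+09
print(f"EU of buying the tickets: {eu_gamble:.3e}")  # ~3.162e+09, despite losing almost surely
```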

For these extreme events, I'd personally advocate a quasi-utility function along with a decision theory that penalises monstrously large standard deviations, as long as such events remain rare. This solves all the examples above to my satisfaction, and can easily be tweaked to merge gracefully into expected utility as the number of extreme events rises to the point where they are no longer extreme. A heuristic for when that point arrives: can you still easily avoid money pumps just by looking out for them, or is that getting too complicated for you?
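
For concreteness, here is one way such a rule could look; the penalty form and all numbers below are my own illustrative choices, not a canonical definition.

```python
import math

# A sketch of a rule that keeps an underlying utility function but penalises huge
# standard deviations, and that fades back into plain expected utility as independent
# copies of an extreme gamble are aggregated (the spread of the per-gamble average
# shrinks like 1/sqrt(n)). The penalty form and numbers are invented for illustration.

def score(utilities, probs, n_repeats=1, penalty=1.0):
    """Expected utility minus a penalty on the standard deviation of the per-gamble
    average over n_repeats independent repetitions of the same gamble."""
    mean = sum(p * u for p, u in zip(probs, utilities))
    var = sum(p * (u - mean) ** 2 for p, u in zip(probs, utilities))
    return mean - penalty * math.sqrt(var / n_repeats)

# A one-shot extreme gamble (values are utilities, not cash; numbers are invented):
utils = [0.0, 1e9]
probs = [1 - 1e-6, 1e-6]

print(score(utils, probs, n_repeats=1))       # about -1.0e6: the spread dominates
print(score(utils, probs, n_repeats=10**12))  # about 999: essentially the plain expected utility of 1000
```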

There is no reason that anyone else's values should compel them towards the same decision theory as mine; but in these extreme situations, expected utility is just another choice, rather than a logical necessity.

22 comments

In my estimation, the expected utility of the Singularity Institute's budget grows much faster than linearly with cash. But I would be most disappointed if the institute sank all its income into triple-rollover lottery tickets.

...Now I'm stuck wondering why they don't do that. Eliezer tries to follow expected utility, AFAIK.

Obvious guess: Eli^H^H^H Michael Vassar doesn't think SIAI's budget shows increasing marginal returns. (Nor, for what it's worth, can I imagine why it would.)

That one's easy: successfully saving the world requires more money than they have now, and if they don't reach that goal, it makes little difference how much money they raise. Eliezer believes most non-winning outcomes are pretty much equivalent:

Mostly, the meddling dabblers won't trap you in With Folded Hands or The Metamorphosis of Prime Intellect. Mostly, they're just gonna kill ya.

(from here)

But cf. also:

I doubt my ability to usefully spend more than $10 million/year on the Singularity. What do you do with the rest of the money?

And I probably should defer to their judgement on this, as they certainly know more than I do about the SIAI's work and what it could do with more money.

I was simply saying that in my estimation, expected utility would recommend that they splurge on triple-rollover lottery tickets - but I'm still happy that they don't.

(Just in case my estimation is relevant: I feel the SIAI has a decent chance of moving the world towards an AI that is non-deadly, useful, and doesn't constrain humanity too much. With a lot more money, I think they could implement an AI that makes the world a fun heaven on earth. The expected utility is positive, but the increased risk of us all dying horribly doesn't make it worthwhile.)

Maybe he thinks they'd get fewer donations in the long term if he did something like that.

Presumably they think there's another approach that gives a higher probability of raising enough funds. Lotteries usually don't pay out, after all.

That type of reasoning is not expected utility - it's reasoning from the most likely outcome, which is very different.

No, if utility is a step function of money, Pavitra's reasoning agrees with expected utility.
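
For instance (with a hypothetical threshold and invented numbers), a step-function utility makes maximising expected utility coincide exactly with maximising the probability of raising enough:

```python
# A sketch with a hypothetical threshold and invented numbers: under a step-function
# utility, a strategy's expected utility is just its probability of crossing the
# threshold, so "pick the most likely way to raise enough" is expected-utility
# maximisation.

THRESHOLD = 10_000_000  # hypothetical "enough money to win" level

def step_utility(cash):
    return 1.0 if cash >= THRESHOLD else 0.0

def expected_utility(strategy):
    # strategy: list of (probability, resulting cash) pairs
    return sum(p * step_utility(cash) for p, cash in strategy)

lottery = [(1e-8, 100_000_000), (1 - 1e-8, 0)]
conventional_fundraising = [(0.01, 15_000_000), (0.99, 500_000)]

print(expected_utility(lottery))                   # 1e-08
print(expected_utility(conventional_fundraising))  # 0.01
```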

Does the SIAI really have an approach to fundraising that's better than lotteries? What is it then?

Does the SIAI really have an approach to fundraising that's better than lotteries? What is it then?

Fraud has the right payoff structure. When done at the level that SIAI could probably manage, it gives significant returns, and the risk is concentrated heavily in the low-probability 'get caught and have your entire life completely destroyed' area. If not raising enough money is an automatic fail, then this kind of option is favoured by the mathematics (albeit not ethics).

The point of the article you linked to behind the word ethics is that upholding ethics is rational.

The point of the article you linked to behind the word ethics is that upholding ethics is rational.

Precisely the reason I included it.

(Note that the lack of emphasis on 'is' is mine. I also do not link 'rational' to 'shut up and multiply' in the context of decision-making with ethical injunctions. It is more like 'shut up, multiply and then do a sanity check on the result'.)

It is more like 'shut up, multiply and then do a sanity check on the result'.

You're still treating the ethics as a separate step from the math. I'm arguing that the probability of making a mistake in your reasoning should be part of the multiplication: you should be able to assign an exact numerical confidence to your own sanity, and evaluate the expected utility of various courses of action, including but not limited to asking a friend whether you're crazy.
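
As a toy illustration of what putting the error probability into the multiplication could look like (all numbers invented):

```python
# A toy illustration, with invented numbers, of folding "maybe my reasoning is broken"
# directly into the expected-utility calculation instead of treating it as a separate
# veto step afterwards.

p_reasoning_sound = 0.95    # hypothetical confidence in my own chain of reasoning
u_if_sound = 100.0          # estimated utility of the dubious plan if the reasoning holds
u_if_broken = -10_000.0     # estimated utility if the reasoning is in fact flawed

eu_dubious_plan = p_reasoning_sound * u_if_sound + (1 - p_reasoning_sound) * u_if_broken
eu_boring_plan = 50.0       # an ordinary alternative whose value doesn't hinge on my sanity

print(eu_dubious_plan)  # about -405: the error term dominates
print(eu_boring_plan)   # 50.0
```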

You're still treating the ethics as a separate step from the math.

Yes, more or less. I do not rule out returning to the math once the injunction is triggered, either to reassess the injunction or to consider an exception. That is the point. This is not the same principle as 'allow for the chance that I am crazy'.

I'm arguing that the probability of making a mistake in your reasoning should be part of the multiplication: you should be able to assign an exact numerical confidence to your own sanity, and evaluate the expected utility of various courses of action, including but not limited to asking a friend whether you're crazy.

If I could do this reliably then I would not need to construct ethical injunctions to protect me from myself. I do not believe you are correctly applying the referenced concepts.

If I could do this reliably then I would not need to construct ethical injunctions to protect me from myself. I do not believe you are correctly applying the referenced concepts.

That's like saying "If I could build a house reliably then I would not need to protect myself from the weather." Reliably including the probability of error in your multiplication constitutes following ethical injunctions to protect you from yourself. Ethics does not stop being ethics just because you found out that it can be described mathematically.

Reliably including the probability of error in your multiplication constitutes following ethical injunctions to protect you from yourself.

I do not agree. We are using the phrase 'ethical injunction' to describe different concepts.

No, if utility is a step function of money, Pavitra's reasoning agrees with expected utility.

And if there were only two steps...

Right. The assumption is that the final outcome is pass/fail -- either you get enough money and the Singularity is Friendly, or you don't and we all die (hopefully).


The outcome is uncertain. The expected utility of money is certainly not a step function.

the Singularity Institute's budget grows much faster than linearly with cash. ... sank all its income into triple-rollover lottery tickets

I had the same idea of buying very risky investments. Intuitively, it seems that world-saving probability is superlinear in cash. But I think that the intuition is probably incorrect, though I'll have to rethink now that someone else has had it.

Another advantage of buying triple rollover tickets is that if you adhere to quantum immortality plus the belief that uFAI reliably kills the world, then you'll win the lottery in all the worlds that you care about.

Another advantage of buying triple rollover tickets is that if you adhere to quantum immortality plus the belief that uFAI reliably kills the world, then you'll win the lottery in all the worlds that you care about.

If you had such an attitude, then the lottery would be irrelevant. You wouldn't care what the 'world-saving probability' is, so you wouldn't need to manipulate it.

Yes, but you can manipulate whether the world being saved has anything to do with you, and you can influence what kind of world you survive into.

If you make a low-probability, high-reward bet and really commit to donating the money to an X-risks organization, you may find yourself winning that bet more often than you would probabilistically expect.

In general, QI means that you care about the nature of your survival, but not whether you survive.