PhilGoetz comments on Exterminating life is rational - Less Wrong

Post author: PhilGoetz | 06 August 2009 04:17PM | 17 points


Comments (272)


Comment author: PhilGoetz 06 August 2009 09:09:06PM *  10 points [-]

Here's a possible problem with my analysis:

Suppose Omega or one of its ilk says to you, "Here's a game we can play. I have an infinitely large deck of cards here. Half of them have a star on them, and one-tenth of them have a skull on them. Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."

How many cards do you draw?

I'm pretty sure that someone who believes in many worlds will keep drawing cards until they die. But even if you don't believe in many worlds, I think you do the same thing, unless you are not maximizing expected utility. (Unless chance is quantized so that there is a minimum possible probability; I don't think that would help much anyway.)
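The arithmetic behind "keep drawing" can be sketched in a toy model (assumptions: death has utility 0, current utility is 1, and draws are independent; only the card proportions come from the problem itself):

```python
# Toy model of the card game: 50% star (utility doubles), 10% skull
# (death, taken as utility 0), 40% blank (no change).
P_STAR, P_SKULL, P_BLANK = 0.5, 0.1, 0.4

def expected_utility(n_draws, u_now=1.0):
    # Each draw multiplies expected utility by 0.5*2 + 0.4*1 + 0.1*0 = 1.4.
    per_draw = P_STAR * 2 + P_BLANK * 1 + P_SKULL * 0
    return u_now * per_draw ** n_draws

def p_survive(n_draws):
    # Probability of not having drawn a skull after n draws.
    return (1 - P_SKULL) ** n_draws
```

Expected utility grows like 1.4^n while survival probability falls like 0.9^n: every extra draw raises expected utility, yet committing to draw forever kills you with probability 1 — which is exactly the tension described here.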

So this whole post may boil down to "maximizing expected utility" not actually being the right thing to do. Also see my earlier, equally unpopular post on why expectation maximization implies average utilitarianism. If you agree that average utilitarianism seems wrong, that's another piece of evidence that maximizing expected utility is wrong.

Comment author: Vladimir_Nesov 06 August 2009 10:46:31PM *  7 points [-]

Reformulation to weed out uninteresting objections: Omega knows the expected utility, according to your preference, of going on without its intervention (U1) and of being killed (U0 < U1). It presents a choice between walking away, that is, locking in expected utility U1, and playing a lottery that gives you, with equal (50%) probability, either U0 or U1+3*(U1-U0). The expected utility of the lottery is then 0.5*(4*U1-2*U0) = U1+(U1-U0) > U1.

My answer: even in a deterministic world, I take the lottery as many times as Omega has to offer, knowing that the probability of death tends to certainty as I go on. This example is only invalid for money because of diminishing returns. If you really do possess the ability to double utility, low probability of positive outcome gets squashed by high utility of that outcome.
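The repeated lottery described above can be checked numerically (a sketch; taking U0 = 0 and a starting utility of 1 is an illustrative assumption):

```python
# Each accepted lottery: 50% death (utility U0 = 0), 50% jump from the
# current utility u to u + 3*(u - U0) = 4u. Values are illustrative.
U0 = 0.0
u, p_alive, eu = 1.0, 1.0, 1.0
for n in range(1, 6):
    u = u + 3 * (u - U0)   # payout if this round is won: 4x the old u
    p_alive *= 0.5         # probability of surviving this round
    # Unconditional expected utility doubles each round: eu == 2**n.
    eu = p_alive * u + (1 - p_alive) * U0
```

Expected utility keeps growing (2^n after n rounds) even as the chance of being alive to collect it collapses toward zero — the bullet being bitten in this comment.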

Comment author: PhilGoetz 06 August 2009 11:09:55PM *  2 points [-]

Does my entire post boil down to this seeming paradox?

(Yes, I assume Omega can actually double utility.)

The use of U1 and U0 is needlessly confusing. And it changes the game, because now, U0 is a utility associated with a single draw, and the analysis of doing repeated draws will give different answers. There's also too much change in going from "you die" to "you get utility U0". There's some semantic trickiness there.

Comment author: Eliezer_Yudkowsky 07 August 2009 12:37:56AM 11 points [-]

Pretty much. And I should mention at this point that experiments show that, contrary to instructions, subjects nearly always interpret utility as having diminishing marginal utility.

Comment author: PhilGoetz 07 August 2009 03:52:55AM 1 point [-]

Well, that leaves me even less optimistic than before. As long as it's just me saying, "We have options A, B, and C, but I don't think any of them work," there are a thousand possible ways I could turn out to be wrong. But if it reduces to a math problem, and we can't figure out a way around that math problem, hope is harder.

Comment author: TimFreeman 16 May 2011 08:28:58PM 0 points [-]

There's an excellent paper by Peter de Blanc indicating that, under reasonable assumptions, if your utility function is unbounded, then you can't compute finite expected utilities. So if Omega can double your utility an unlimited number of times, you have other problems that cripple you even in the absence of involvement from Omega. Doubling your utility should be a mathematical impossibility at some point.

That demolishes "Shut up and Multiply", IMO.

SIAI apparently paid Peter to produce that. It should get more attention here.

Comment author: Vladimir_Nesov 16 May 2011 08:55:29PM *  2 points [-]

So if Omega can double your utility an unlimited number of times

This was not assumed, I even explicitly said things like "I take the lottery as many times as Omega has to offer" and "If you really do possess the ability to double utility". To the extent doubling of utility is actually provided (and no more), we should take the lottery.

Comment author: Larks 16 May 2011 09:06:13PM *  3 points [-]

Also, if your utility function's scope is not limited to perception-sequences, Peter's result doesn't directly apply. If your utility function is linear in actual, rather than perceived, paperclips, Omega might be able to offer you the deal infinitely many times.

Comment author: TimFreeman 16 May 2011 09:14:40PM 1 point [-]

Also, if your utility function's scope is not limited to perception-sequences, Peter's result doesn't directly apply.

How can you act upon a utility function if you cannot evaluate it? The utility function needs inputs describing your situation. The only available inputs are your perceptions.

Comment author: Vladimir_Nesov 16 May 2011 09:35:45PM *  4 points [-]

The utility function needs inputs describing your situation. The only available inputs are your perceptions.

Not so. There's also logical knowledge and logical decision-making where nothing ever changes and no new observations ever arrive, but the game still can be infinitely long, and contain all the essential parts, such as learning of new facts and determination of new decisions.

(This is of course not relevant to Peter's model, but if you want to look at the underlying questions, then these strange constructions apply.)

Comment author: HopeFox 17 May 2011 10:47:38AM 2 points [-]

"Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."

Sorry if this question has already been answered (I've read the comments but probably didn't catch all of it), but...

I have a problem with "double your utility for the rest of your life". Are we talking about utilons per second? Or do you mean "double the utility of your life", or just "double your utility"? How does dying a couple of minutes later affect your utility? Do you get the entire (now doubled) utility for those few minutes? Do you get pro rata utility for those few minutes divided by your expected lifespan?

Related to this is the question of the utility penalty of dying. If your utility function includes benefits for other people, then your best bet is to draw cards until you die, because the benefits to the rest of the universe will massively outweigh the loss from your inevitable death.

If, on the other hand, death sets your utility to zero (presumably because your utility function is strictly only a function of your own experiences), then... yeah. If Omega really can double your utility every time you win, then I guess you keep drawing until you die. It's an absurd (but mathematically plausible) situation, so the absurd (but mathematically plausible) answer is correct. I guess.

Comment author: taw 07 August 2009 12:20:59AM *  2 points [-]

Can utility go arbitrarily high? There are diminishing returns on almost every kind of good thing. I have difficulty imagining life with utility orders of magnitude higher than what we have now. Infinitely long youth might be worth a lot, but even that is only so many doublings due to discounting.

I'm curious why it's getting downvoted without reply. Related thread here. How high do you think "utility" can go?

Comment author: PhilGoetz 07 August 2009 02:53:14PM 7 points [-]

I would guess you're being downvoted by someone who is frustrated not by you so much as by all the other people before you who keep bringing up diminishing returns even though the concept of "utility" was invented to get around that objection.

"Utility" is what you have after you've factored in diminishing returns.

We do have difficulty imagining orders of magnitude higher utility. That doesn't mean it's nonsensical. I think I have orders of magnitude higher utility than a microbe, and that the microbe can't understand that. One reason we develop mathematical models is that they let us work with things that we don't intuitively understand.

If you say "Utility can't go that high", you're also rejecting utility maximization. Just in a different way.

Comment author: taw 07 August 2009 04:54:13PM 0 points [-]

Nothing about the utility maximization model says the utility function is unbounded - the only mathematical assumptions for a well-behaved utility function are U'(x) >= 0, U''(x) <= 0.

If the function is, say, U(x) = 1 - 1/(1+x), with U'(x) = (x+1)^-2, then it's a perfectly well-behaved utility function, yet it never even reaches 1.
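The claimed properties of this example can be verified directly (a quick numeric check; the sample points and the finite-difference step are arbitrary choices):

```python
def U(x):
    return 1 - 1 / (1 + x)

def U_prime(x):  # the stated derivative: (x + 1)**-2
    return (x + 1) ** -2

xs = [0, 1, 10, 100, 10**6]
# Strictly increasing (U' > 0), yet bounded: U never reaches 1.
assert all(U(a) < U(b) for a, b in zip(xs, xs[1:]))
assert all(U(x) < 1 for x in xs)
# The stated derivative matches a finite-difference estimate at x = 2.
h = 1e-6
assert abs((U(2 + h) - U(2)) / h - U_prime(2)) < 1e-5
```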

And utility maximization is just a model that breaks easily - it can be useful for humans to some limited extent, but we know humans break it all the time. Trying to imagine utilities orders of magnitude higher than current gets it way past its breaking point.

Comment author: Nick_Tarleton 07 August 2009 05:35:44PM *  6 points [-]

Nothing about utility maximization model says utility function is unbounded

Yep.

the only mathematical assumptions for a well behaved utility function are U'(x) >= 0, U''(x) <= 0

Utility functions aren't necessarily over domains that allow their derivatives to be scalar, or even meaningful (my notional u.f., over 4D world-histories or something similar, sure isn't). Even if one is, or if you're holding fixed all but one (real-valued) of the parameters, this is far too strong a constraint for non-pathological behavior. E.g., most people's (notional) utility is presumably strictly decreasing in the number of times they're hit with a baseball bat, and non-monotonic in the amount of salt on their food.

Comment author: conchis 11 August 2009 10:05:13PM *  1 point [-]

Sorry for coming late to this party. ;)

Much of this discussion seems to me to rest on a similar confusion to that evidenced in "Expectation maximization implies average utilitarianism".

As I just pointed out again, the vNM axioms merely imply that "rational" decisions can be represented as maximising the expectation of some function mapping world histories into the reals. This function is conventionally called a utility function. In this sense of "utility function", your preferences over gambles determine your utility (up to an affine transform), so when Omega says "I'll double your utility" this is just a very roundabout (and rather odd) way of saying something like "I will do something sufficiently good that it will induce you to accept my offer".* Given standard assumptions about Omega, this pretty obviously means that you accept the offer.

The confusion seems to arise because there are other mappings from world histories into the reals that are also conventionally called utility functions, but which have nothing in particular to do with the vNM utility function. When we read "I'll double your utility" I think we intuitively parse the phrase as referring to one of these other utility functions, which is when problems start to ensue.

Maximising expected vNM utility is the right thing to do. But "maximise expected vNM utility" is not especially useful advice, because we have no access to our vNM utility function unless we already know our preferences (or can reasonably extrapolate them from preferences we do have access to). Maximising expected utilons is not necessarily the right thing to do. You can maximize any (potentially bounded!) positive monotonic transform of utilons and you'll still be "rational".

* There are sets of "rational" preferences for which such a statement could never be true (your preferences could be represented by a bounded utility function where doubling would go above the bound). If you had such preferences and Omega possessed the usual Omega-properties, then she would never claim to be able to double your utility: ergo the hypothetical implicitly rules out such preferences.

NB: I'm aware that I'm fudging a couple of things here, but they don't affect the point, and unfudging them seemed likely to be more confusing than helpful.

Comment author: Vladimir_Nesov 11 August 2009 10:16:47PM *  0 points [-]

so when Omega says "I'll double your utility" this is just a very roundabout (and rather odd) way of saying something like "I will do something sufficiently good that it will induce you to accept my offer"

It's not that easy. As humans are not formally rational, the problem is about whether to bite this particular bullet, showing a form that following the decision procedure could take and asking if it's a good idea to adopt a decision procedure that forces such decisions. If you already accept the decision procedure, of course the problem becomes trivial.

Comment author: conchis 11 August 2009 11:16:53PM *  0 points [-]

Which decision procedure are you talking about? Maximising expected vNM utility and maximizing (e.g.) expected utilons are quite different procedures - which was basically my point.

The former doesn't force such decisions at all. That's precisely why I said that it's not useful advice: all it says is that you should take the gamble if you prefer to take the gamble.* (Moreover, if you did not prefer to take the gamble, the hypothetical doubling of vNM utility could never happen, so the set up already assumes you prefer the gamble. This seems to make the hypothetical not especially useful either.)

On the other hand "maximize expected utilons" does provide concrete advice. It's just that (AFAIK) there's no reason to listen to that advice unless you're risk-neutral over utilons. If you were sufficiently risk averse over utilons then a 50% chance of doubling them might not induce you to take the gamble, and nothing in the vNM axioms would say that you're behaving irrationally. The really interesting question then becomes whether there are other good reasons to have particular risk preferences with respect to utilons, but it's a question I've never heard a particularly good answer to.

* At least provided doing so would not result in an inconsistency in your preferences. [ETA: Actually, if your preferences are inconsistent, then they won't have a vNM utility representation, and Omega's claim that she will double your vNM utility can't actually mean anything. The set-up therefore seems to imply that you preferences are necessarily consistent. There sure seem to be a lot of surreptitious assumptions built in here!]
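The point above about risk preferences over utilons can be made concrete (a toy sketch; the bounded concave transform f(v) = 1 - 2^(-v), the death value v = 0, and the starting point are all my own illustrative choices):

```python
# An agent whose vNM utility is a bounded, concave function of "utilons" v.
# Death is taken to be v = 0. All specific numbers are illustrative.
def f(v):
    return 1 - 2 ** (-v)

v = 10.0  # current utilons
eu_decline = f(v)
eu_accept = 0.5 * f(0.0) + 0.5 * f(2 * v)  # 50% death, 50% doubled utilons

assert eu_accept < eu_decline  # rejecting the gamble maximizes expected f
```

This agent is still a perfectly "rational" expected vNM utility maximizer, yet it turns down a 50/50 double-or-die bet on utilons — illustrating that "maximize expected utilons" is not implied by the vNM axioms.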

Comment author: Vladimir_Nesov 12 August 2009 02:10:42PM 0 points [-]

Which decision procedure are you talking about? Maximising expected vNM utility and maximizing (e.g.) expected utilons are quite different procedures - which was basically my point.

[...] you should take the gamble if you prefer to take the gamble

The "prefer" here isn't immediate. People have (internal) arguments about what should be done in what situations precisely because they don't know what they really prefer. There is an easy answer to go with the whim, but that's not preference people care about, and so we deliberate.

When all confusion is defeated, and the preference is laid out explicitly as a decision procedure that just crunches numbers and produces a decision that is, by construction, exactly the most preferable action, there is nothing to argue about. Argument is not part of this form of decision procedure.

In real life, argument is an important part of any decision procedure, and it is the means by which we could select a decision procedure that doesn't involve argument. You look at the possible solutions produced by many tools, and judge which of them to implement. This makes the decision procedure different from the first kind.

One of the tools you consider may be a "utility maximization" thingy. You can't say that it's by definition the right decision procedure, as first you have to accept it as such through argument. And this applies not only to the particular choice of prior and utility, but also to the algorithm itself, to the possibility of representing your true preference in this form.

The "utilons" of the post linked above look different from the vN-M expected utility because their discussion involved argument, informal steps. This doesn't preclude the topic the argument is about, the "utilons", from being exactly the same (expected) utility values, approximated to suit more informal discussion. The difference is that the informal part of decision-making is considered as part of decision procedure in that post, unlike what happens with the formal tool itself (that is discussed there informally).

By considering the double-my-utility thought experiment, the following question can be considered: assuming that the best possible utility+prior are chosen within the expected utility maximization framework, do the decisions generated by the resulting procedure look satisfactory? That is, is this form of decision procedure adequate, as an ultimate solution, for all situations? The answer can be "no", which would mean that expected utility maximization isn't a way to go, or that you'd need to apply it differently to the problem.

Comment author: conchis 12 August 2009 04:02:27PM *  0 points [-]

I'm struggling to figure out whether we're actually disagreeing about anything here, and if so, what it is. I agree with most of what you've said, but can't quite see how it connects to the point I'm trying to make. It seems like we're somehow managing to talk past each other, but unfortunately I can't tell whether I'm missing your point, you're missing mine, or something else entirely. Let's try again... let me know if/when you think I'm going off the rails here.

If I understand you correctly, you want to evaluate a particular decision procedure "maximize expected utility" (MEU) by seeing whether the results it gives in this situation seem correct. (Is that right?)

My point was that the result given by MEU, and the evidence that this can provide, both depend crucially on what you mean by utility.

One possibility is that by utility, you mean vNM utility. In this case, MEU clearly says you should accept the offer. As a result, it's tempting to say that if you think accepting the offer would be a bad idea, then this provides evidence against MEU (or equivalently, since the vNM axioms imply MEU, that you think it's ok to violate the vNM axioms). The problem is that if you violate the vNM axioms, your choices will have no vNM utility representation, and Omega couldn't possibly promise to double your vNM utility, because there's no such thing. So for the hypothetical to make sense at all, we have to assume that your preferences conform to the vNM axioms. Moreover, because the vNM axioms necessarily imply MEU, the hypothetical also assumes MEU, and it therefore can't provide evidence either for or against it.*

If the hypothetical is going to be useful, then utility needs to mean something other than vNM utility. It could mean hedons, it could mean valutilons,** it could mean something else. I do think that responses to the hypothetical in these cases can provide useful evidence about the value of decision procedures such as "maximize expected hedons" (MEH) or "maximize expected valutilons" (MEV). My point on this score was simply that there is no particular reason to think that either MEH or MEV were likely to be an optimal decision procedure to begin with. They're certainly not implied by the vNM axioms, which require only that you should maximise the expectation of some (positive) monotonic transform of hedons or valutilons or whatever.*** [ETA: As a specific example, if you decide to maximize the expectation of a bounded concave function of hedons/valutilons, then even if hedons/valutilons are unbounded, you'll at some point stop taking bets to double your hedons/valutilons, but still be an expected vNM utility maximizer.]

Does that make sense?

* This also means that if you think MEU gives the "wrong" answer in this case, you've gotten confused somewhere - most likely about what it means to double vNM utility.

** I define these here as the output of a function that maps a specific, certain, world history (no gambles!) into the reals according to how well that particular world history measures up against my values. (Apologies for the proliferation of terminology - I'm trying to guard against the possibility that we're using "utilons" to mean different things without inadvertently ending up in a messy definitional argument. ;))

*** A corollary of this is that rejecting MEH or MEV does not constitute evidence against the vNM axioms.

Comment author: Vladimir_Nesov 12 August 2009 04:38:33PM 0 points [-]

You are placing on a test the following well-defined tool: an expected utility maximizer with a prior and a "utility" function that evaluates events in the world. By "utility" function here I mean just some function, so you can drop the word "utility". Even if people can't represent their preference as expected some-function maximization, such a tool could still be constructed. The question is whether such a tool can be made that always agrees with human preference.

An easy question is what happens when you use "hedons" or something else equally inadequate in the role of utility function: the tool starts to make decisions with which we disagree. Case closed. But maybe there are other settings under which the tool is in perfect agreement with human judgment (after reflection).

The utility-doubling thought experiment compares what is better according to the judgment of the tool (take the card) with what is better according to the judgment of a person (maybe don't take the card). As the tool's decision in this thought experiment is invariant to the tool's settings ("utility" and prior), showing that the tool's decision is wrong according to a person's preference (after "careful" reflection) proves that there is no way to set up "utility" and prior so that the "utility"-maximization tool represents that person's preference.

Comment author: conchis 12 August 2009 06:08:16PM 1 point [-]

As the tool's decision in this thought experiment is made invariant on the tool's settings ("utility" and prior), showing that the tool's decision is wrong according to a person's preference (after "careful" reflection), proves that there is no way to set up "utility"

My argument is that, if Omega is offering to double vNM utility, the set-up of the thought experiment rules out the possibility that the decision could be wrong according to a person's considered preference (because the claim to be doubling vNM utility embodies an assumption about what a person's considered preference is). AFAICT, the thought experiment then amounts to asking: "If I should maximize expected utility, should I maximize expected utility?" Regardless of whether I should actually maximize expected utility or not, the correct answer to this question is still "yes". But the thought experiment is completely uninformative.

Do you understand my argument for this conclusion? (Fourth para of my previous comment.) If you do, can you point out where you think it goes astray? If you don't, could you tell me what part you don't understand so I can try to clarify my thinking?

On the other hand, if Omega is offering to double something other than vNM utility (hedons/valutilons/whatever) then I don't think we have any disagreement. (Do we? Do you disagree with anything I said in para 5 of my previous comment?)

My point is just that the thought experiment is underspecified unless we're clear about what the doubling applies to, and that people sometimes seem to shift back and forth between different meanings.

Comment author: PhilGoetz 12 August 2009 06:33:13PM *  1 point [-]

What you just said seems correct.

What was originally at issue is whether we should act in ways that will eventually destroy ourselves.

I think the big-picture conclusion from what you just wrote is that, if we see that we're acting in ways that will probably exterminate life in short order, that doesn't necessarily mean it's the wrong thing to do.

However, in our circumstances, time discounting and "identity discounting" encourage us to start enjoying and dooming ourselves now; whereas it would probably be better to spread life to a few other galaxies first, and then enjoy ourselves.

(I admit that my use of the word "better" is problematic.)

Comment author: conchis 13 August 2009 09:15:03AM 1 point [-]

if we see that we're acting in ways that will probably exterminate life in short order, that doesn't necessarily mean it's the wrong thing to do.

Well, I don't disagree with this, but I would still agree with it if you substituted "right" for "wrong", so it doesn't seem like much of a conclusion. ;)

Comment author: Vladimir_Nesov 13 August 2009 07:38:49AM *  0 points [-]

You argue that the thought experiment is trivial and doesn't solve any problems. In my comments above I described a specific setup that shows how to use (interpret) the thought experiment to potentially obtain non-trivial results.

Comment author: conchis 13 August 2009 08:52:01AM *  0 points [-]

I argue that the thought experiment is ambiguous, and that for a certain definition of utility (vNM utility), it is trivial and doesn't solve any problems. For this definition of utility I argue that your example doesn't work. You do not appear to have engaged with this argument, despite repeated requests to point out either where it goes wrong, or where it is unclear. If it goes wrong, I want to know why, but this conversation isn't really helping.

For other definitions of utility, I do not, and have never claimed that the thought experiment is trivial. In fact, I think it is very interesting.

Comment author: Jonathan_Graehl 06 August 2009 11:36:13PM *  1 point [-]

Assuming the utility increase holds my remaining lifespan constant, I'd draw a card every few years (if allowed). I don't claim to maximize "expected integral of happiness over time" by doing so (substitute utility for happiness if you like; but perhaps utility should be forward-looking and include expected happiness over time as just one of my values?). Of course, by supposing my utility can be doubled, I'll never be fully satisfied.

Comment author: dclayh 07 August 2009 07:08:30AM 0 points [-]

The "justified expectation of pleasant surprises", as someone or other said.

Comment author: [deleted] 06 August 2009 10:48:13PM 1 point [-]

It seems like you are assuming that the only effect of dying is that it brings your utility to 0. I agree that after you are dead your utility is 0, but before you are dead you have to die, and I think that is a strongly negative utility event. When I picture my utility playing this game, I think that if I start with X, then I draw a star and have 2X. Then I draw a skull, I look at the skull, my utility drops to -10000X as I shit my pants and beg Omega to let me live, and then he kills me and my utility is 0.

I don't know how much sense that makes mathematically. But it certainly feels to me like fear of death makes dying a more negative event than just a drop to utility 0.

Comment author: PhilGoetz 06 August 2009 11:11:15PM 3 points [-]

The skull cards are electrified, and will kill you instantly and painlessly as soon as you touch them.

(Be careful to touch only the cards you take.)

Comment author: orthonormal 06 August 2009 09:25:50PM 1 point [-]

I'd wondered why nobody brought up MWI and anthropic probabilities yet.

As for this, it reminds me of a Dutch book argument Eliezer discussed some time ago. His argument was that in cases where some kind of infinity is on the table, aiming to satisfice rather than optimize can be the better strategy.

In my case (assuming I'm quite confident in Many-Worlds), I might decide to take a card or two, go off and enjoy myself for a week, come back and take another card or two, et cetera.

Comment author: Vladimir_Nesov 06 August 2009 10:42:46PM 4 points [-]

Many worlds have nothing to do with validity of suicidal decisions. If you have an answer that maximizes expected utility but gives almost-certain probability of total failure, you still take it in a deterministic world. There is no magic by which deterministic world declares that the decision-theoretic calculation is invalid in this particular case, while many-worlds lets it be.

Comment author: PhilGoetz 06 August 2009 10:48:13PM *  0 points [-]

I think you're right. Would you agree that this is a problem with following the policy of maximizing expected utility? Or would you keep drawing cards?

Comment author: Cyan 06 August 2009 11:13:28PM 5 points [-]

This is a variant on the St. Petersburg paradox, innit? My preferred resolution is to assert that any realizable utility function is bounded.

Comment author: PhilGoetz 07 August 2009 12:26:42AM *  1 point [-]

Thanks for the link - this is another form of the same paradox orthonormal linked to, yes. The Wikipedia page proposes numerous "solutions", but most of them just dodge the question by taking advantage of the fact that the paradox was posed using "ducats" instead of "utility". It seems like the notion of "utility" was invented in response to this paradox. If you pose it again using the word "utility", these "solutions" fail. The only possibly workable solution offered on that Wikipedia page is:

Rejection of mathematical expectation

Various authors, including Jean le Rond d'Alembert and John Maynard Keynes, have rejected maximization of expectation (even of utility) as a proper rule of conduct. Keynes, in particular, insisted that the relative risk of an alternative could be sufficiently high to reject it even were its expectation enormous.

Comment author: Cyan 07 August 2009 12:30:06AM 1 point [-]

The page notes the reformulation in terms of utility, which it terms the "super St. Petersburg paradox". (It doesn't have its own section, or I'd have linked directly to that.) I agree that there doesn't seem to be a workable solution -- my last refuge was just destroyed by Vladimir Nesov.

Comment author: Wei_Dai 07 August 2009 01:38:39AM 4 points [-]

I agree that there doesn't seem to be a workable solution -- my last refuge was just destroyed by Vladimir Nesov.

I'm afraid I don't understand the difficulty here. Let's assume that Omega can access any point in configuration space and make that the reality. Then either (A) at some point it runs out of things with which to entice you to draw another card, in which case your utility function is bounded, or (B) it never runs out of such things, in which case your utility function is unbounded.

Why is this so paradoxical again?

Comment author: Wei_Dai 07 August 2009 09:52:57PM 1 point [-]

After further thought, I see that case (B) can be quite paradoxical. Consider Eliezer's utility function, which is supposedly unbounded as a function of how many years he lives. In other words, Omega can increase Eliezer's utility without bound just by giving him increasingly longer lives. Expected utility maximization then dictates that he keeps drawing cards one after another, even though he knows that by doing so, with probability 1 he won't live to enjoy his rewards.

Comment author: Vladimir_Nesov 07 August 2009 10:16:11PM *  4 points [-]

When you go to infinity, you'd need to define additional mathematical structure that answers your question. You can't just conclude that the correct course of action is to keep drawing cards for eternity, doing nothing else. Even if at each moment the right action is to draw one more card, when you consider the overall strategy, the strategy of drawing cards for all time may be a wrong strategy.

For example, consider the following preference on infinite strings. A string has utility 0, unless it has the form 11111.....11112222...., that is a finite number of 1 followed by infinite number of 2, in which case its utility is the number of 1s. Clearly, a string of this form with one more 1 has higher utility than a string without, and so a string with one more 1 should be preferred. But a string consisting only of 1s doesn't have the non-zero-utility form, because it doesn't have the tail of infinite number of 2s. It's a fallacy to follow an incremental argument to infinity. Instead, one must follow a one-step argument that considers the infinite objects as whole.
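The string example above can be encoded in a toy form (assumption: an infinite string is represented by its count of leading 1s plus a flag for whether the infinite tail of 2s is present):

```python
def utility(n_ones, has_tail_of_twos):
    # Utility is the number of leading 1s, but only for strings of the
    # form 11...1222...; anything else (including all-1s) gets 0.
    return n_ones if has_tail_of_twos else 0

# Each incremental step (one more 1) is an improvement...
assert utility(6, True) > utility(5, True)
# ...but the "limit" of taking every step, the all-1s string, is worth 0.
assert utility(10**9, False) == 0
```

Following the incremental argument forever lands you on the zero-utility string; the right comparison is between whole infinite strategies, not successive steps.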

Comment author: Alicorn 07 August 2009 09:55:54PM 2 points [-]

Does Omega's utility doubling cover the contents of the as-yet-untouched deck? It seems to me that it'd be pretty spiffy re: my utility function for the deck to have a reduced chance of killing me.

Comment author: PhilGoetz 07 August 2009 04:00:32AM 1 point [-]

If it's not paradoxical, how many cards would you draw?

Comment author: Wei_Dai 07 August 2009 09:09:59AM 1 point [-]

I guess no more than 10 cards. That's based on not being able to imagine a scenario such that I'd prefer .999 probability of death + .001 probability of that scenario to the status quo. But it's just a guess, because Omega might have a better imagination than I do, or understand my utility function better than I do.

Comment author: Cyan 07 August 2009 02:14:38AM *  0 points [-]

Yeesh. I'm changing my mind again tonight. My only excuse is that I'm sick, so I'm not thinking as straight as I might.

I was originally thinking that Vladimir Nesov's reformulation showed that I would always accept Omega's wager. But now I see that at some point U1+3*(U1-U0) must exceed any upper bound (assuming I survive that long).

Given U1 (utility of refusing initial wager), U0 (utility of death), U_max, and U_n (utility of refusing wager n assuming you survive that long), it might be possible that there is a sequence of wagers that (i) offer positive expected utility at each step; (ii) asymptotically approach the upper bound if you survive; and (iii) have a probability of survival approaching zero. I confess I'm in no state to cope with the math necessary to give such a sequence or disprove its existence.

Comment author: pengvado 07 August 2009 03:23:55AM *  1 point [-]

There is no such sequence. Proof:

In order for wager n to have nonnegative expected utility, P(death)*U0 + (1-P(death))*U_(n+1) >= U_n. Equivalently, P(death this time | survived until n) <= (U_(n+1)-U_n) / (U_(n+1)-U0).

Assume the worst case, equality. Then the cumulative probability of survival decreases by exactly the same factor as your utility (conditioned on survival) increases. This is simple multiplication, so it's true of a sequence of borderline wagers too.

With a bounded utility function, the worst sequence of wagers you'll accept gives, in total, P(death) <= (U_max-U1)/(U_max-U0). Which is exactly what you'd expect.
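A quick numeric check of the telescoping argument (Python, with made-up values U0 = 0, U1 = 1, U_max = 10, and a sequence of borderline wagers that halves the remaining utility gap each time):

```python
# Made-up numbers: U0 = death, utilities[0] = status quo, climbing toward U_max.
U0, U_max = 0.0, 10.0
utilities = [1.0]
for _ in range(50):
    utilities.append((utilities[-1] + U_max) / 2)  # halve the remaining gap

p_survive = 1.0
for U_n, U_next in zip(utilities, utilities[1:]):
    # Borderline wager: expected utility exactly unchanged, so
    # P(death) = (U_next - U_n) / (U_next - U0).
    p_death = (U_next - U_n) / (U_next - U0)
    p_survive *= 1 - p_death

# Each factor (U_n - U0)/(U_next - U0) telescopes, so cumulative survival
# is (U_1 - U0)/(U_N - U0), here about 0.1.
expected = (utilities[0] - U0) / (utilities[-1] - U0)
print(p_survive, expected)
```

The cumulative survival probability tracks the telescoped ratio exactly, so the total death probability approaches a fixed limit rather than 1, just as the proof says.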

Comment author: PhilGoetz 07 August 2009 04:01:27AM 0 points [-]

How would it help if this sequence existed?

Comment author: John_Maxwell_IV 07 August 2009 08:37:05PM *  3 points [-]

Why is rejection of mathematical expectation an unworkable solution?

This isn't the only scenario where straight expectation is problematic. Pascal's Mugging, timeless decision theory, and maximization of expected growth rate come to mind. That makes four.

In my opinion, LWers should not give expected utility maximization the same axiomatic status that they award consequentialism. Is this worth a top level post?

Comment author: Pfft 11 August 2009 02:48:52AM *  1 point [-]

This is exactly my take on it also.

There is a model which is standard in economics which says "people maximize expected utility; risk averseness arises because utility functions are concave". This has always struck me as extremely fishy, for two reasons: (a) it gives rise to paradoxes like this, and (b) it doesn't at all match what making a choice feels like for me: if someone offers me a risky bet, I feel inclined to reject it because it is risky, not because I have done some extensive integration of my utility function over all possible outcomes. So it seems a much safer assumption to just assume that people's preferences are a function of probability distributions over outcomes, rather than making the more restrictive assumption that that function has to arise as an integral over utilities of individual outcomes.

So why is the "expected utility" model so popular? A couple of months ago I came across a blog-post which provides one clue: it pointed out that standard zero-sum game theory works when players maximize expected utility, but does not work if they have preferences about probability distributions of outcomes (since then introducing mixed strategies won't work).

So an economist who wants to apply game theory will be inclined to assume that actors are maximizing expected utility; but we LWers shouldn't necessarily.

Comment author: John_Maxwell_IV 11 August 2009 06:01:53PM 0 points [-]

There is a model which is standard in economics which say "people maximize expected utility; risk averseness arises because utility functions are convex".

Do you mean concave?

A couple of months ago I came across a blog-post which provides one clue: it pointed out that standard zero-sum game theory works when players maximize expected utility, but does not work if they have preferences about probability distributions of outcomes (since then introducing mixed strategies won't work).

Technically speaking, isn't maximizing expected utility a special case of having preferences over probability distributions of outcomes? So maybe you should instead say "does not work elegantly if they have arbitrary preferences about probability distributions."

This is what I tend to do when I'm having conversations in real life; let's see how it works online :-)

Comment author: Cyan 07 August 2009 11:09:28PM 0 points [-]

Why is rejection of mathematical expectation an unworkable solution?

Well, rejection's not a solution per se until you pick something justifiable to replace it with.

I'd be interested in a top-level post on the subject.

Comment author: Vladimir_Nesov 06 August 2009 11:22:02PM *  0 points [-]

If this condition makes a difference to you, your answer must also be to take as many cards as Omega has to offer.

Comment author: Cyan 06 August 2009 11:29:58PM 0 points [-]

I don't follow.

(My assertion implies that Omega cannot double my utility indefinitely, so it's inconsistent with the problem as given.)

Comment author: Vladimir_Nesov 06 August 2009 11:35:43PM *  2 points [-]

You'll just have to construct a less convenient possible world where Omega has merely a trillion cards and not an infinite number of them, and answer the question about taking a trillion cards, which, if you accept the lottery all the way, leaves you with 2-to-the-trillionth-power-to-one odds of dying. Find my reformulation of the topic problem here.

Comment author: PhilGoetz 07 August 2009 12:27:40AM 0 points [-]

Agreed.

Comment author: Cyan 07 August 2009 12:24:19AM 0 points [-]

Gotcha. Nice reformulation.

Comment author: PhilGoetz 06 August 2009 10:34:32PM *  1 point [-]

His argument was that in cases where some kind of infinity is on the table, aiming to satisfice rather than optimize can be the better strategy.

Can we apply that to decisions about very-long-term-but-not-infinitely-long times and very-small-but-not-infinitely-small risks?

Hmm... it appears not. So I don't think that helps us.

Where did you get the term "satisfice"? I just read that Dutch-book post, and while Eliezer points out the flaw in demanding that the Bayesian take the infinite bet, I didn't see the word 'satisficing' in there anywhere.

Comment author: orthonormal 07 August 2009 03:31:13AM 1 point [-]

Huh, I must have "remembered" that term into the post. What I mean is more succinctly put in this comment.

Can we apply that to decisions about very-long-term-but-not-infinitely-long times and very-small-but-not-infinitely-small risks?

Hmm... it appears not. So I don't think that helps us.

This question still confuses me, though; if it's a reasonable strategy to stop at N in the infinite case, but not a reasonable strategy to stop at N if there are only N^^^N iterations... something about it disturbs me, and I'm not sure that Eliezer's answer is actually a good patch for the St. Petersburg Paradox.

Comment author: Jonathan_Graehl 07 August 2009 12:11:21AM 1 point [-]

It's an old AI term meaning roughly "find a solution that isn't (likely) optimal, but good enough for some purpose, without too much effort". It implies that either your computer is too slow for it to be economical to find the true optimum under your models, or that you're too dumb to come up with the right models, thus the popularity of the idea in AI research.

You can be impressed if someone starts with a criterion for what "good enough" means, and then comes up with a method they can prove meets that criterion. Otherwise it's spin.

Comment author: Douglas_Knight 07 August 2009 04:50:32AM 0 points [-]

I'm more used to it as a psychology (or behavioral econ) term for a specific, psychologically realistic form of bounded rationality. In particular, I'm used to it being negative! (that is, a heuristic which often produces a bias)

Comment author: [deleted] 29 May 2012 03:49:39PM *  0 points [-]

But even if you don't believe in many worlds, I think you do the same thing, unless you are not maximizing expected utility. (Unless chance is quantized so that there is a minimum possible possibility. I don't think that would help much anyway.)

Or unless your utility function is bounded above, and the utility you assign to the status quo is more than the average of the utility of dying straight away and the upper bound of your utility function, in which case Omega couldn't possibly double your utility. (Indeed, I can't think of any X right now such that I'd prefer {50% X, 10% I die right now, 40% business as usual} to {100% business as usual}.)

Comment author: [deleted] 07 August 2009 02:26:17AM 0 points [-]

If I draw cards until I die, my expected utility is positive infinity. Though I will almost surely die and end up with utility 0, it is logically possible that I will never die, and end up with a utility of positive infinity. In this case, 10 + 0(positive infinity) = positive infinity.

The next paragraph requires that you assume our initial utility is 1.

If you want, warp the problem into an isomorphic problem where the probabilities are different and all utilities are finite. (Isn't it cool how you can do that?) In the original problem, there's always a 5/6 chance of utility doubling and a 1/6 chance of it going to 1/2. (Being dead isn't THAT bad, I guess.) Let's say that where your utility function was U(w), it is now f(U(w)), where f(x) = 1 - 1/(2 + log_2 x). In this case, the utilities 1/2, 1, 2, 4, 8, 16, . . . become 0, 1/2, 2/3, 3/4, 4/5, 5/6, . . . . So, your initial utility is 1/2, and Omega will either lower your utility to 0 or raise it by applying the function U' = 1/(2 - U). Your expected utility after drawing once was previously U' = (5/3)U + 1/12; it's now... okay, my math-stamina has run out. But if you calculate expected utility, and then calculate the probability that results in that expected utility, I'm betting that you'll end up with a 1/2 probability of ever dying.

(The above paragraph surrounding a nut: any universe can be interpreted as one where the probabilities are different and the utility function has been changed to match... often, probably.)
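For what it's worth, the rescaling can be checked numerically (a Python sketch of f; note that on the new scale, doubling the original utility acts as U -> 1/(2 - U), which follows algebraically from the definition of f):

```python
import math

# The rescaling from the comment above: f(x) = 1 - 1/(2 + log2(x)),
# which maps the doubling sequence 1/2, 1, 2, 4, ... onto 0, 1/2, 2/3, 3/4, ...
def f(x):
    return 1 - 1 / (2 + math.log2(x))

xs = [0.5, 1, 2, 4, 8, 16]
print([round(f(x), 6) for x in xs])  # -> [0.0, 0.5, 0.666667, 0.75, 0.8, 0.833333]

# Doubling the original utility corresponds to U' = 1/(2 - U) on the new scale:
for x in [1, 2, 4, 8]:
    assert abs(f(2 * x) - 1 / (2 - f(x))) < 1e-12
```

So the transformed problem really does have bounded utilities (everything lands in [0, 1)), with the doubling step replaced by a map that crawls toward the bound.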

Comment author: tut 07 August 2009 07:53:49AM 0 points [-]

I don't believe in quantifiable utility (and thus not in doubled utility), so I take no cards. But yeah, that looks like a way to make utilitarianism equivalent to being suicidal.

Comment author: Aurini 06 August 2009 10:56:07PM 0 points [-]

This is completely off topic (and maybe I'm just not getting the joke), but does Many Worlds necessarily imply many human worlds? Star Trek tropes aside, I was under the impression that Many Worlds only mattered to gluons and Schrödinger's Cat - that we macro creatures are pretty much screwed.

...

You were joking, weren't you? I like jokes.

Comment author: PhilGoetz 06 August 2009 11:14:01PM 0 points [-]

"Many worlds" here is shorthand for "every time some event happens that has more than one possible outcome, for every possible outcome, there is (or comes into being) a world in which that was the outcome."

As far as the truth or falsity of Many Worlds mattering to us - I don't think it can matter, if you maximize expected utility (over the many worlds).

Comment author: Eliezer_Yudkowsky 08 August 2009 11:19:00AM 2 points [-]

That is not what Many Worlds says. It is only about quantum outcomes, not "possible" outcomes.

Comment author: Alicorn 06 August 2009 09:28:54PM -1 points [-]

Double your utility for the rest of your life compared to what? If you draw cards until you die, that sounds like it just means you have twice as much fun drawing cards as you would have without help. I guess that could be lots of fun if you're the kind of person who gets a rush off of Russian roulette under normal circumstances, but if you're not, you'd probably be better off flipping off Omega and watching some TV.

What if your utility would have been negative? Doesn't doubling it make it twice as bad?

Comment author: PhilGoetz 06 August 2009 09:48:12PM 3 points [-]

Good point. Better not draw a card if you have negative utility.

Just trust that Omega can double your utility, for the sake of argument. If you stop before you die, you get all those doublings of utility for the rest of your life.

I'd certainly draw one card. But would I stop drawing cards?

Thinking about this in commonsense terms is misleading, because we can't imagine the difference between 8x utility and 16x utility. But we have a mathematical theory about rationality. Just apply that, and you find the results seem unsatisfactory.

Comment author: HopeFox 29 May 2011 11:47:37AM 1 point [-]

Thinking about this in commonsense terms is misleading, because we can't imagine the difference between 8x utility and 16x utility

I can't even imagine doubling my utility once, if we're only talking about selfish preferences. If I understand vNM utility correctly, then a doubling of my personal utility is a situation which I'd be willing to accept a 50% chance of death in order to achieve (assuming that my utility is scaled so that U(dead) = 0, and without setting a constant level, we can't talk about doubling utility). Given my life at the moment (apartment with mortgage, two chronically ill girlfriends, decent job with unpleasantly long commute, moderate physical and mental health), and thinking about the best possible life I could have (volcano lair, catgirls), I wouldn't be willing to take that bet. Intuition has already failed me on this one. If Omega can really deliver on his promise, then either he's offering a lifestyle literally beyond my wildest dreams, or he's letting me include my preferences for other people in my utility function, in which case I'll probably have cured cancer by the tenth draw or so, and I'll run into the same breakdown of intuition after about seventy draws, by which time everyone else in the world should have their own volcano lairs and catgirls.

With the problem as stated, any finite number of draws is the rational choice, because the proposed utility of N draws outweighs the risk of death, no matter how high N is. The probability of death is always less than 1 for a finite number of draws. I don't think that considering the limit as N approaches infinity is valid, because every time you have to decide whether or not to draw a card, you've only drawn a finite number of cards so far. Certainty of death also occurs in the same limit as infinite utility, and infinite utility has its own problems, as discussed elsewhere in this thread. It might also leave you open to Pascal's Scam - give me $5 and I'll give you infinite utility!

But we have a mathematical theory about rationality. Just apply that, and you find the results seem unsatisfactory.

I agree - to keep drawing until you draw a skull seems wrong. However, to say that something "seems unsatisfactory" is a statement of intuition, not mathematics. Our intuition can't weigh the value of exponentially increasing utility against the cost of an exponentially diminishing chance of survival, so it's no wonder that the mathematically derived answer doesn't sit well with intuition.