Vladimir_Nesov comments on Exterminating life is rational - Less Wrong

17 Post author: PhilGoetz 06 August 2009 04:17PM


Comments (272)


Comment author: Vladimir_Nesov 06 August 2009 10:46:31PM *  7 points

Reformulation to weed out uninteresting objections: Omega knows the expected utility according to your preference, U1, if you go on without its intervention, and the utility U0 < U1 if it kills you. It presents a choice between walking away, that is, settling for expected utility U1, and playing a lottery that gives you with equal (50%) probability either U0 or U1+3*(U1-U0). The expected utility of the lottery is then 0.5*U0 + 0.5*(U1+3*(U1-U0)) = 0.5*(4*U1-2*U0) = U1+(U1-U0) > U1.
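The arithmetic of the lottery above can be checked directly; a minimal sketch, where `u0` and `u1` are illustrative placeholder values for U0 < U1:

```python
def lottery_eu(u0, u1):
    """Expected utility of the 50/50 lottery between U0 and U1 + 3*(U1 - U0)."""
    return 0.5 * u0 + 0.5 * (u1 + 3 * (u1 - u0))

# With any U0 < U1, the lottery's expected utility simplifies to
# U1 + (U1 - U0), which strictly exceeds walking away with U1.
u0, u1 = 10.0, 20.0
assert lottery_eu(u0, u1) == u1 + (u1 - u0)
assert lottery_eu(u0, u1) > u1
```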

My answer: even in a deterministic world, I take the lottery as many times as Omega has to offer, knowing that the probability of death tends to certainty as I go on. This example fails for money only because of diminishing returns. If Omega really does possess the ability to double utility, the low probability of the positive outcome gets squashed by the high utility of that outcome.
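One way to see why repeated acceptance is still favored: under the assumption (mine, not spelled out in the comment) that each win re-offers the same lottery relative to your new utility, a win multiplies the gap to U0 by 4 while survival probability halves, so expected utility keeps growing even as the chance of surviving all rounds goes to zero:

```python
def iterated_eu(u0, u1, n):
    """Expected utility of accepting n successive lotteries, under the
    assumption that each win quadruples the gap to U0 and each round
    kills you (yielding U0) with probability 0.5."""
    gap = u1 - u0
    survive = 0.5 ** n          # probability of surviving all n rounds
    return (1 - survive) * u0 + survive * (u0 + 4 ** n * gap)

# Expected utility grows like U0 + 2**n * (U1 - U0) while the
# survival probability 0.5**n tends to zero.
assert iterated_eu(0.0, 1.0, 1) == 2.0
assert iterated_eu(0.0, 1.0, 10) == 1024.0
```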

Comment author: PhilGoetz 06 August 2009 11:09:55PM *  2 points

Does my entire post boil down to this seeming paradox?

(Yes, I assume Omega can actually double utility.)

The use of U1 and U0 is needlessly confusing. And it changes the game, because now U0 is a utility associated with a single draw, and the analysis of repeated draws will give different answers. There's also too much of a change in going from "you die" to "you get utility U0"; there's some semantic trickiness there.

Comment author: Eliezer_Yudkowsky 07 August 2009 12:37:56AM 11 points

Pretty much. And I should mention at this point that experiments show that, contrary to instructions, subjects nearly always interpret utility as having diminishing marginal utility.

Comment author: PhilGoetz 07 August 2009 03:52:55AM 1 point

Well, that leaves me even less optimistic than before. As long as it's just me saying, "We have options A, B, and C, but I don't think any of them work," there are a thousand possible ways I could turn out to be wrong. But if it reduces to a math problem, and we can't figure out a way around that math problem, hope is harder.

Comment author: TimFreeman 16 May 2011 08:28:58PM 0 points

There's an excellent paper by Peter de Blanc indicating that, under reasonable assumptions, if your utility function is unbounded, then you can't compute finite expected utilities. So if Omega can double your utility an unlimited number of times, you have other problems that cripple you even in the absence of involvement from Omega. Doubling your utility must become a mathematical impossibility at some point.
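De Blanc's actual argument is more careful than this, but a St. Petersburg-style toy example (my illustration, not his construction) shows the flavor of how unbounded utility breaks expectation calculations: if outcome k has probability 2^-k but utility 2^k, the partial sums of the expectation never converge.

```python
def partial_expected_utility(n):
    """Partial sum of E[U] over the first n outcomes, where outcome k
    has probability 2**-k and (unbounded) utility 2**k."""
    return sum((2.0 ** -k) * (2.0 ** k) for k in range(1, n + 1))

# Each outcome contributes exactly 1 to the expectation, so the
# partial sums grow without bound: the expected utility is infinite.
assert partial_expected_utility(10) == 10.0
assert partial_expected_utility(500) == 500.0
```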

That demolishes "Shut up and Multiply", IMO.

SIAI apparently paid Peter to produce that. It should get more attention here.

Comment author: Vladimir_Nesov 16 May 2011 08:55:29PM *  2 points

So if Omega can double your utility an unlimited number of times

This was not assumed; I even explicitly said things like "I take the lottery as many times as Omega has to offer" and "If you really do possess the ability to double utility". To the extent that doubling of utility is actually provided (and no more), we should take the lottery.

Comment author: Larks 16 May 2011 09:06:13PM *  3 points

Also, if your utility function's scope is not limited to perception-sequences, Peter's result doesn't directly apply. If your utility function is linear in actual, rather than perceived, paperclips, Omega might be able to offer you the deal infinitely many times.

Comment author: TimFreeman 16 May 2011 09:14:40PM 1 point

Also, if your utility function's scope is not limited to perception-sequences, Peter's result doesn't directly apply.

How can you act upon a utility function if you cannot evaluate it? The utility function needs inputs describing your situation. The only available inputs are your perceptions.

Comment author: Vladimir_Nesov 16 May 2011 09:35:45PM *  4 points

The utility function needs inputs describing your situation. The only available inputs are your perceptions.

Not so. There's also logical knowledge and logical decision-making, where nothing ever changes and no new observations ever arrive, but the game can still be infinitely long and contain all the essential parts, such as learning new facts and determining new decisions.

(This is of course not relevant to Peter's model, but if you want to look at the underlying questions, then these strange constructions apply.)