lfghjkl comments on Pascal's Muggle (short version) - Less Wrong

29 points · Post author: Eliezer_Yudkowsky 05 May 2013 11:36PM




Comment author: lfghjkl 06 May 2013 02:26:11AM 5 points

Since a human mind can't really conceive of the difference between huge numbers like these, wouldn't it follow that our utility functions are bounded by a horizontal asymptote? And shouldn't that solve this problem?

I mean, if the utility gained from saving x people is no longer allowed to increase without bound, you don't need such improbable leverage penalties. You'd still, of course, have the property that it's better to save more people, just not linearly better.

Comment author: Eliezer_Yudkowsky 06 May 2013 02:28:37AM 7 points

I find that unsatisfactory for the following reasons: first, I am a great believer in life and love without bound; second, I suspect that the number of people in the multiverse is already great enough to max out that sort of asymptote and yet I still care; third, if this number is not already maxed out, I find it counterintuitive that someone another universe over could cause me to experience preference reversals in this universe by manipulating the number of people who already exist inside a box.

Comment author: lfghjkl 07 May 2013 11:05:28PM *  2 points

OK, I may have phrased that badly. My argument is that any agent of bounded computational power is forced to use two utility functions: the one they wish they had (i.e. the unbounded linear version) and the one they are forced to use in their calculations because of their limitations (i.e. an asymptotically bounded approximation).

For those agents capable of self-modification, just add a clause to increase their computational power (thereby increasing the bound of their approximation) whenever the two utilities, at the scales they're working on, differ by more than some small specified number.
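One way to read that clause as code — a sketch under my own assumptions: the same saturating approximation as above, a growth factor standing in for "more computational power", and `eps` as the "small specified number" by which the approximation may diverge from the wished-for linear utility:

```python
def ensure_accuracy(n, bound=1e3, eps=1e-3, growth=10):
    """Grow the bound ('increase computational power') until the
    bounded approximation u(n) is within eps of the wished-for
    linear utility n, at the scale n currently being worked on."""
    def u(n, b):
        return b * n / (n + b)
    while n - u(n, bound) > eps:
        bound *= growth
    return u(n, bound), bound
```

Since the gap is n^2/(n+b), the loop terminates for any finite n, but the bound needed grows roughly quadratically with the scale — which is why, for a number like 3^^^3, "stick around until I can safely modify myself" is the honest answer.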

So, my answer to this person would be "stick around until I can safely modify myself into dealing with your request", or alternatively, if he wants an answer right now after seeing his evidence, "here's 5 dollars".

Comment author: Pentashagon 07 May 2013 07:37:13PM -1 points

Why can't you increase your asymptote with new evidence? If, for instance, your utility was bounded at 2^160 utilons before the mugger opened the sky then just increase your bound according to that evidence and then shut up and multiply to decide whether to pay $5. You can't update to a bound of 3^^^3 in one step since you can't receive enough evidence at once, which is a handy feature for avoiding muggings, but your utility at a distant point in the future is essentially unbounded given enough evidential updates over time.
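A hedged sketch of that update rule (the per-observation cap in bits is my own stand-in for "you can't receive enough evidence at once"; the function name and numbers other than 2^160 and 3^^^3 are illustrative):

```python
def update_bound(bound, evidence_bits, max_bits_per_update=100):
    """Raise the utility bound in response to evidence, at most
    doubling it once per bit of evidence, with a hard cap per
    observation so no single mugging can push it near 3^^^3."""
    return bound * 2 ** min(evidence_bits, max_bits_per_update)

# The mugger opens the sky: strong, but finite, evidence.
bound = update_bound(2**160, 40)    # -> 2**200
# A claimed 3^^^3 payoff would need absurdly many bits at once;
# the cap limits what any single update can do:
bound = update_bound(bound, 10**6)  # -> 2**300, nowhere near 3^^^3
```

Repeated over time, though, the bound is essentially unbounded, matching the comment's point.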

Useful utility bounds should be derivable from our knowledge of the universe. If we can theoretically create 10^80 unique, just-worth-living lives with the estimated matter and energy in the universe then that provides a minimum bound, although it's probably desirable to choose the bound large enough that the 10^80th life is worth nearly as much as the 1st or 10^11th life. When we have evidence for a change in our estimate of the available matter and energy or a change in the efficiency of turning matter and energy into utility we scale the bound appropriately.
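The final step is just multiplicative rescaling. A minimal sketch (the 10^80 figure is the comment's own estimate; the function name and the 100x example are mine):

```python
def rescale_bound(bound, old_estimate, new_estimate):
    """Scale the utility bound in proportion to a revised estimate
    of available matter/energy (or of the efficiency of turning
    matter and energy into utility)."""
    return bound * new_estimate // old_estimate

# Evidence that the universe holds 100x more usable energy:
assert rescale_bound(10**80, 1, 100) == 10**82
```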