Manfred comments on Pascal's Mugging for bounded utility functions - Less Wrong

8 Post author: Benja 06 December 2012 10:28PM




Comment author: Manfred 06 December 2012 11:50:58PM 4 points

> If not, is there some fairly compelling reason to believe that it is true anyway?

The information required to describe your body is about an exabyte. Once you have a simulated body, getting answers out is trivial, so we'll call an exabyte an upper limit on what information you could tell someone. 10^18 ish. This means that if you have a utility function, you aren't able to imagine situations complicated enough to have a simplicity prior below 10^-10^18. That is, one part in 1 followed by 10^18 zeroes.
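A rough check of that bound in code (a sketch; I'm taking an exabyte as 10^18 bytes, i.e. 8×10^18 bits):

```python
import math

bits = 8e18  # an exabyte: 10^18 bytes = 8 * 10^18 bits

# A description that long gets a simplicity prior of roughly 2^-bits;
# convert that to a power of ten.
log10_prior = -bits * math.log10(2)

print(f"prior ~ 10^{log10_prior:.3g}")  # about 10^(-2.4e18), i.e. 10^-10^18 ish
```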

So, in Knuth up-arrow notation, how big a reward do I need to promise someone to totally overwhelm the admittedly large resolution offered by our prior probabilities? Let's do an example with tens. 10^10^18 looks a lot like 10^10^10, which is 10^^3. What happens if we go up to 10^^4? Then it's 10^10^10^10, or 10^(10^(10^10)), or 10^(10^^3). That is, it's ten to the number we were just considering. So just by incrementing the tower height, we've put the previous answer in the exponent. Offering rewards big enough to totally wash out our prior resolution turns out to be easy.
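The tower arithmetic is easy to sanity-check in code. 10^^4 itself is far too large to evaluate, so this sketch uses base 2, where the same pattern is visible:

```python
def up_arrow(base, height):
    """Knuth's base^^height: a right-associated tower of `height` copies of base."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

assert up_arrow(2, 3) == 16        # 2^(2^2)
assert up_arrow(2, 4) == 65536     # 2^(2^(2^2)) = 2^16
# Incrementing the height raises the previous tower into the exponent:
assert up_arrow(2, 5) == 2 ** up_arrow(2, 4)
```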

> You could have your utility function only count future events

That runs into problems - like you'd dump toxic waste in your house as long as you only got sick far in the future.

Comment author: AlexMennen 07 December 2012 01:00:16AM *  1 point

> The information required to describe your body is about an exabyte. Once you have a simulated body, getting answers out is trivial, so we'll call an exabyte an upper limit on what information you could tell someone. 10^18 ish. This means that if you have a utility function, you aren't able to imagine situations complicated enough to have a simplicity prior below 10^-10^18. That is, one part in 1 followed by 10^18 zeroes.

Hm, that's an interesting point. On the other hand, "Robin Hanson has suggested penalizing the prior probability of hypotheses which argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us. Since only one in 3^^^^3 people can be in a unique position to ordain the existence of at least 3^^^^3 other people who can't have a symmetrical effect on this one person, the prior probability would be penalized by a factor on the same order as the utility." (source: LW wiki; I couldn't find where Robin actually said that)

In other words, you can represent the hypothesis with so little information because you can cheat by referring to yourself with a small amount of information, no matter how much information it would take to specify you objectively.

> That runs into problems - like you'd dump toxic waste in your house as long as you only got sick far in the future.

Why?

Comment author: CarlShulman 07 December 2012 02:00:28AM *  1 point

Robin's argument relies on infinite certainty in a particular view of anthropic questions. It penalizes the probability significantly, but doesn't on its own defeat infinity concerns.

Comment author: paulfchristiano 07 December 2012 05:02:15AM *  2 points

If you use EDT, then Robin's argument cashes out as: "if there are 3^^^^3 people, then the effects of my decisions via the typical copies of me are multiplied up by O(3^^^^3), while the effects of my decisions via the lottery winner aren't." So then the effects balance out, and you are down to the same reasoning as if you accepted the anthropic argument. But now you get a similar conclusion even if you assign 1% probability to "I have no idea what's going on re: anthropic reasoning."

Do you think that works?

(Infinity still gets you into trouble with divergent sums, but this seems to work fine if you have a finite but large cap on the value of the universe.)

Coincidentally I just posted on this without having seen the OP.

Comment author: CarlShulman 07 December 2012 12:50:48PM *  0 points

Yes, but then you're acting on probabilities of ludicrous utilities again, relying on an empirical "stabilizing assumption" in Bostrom's language.

Comment author: Manfred 07 December 2012 02:12:36PM *  0 points

> That runs into problems - like you'd dump toxic waste in your house as long as you only got sick far in the future.

> Why?

Say that living 50 more years without getting sick was worth 90 utilons, and the maximum score was 100. This means that there are only 10 utilons left with which to describe the quality of your life between 50 years from now and the far future - being healthy 51 years from now is worth only 1/10 as much as being healthy now. So for each day you can use as you wish this year, you'd be willing to spend 10 days bedridden, or doing boring work, or in jail 50 years from now.

So in a word, procrastination. And because the utility function is actually shifting over time so that it stays 100-points-max, each point in time looks the same - there's no point where they'd stop procrastinating, once they started, unless the rate of work piling up changed.
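The trade-off above, with the example numbers made explicit (a sketch; the 90/100 split and the 50-year window are the hypothetical figures from the comment, and the exact ratio comes out to 9, which the comment rounds to 10):

```python
DAYS = 50 * 365            # days in the 50-year window

near_total = 90.0          # utilons for the next 50 healthy years
far_total = 100.0 - 90.0   # at most 10 utilons left for *everything* after

near_per_day = near_total / DAYS
# Generously give the entire far future the same number of days:
far_per_day = far_total / DAYS

print(round(near_per_day / far_per_day, 6))  # 9.0 -- roughly the 1-day-for-10 trade
```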

Comment author: AlexMennen 07 December 2012 05:14:38PM 1 point

That's a problem with any sort of discounting, but only counting future events in your utility function does not change that. It doesn't matter whether the next 50 years can get you 90 out of 100 available future utils or 0.09 out of 0.1 available future utils (where the other 99.9 were determined in the past); your behavior will be the same.

Comment author: Manfred 07 December 2012 06:34:50PM *  0 points

I agree for the typical implementation of discounting - though if someone just had a utility function that got non-exponentially smaller as the numbers on the calendar got bigger, you could see some different behavior.

Comment author: AlexMennen 07 December 2012 07:23:22PM *  0 points

Hm, you're right. For nonexponential discounting, future!you discounts differently than you want it to if it resets its utility, but not if it doesn't.

Comment author: Decius 08 December 2012 02:23:06AM *  0 points

An exabyte? Really? 8e18 bits?

(Most values to one sig fig)

Estimate 4e28 total nucleons in the human body (60 kg of nucleons); it takes about 100 bits to describe the number of nucleons. Each nucleon takes about two bits of information to specify its type (proton, neutron, antiproton, antineutron). Figure that a human is about 6'x2'x1'; that's about 1e35 x 4e34 x 2e34 Planck units. With roughly 8e103 unique locations within that rectangle, each nucleon needs about 345 bits to describe its location.

Without discussing momentum, it already takes about 10^31 bits to describe the location of every nucleon. Any benefit you gain from the distribution of matter not being uniform, you lose when describing what the distribution of matter is in a human.
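Redoing that arithmetic as a rough check (the body dimensions, nucleon count, and Planck-length conversion are the estimates above, all to about one significant figure):

```python
import math

nucleons = 4e28                   # ~60 kg of nucleons
count_bits = math.log2(nucleons)  # ~95 bits to state how many there are

type_bits = 2                     # proton / neutron / antiproton / antineutron

# 6' x 2' x 1' box measured in Planck lengths (~1.6e-35 m):
locations = 1.1e35 * 3.8e34 * 1.9e34   # ~8e103 distinct cells
location_bits = math.log2(locations)   # ~345 bits per nucleon

total_bits = nucleons * (type_bits + location_bits)
print(f"{total_bits:.2g}")  # ~1.4e+31 bits, before momentum enters
```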

With a discussion of momentum, there is no way to describe an arbitrary nucleon in fixed space, since there is no upper limit to the amount of momentum of a nucleon.

How did you estimate an exabyte for the information content of a human?

Comment author: Manfred 08 December 2012 12:29:34PM 2 points

> How did you estimate an exabyte for the information content of a human?

Googled it. If I had to guess why the number I found was so much smaller, it's probably because they had a scheme more like "for each molecule, describe it and place it with precision much better than its thermal vibration," and maybe a few bits to describe temperature.

But yes, even 10^your number of bits will be smaller than 10^^5. Even if we tracked each possible quantum state that's localized in my body, which would be exponentially more intensive than tracking individual particles, that just means we might have to (but probably not) bump up to a reward of 10^^6 in order to swamp my prior probabilities.

Comment author: Kindly 08 December 2012 03:21:55AM 2 points

Whatever the information content is, unless you have an argument that it's actually infinite, it's going to be smaller than 3^^^^3.

Comment author: Decius 08 December 2012 11:51:00PM 0 points

Describe momentum in fixed space. It is acceptable to have an upper limit on momentum, provided that that limit can be arbitrarily high. It is not acceptable for two different vectors to share the same description.

I'm not sure what the smallest unit of angular measure is, or if it is continuous. If it is continuous, there's no way to describe momentum in finite bits.