AlexMennen comments on Pascal's Mugging for bounded utility functions - Less Wrong

Post author: Benja 06 December 2012 10:28PM




Comment author: AlexMennen 07 December 2012 01:00:16AM  1 point

> The information required to describe your body is about an exabyte. Once you have a simulated body, getting answers out is trivial, so we'll call an exabyte an upper limit on what information you could tell someone. 10^18-ish. This means that if you have a utility function, you aren't able to imagine situations complicated enough to have a simplicity prior below 10^-10^18. That is, one part in 1 followed by 10^18 zeroes.

Hm, that's an interesting point. On the other hand, "Robin Hanson has suggested penalizing the prior probability of hypotheses which argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us. Since only one in 3^^^^3 people can be in a unique position to ordain the existence of at least 3^^^^3 other people who can't have a symmetrical effect on this one person, the prior probability would be penalized by a factor on the same order as the utility." (source: LW wiki; I couldn't find where Robin actually said that)

In other words, you can represent the hypothesis with so little information because you can cheat by referring to yourself with a small amount of information, no matter how much information it would take to specify you objectively.

> That runs into problems - like you'd dump toxic waste in your house as long as you only got sick far in the future.

Why?

Comment author: CarlShulman 07 December 2012 02:00:28AM  1 point

Robin's argument relies on infinite certainty in a particular view of anthropic questions. It penalizes the probability significantly, but doesn't on its own defeat infinity concerns.

Comment author: paulfchristiano 07 December 2012 05:02:15AM  2 points

If you use EDT, then Robin's argument cashes out as: "if there are 3^^^^3 people, then the effects of my decisions via the typical copies of me are multiplied up by O(3^^^^3), while the effects of my decisions via the lottery winner aren't." So then the effects balance out, and you are down to the same reasoning as if you accepted the anthropic argument. But now you get a similar conclusion even if you assign 1% probability to "I have no idea what's going on re: anthropic reasoning."
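The cancellation being described can be checked with a toy calculation, using a manageable stand-in N for 3^^^^3 (all numbers here are purely illustrative, not from the thread):

```python
# Hanson-style leverage penalty vs. the multiplied-up stakes, with a
# tractable stand-in N for 3^^^^3 (hypothetical, illustrative numbers).
N = 10**6

base_prior = 1e-3                 # prior on the mugger's story before any penalty
penalized_prior = base_prior / N  # only ~1/N people can occupy such a unique position

value_if_true = N                 # the decision affects O(N) people if the story is true
expected_stake = penalized_prior * value_if_true

print(expected_stake)  # 0.001 -- the factors of N cancel, leaving an ordinary-sized stake
```

However large N is made, the penalty and the stakes scale together, which is why the effects balance out rather than growing without bound.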

Do you think that works?

(Infinity still gets you into trouble with divergent sums, but this seems to work fine if you have a finite but large cap on the value of the universe.)

Coincidentally I just posted on this without having seen the OP.

Comment author: CarlShulman 07 December 2012 12:50:48PM  0 points

Yes, but then you're acting on probabilities of ludicrous utilities again, an empirical "stabilizing assumption" in Bostrom's language.

Comment author: Manfred 07 December 2012 02:12:36PM  0 points

> That runs into problems - like you'd dump toxic waste in your house as long as you only got sick far in the future.
>
> Why?

Say that living 50 more years without getting sick was 90 utilons, and the maximum score was 100. This means that there are only 10 utilons left with which to describe the quality of your life between 50 years from now and the far future - being healthy 51 years from now is worth only about 1/10 as much as being healthy now. So for each day you can use as you wish this year, you'd be willing to spend 10 days bedridden, or doing boring work, or in jail 50 years from now.

So in a word, procrastination. And because the utility function keeps shifting over time so that the maximum stays at 100 points, every point in time looks the same - there's no point at which you'd stop procrastinating once you'd started, unless the rate at which work piled up changed.

Comment author: AlexMennen 07 December 2012 05:14:38PM  1 point

That's a problem with any sort of discounting, and only counting future events in your utility function does not change that. It doesn't matter whether the next 50 years can get you 90 out of 100 available future utils or 0.09 out of 0.1 available future utils (where the other 99.9 were determined in the past); your behavior will be the same.
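The point here is that expected-utility choices are invariant under positive affine rescaling of the utility function, so the 90-of-100 and 0.09-of-0.1 versions rank every option identically. A minimal check, with hypothetical option names and utilities:

```python
# Choices under expected utility are unchanged by u -> a*u + b with a > 0.
# "90 of 100 remaining utils" vs. "0.09 of 0.1" is just the scaling a = 0.001.

options = {"work": 40.0, "rest": 70.0, "jail": 10.0}  # illustrative utilities, 0-100 scale

def rescale(utilities, a, b):
    """Apply a positive affine transformation to every option's utility."""
    return {name: a * u + b for name, u in utilities.items()}

scaled = rescale(options, a=0.001, b=0.0)  # the 0.1-cap version of the same function

best_original = max(options, key=options.get)
best_scaled = max(scaled, key=scaled.get)
print(best_original, best_scaled)  # rest rest -- the ranking is preserved
```

Because `max` only compares relative order, any shift or positive rescaling of the remaining utils leaves the chosen option, and hence behavior, unchanged.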

Comment author: Manfred 07 December 2012 06:34:50PM  0 points

I agree for the typical implementation of discounting - though if someone just had a utility function that got non-exponentially smaller as the calendar date got later, you could see some different behavior.

Comment author: AlexMennen 07 December 2012 07:23:22PM  0 points

Hm, you're right. For nonexponential discounting, future!you discounts differently than you want it to if it resets its utility, but not if it doesn't.