
Houshalter comments on Probabilities Small Enough To Ignore: An attack on Pascal's Mugging - Less Wrong Discussion

Post author: Kaj_Sotala 16 September 2015 10:45AM




Comment author: Houshalter 18 September 2015 10:16:45AM 0 points

This has nothing to do with bounded utility. Bounded utility means you don't care about any utility above a certain large amount. For example, if you care about saving lives and you save 1,000 lives, after that you just stop caring: no number of additional lives matters at all.

This solution allows for unbounded utility, because you can always care about saving more lives. You just won't take bets that could save huge numbers of lives but have very, very small probabilities.
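The rule being described can be sketched in a few lines of Python (the cutoff `eps` and the example payoffs are purely illustrative choices, not values from the post):

```python
def truncated_expected_utility(outcomes, eps=1e-10):
    """Expected utility that simply ignores outcomes whose probability
    falls below eps. Utility itself stays unbounded.

    `outcomes` is a list of (probability, utility) pairs; the cutoff
    eps is an illustrative choice, not a value from the post."""
    return sum(p * u for p, u in outcomes if p >= eps)

# An ordinary bet is evaluated normally...
print(truncated_expected_utility([(0.5, 2_000), (0.5, 0)]))       # 1000.0
# ...but a mugger's promised 10**100 lives at probability 1e-50 is
# ignored, leaving only the certain loss of the money handed over.
print(truncated_expected_utility([(1e-50, 10**100), (1.0, -5)]))  # -5.0
```

Note that no utility value is ever capped; only sufficiently improbable outcomes are dropped from the expectation.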

Comment author: entirelyuseless 18 September 2015 07:57:03PM 1 point

This isn't what I meant by bounded utility; I explained that in another comment. It treats utility as a real number and simply sets a limit on that number. It does not mean that at any point "you just stop caring."

Comment author: Houshalter 19 September 2015 03:53:03AM 0 points

If your utility has a limit, then you can't care about anything past that limit. Even a continuous approach to the limit doesn't work, because you care less and less about obtaining more utility as you get closer to it. You would value a 50% chance of saving 2 people the same as certainly saving 1 person, but not a 50% chance of saving 2,000 people over certainly saving 1,000.
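A toy saturating utility function makes this effect concrete (the bound and the scale of 100 below are arbitrary illustrative choices):

```python
import math

def bounded_utility(lives, bound=1.0, scale=100.0):
    """Asymptotically bounded utility: approaches `bound` as lives grow.
    The scale of 100 lives is an arbitrary illustrative choice."""
    return bound * (1 - math.exp(-lives / scale))

# Small numbers: a 50% chance of saving 2 is worth almost exactly the
# same as certainly saving 1, since utility is near-linear down here.
print(0.5 * bounded_utility(2), bounded_utility(1))        # ≈ 0.00990 0.00995

# Large numbers: a 50% chance of saving 2,000 is now far worse than
# certainly saving 1,000, because utility has nearly saturated.
print(0.5 * bounded_utility(2000), bounded_utility(1000))  # ≈ 0.5 0.99995
```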

Comment author: entirelyuseless 19 September 2015 11:45:40AM 1 point

Yes, that would be the effect in general: you would be less willing to take chances when the numbers involved are higher. That's why you wouldn't get mugged.

But that still doesn't mean that "you don't care." You still prefer saving 2,000 lives to saving 1,000 whenever the chances are equal; your preference between the two cases does not suddenly become equal, as you originally said.

Comment author: Houshalter 19 September 2015 01:12:01PM -1 points

If utility is strictly bounded, then you literally do not care whether you save 1,000 lives or 2,000.

You can fix that with an asymptote. Then you do have a preference for 2,000, but the preference is only very slight. You wouldn't take a 1% risk of losing 1,000 people, to save 2,000 people otherwise, even though the risk is very small and the gain is very large.

So it does fix Pascal's mugging, but causes a whole new class of issues.

Comment author: entirelyuseless 19 September 2015 07:11:57PM 0 points

Your understanding of "strictly bounded" is artificial, and not what I was talking about. I was talking about assigning a strict, numerical bound to utility. That does not prevent having an infinite number of values underneath that bound.

It would be silly to assign a bound and a function low enough that "You wouldn't take a 1% risk of losing 1,000 people, to save 2,000 people otherwise," if you meant this literally, with these values.

But it is easy enough to assign a bound and a function that result in the choices we actually make in terms of real world values. It is true that if you increase the values enough, something like that will happen. And that is exactly the way real people would behave, as well.

Comment author: Houshalter 20 September 2015 02:30:26AM 0 points

> Your understanding of "strictly bounded" is artificial, and not what I was talking about. I was talking about assigning a strict, numerical bound to utility. That does not prevent having an infinite number of values underneath that bound.

Isn't that the same as an asymptote, which I talked about?

> It would be silly to assign a bound and a function low enough that "You wouldn't take a 1% risk of losing 1,000 people, to save 2,000 people otherwise," if you meant this literally, with these values.

You can set the bound wherever you want; it's arbitrary. My point is that if you ever approach it, you start behaving weirdly. It is not a very natural fix, and it creates other issues.

> It is true that if you increase the values enough, something like that will happen. And that is exactly the way real people would behave, as well.

Maybe human utility functions are bounded; maybe they aren't. We don't know for sure, and assuming they are is a big risk. And even if they are bounded, that doesn't mean we should build that bound into an AI. If, somehow, it ever runs into a situation where it can help 3^^^3 people, it really should.

Comment author: entirelyuseless 20 September 2015 12:23:34PM 0 points

> If, somehow, it ever runs into a situation where it can help 3^^^3 people, it really should.

I thought the whole idea behind this proposal was that the probability of this happening is essentially zero.

If you think this is something with a reasonable probability, you should accept the mugging.

Comment author: Houshalter 21 September 2015 09:40:47AM 0 points

You were speaking about bounded utility functions. Not bounded probability functions.

The whole point of the Pascal's mugger scenario is that these scenarios aren't impossible. Solomonoff induction halves the probability of each hypothesis for each additional bit it takes to describe. This means the probability of different models decreases fairly rapidly, but not as rapidly as functions like 3^^^3 grow. So there are hypotheses that describe things that are 3^^^3 units large in much fewer than log(3^^^3) bits.

So the utility of hypotheses can grow much faster than their probability shrinks.
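A small sketch of why description length loses this race, using the computable 3^^3 rather than 3^^^3 (the `up_arrow` helper is an illustrative toy, good for small inputs only):

```python
import math

def up_arrow(a, n, b):
    """Knuth's up-arrow a ^^...^ b with n arrows (small inputs only)."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

# 3^^3 = 3^(3^3) = 3^27, already a 13-digit number.
v = up_arrow(3, 2, 3)
print(v)             # 7625597484987
print(math.log2(v))  # ≈ 42.8 bits just to write the number itself out.
# The expression up_arrow(3, n, 3) stays a handful of symbols no matter
# how many arrows are added, while log2 of its value explodes. So the
# value (and any utility proportional to it) outgrows the 2^-K
# probability penalty that Solomonoff induction charges for those bits.
```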

> If you think this is something with a reasonable probability, you should accept the mugging.

Well, the probability isn't reasonable. It's just not as unreasonably small as 3^^^3 is big.

But yes, you could bite the bullet and say that the expected utility is so big it doesn't matter what the probability is, and pay the mugger.

The problem is that expected utility doesn't even converge. There is a hypothesis that paying the mugger saves 3^^^3 lives, an even more unlikely hypothesis that not paying him will save 3^^^^3 lives, an even more complicated hypothesis that he will really save 3^^^^^3 lives, and so on. The expected utility of every action grows without bound, never converging on any finite value; ever more unlikely hypotheses totally dominate the calculation.
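The divergence can be illustrated with a toy model in which hypothesis k costs k extra bits (probability 2^-k) but promises a tower-of-exponentials utility (all values here are illustrative, not derived from the thread):

```python
def tower(height, base=3):
    """Exponential tower base^(base^(...)), `height` levels high."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

# Hypothesis k has probability 2^-k but promises utility the size of a
# height-k tower. The terms of the expected-utility sum grow instead of
# shrinking, so the series diverges: each term dwarfs the one before.
terms = [tower(k) / 2 ** k for k in range(1, 4)]
print(terms)  # [1.5, 6.75, 953199685623.375]
```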

Comment author: Jiro 22 September 2015 03:49:44PM 1 point

> Solomonoff induction halves the probability of each hypothesis for each additional bit it takes to describe.

See, I told everyone that people here say this.

Fake muggings with large numbers are more profitable to the mugger than fake muggings with small numbers, because a fake mugging with a larger number is more likely to convince a naive rationalist. And the profitability depends on the size of the number, not the number of bits in the number, which makes the likelihood of a large number being fake grow faster than the number of bits in the number.

Comment author: gjm 21 September 2015 10:40:58AM 1 point

> So there are hypotheses that describe things that are 3^^^3 units large in much fewer than log(3^^^3) bits.

> So the utility of hypotheses can grow much faster than their probability shrinks.

If utility is straightforwardly additive, yes. But perhaps it isn't. Imagine two possible worlds. In one, there are a billion copies of our planet and its population, all somehow leading exactly the same lives. In another, there are a billion planets like ours, with different people on them. Now someone proposes to blow up one of the planets. I find that I feel less awful about this in the first case than the second (though of course either is awful) because what's being lost from the universe is something of which we have a billion copies anyway. If we stipulate that the destruction of the planet is instantaneous and painless, and that the people really are living exactly identical lives on each planet, then actually I'm not sure I care very much that one planet is gone. (But my feelings about this fluctuate.)

A world with 3^^^3 inhabitants that's described by (say) no more than a billion bits seems a little like the first of those hypothetical worlds.

I'm not very sure about this. For instance, perhaps the description would take the form: "Seed a good random number generator as follows. [...] Now use it to generate 3^^^3 person-like agents in a deterministic universe with such-and-such laws. Now run it for 20 years." and maybe you can get 3^^^3 genuinely non-redundant lives that way. But 3^^^3 is a very large number, and I'm not even quite sure there's such a thing as 3^^^3 genuinely non-redundant lives even in principle.

Comment author: entirelyuseless 21 September 2015 12:42:22PM 0 points

Bounded utility functions effectively give "bounded probability functions," in the sense that you (more or less) stop caring about things with very low probability.

For example, if my maximum utility is 1,000, then my expected utility from something with a probability of one in a billion is at most 0.000001, an extremely small utility, so something that I will care about very little. The probability of the 3^^^3 scenarios may be more than one in 3^^^3, but it will still be small enough that a bounded utility function won't care about situations like that, at least not to any significant extent.
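The arithmetic here generalizes: with a utility bound B, an event of probability p can move expected utility by at most B·p, as a one-line sketch makes explicit (the 1,000 and one-in-a-billion figures are the example values from this comment):

```python
def max_contribution(bound, probability):
    """Largest amount an event of the given probability can move the
    expected utility, when utility is capped at `bound`."""
    return bound * probability

# With utility bounded at 1,000, a one-in-a-billion event shifts
# expected utility by at most about 0.000001.
print(max_contribution(1_000, 1e-9))
```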

That is precisely the reason that it will do the things you object to, if that situation comes up.

That is no different from pointing out that the post's proposal will reject a "mugging" even when it will actually cost 3^^^3 lives.

Both proposals have that particular downside. That is not something peculiar to mine.