eli_sennesh comments on Open thread, Dec. 15 - Dec. 21, 2014 - Less Wrong

2 Post author: Gondolinian 15 December 2014 12:01AM


Comment author: [deleted] 17 December 2014 01:36:44PM 3 points

Thought: I think Pascal's Mugging can't harm boundedly rational agents. If an agent is bounded in its computing power, then what it ought to do is draw some bounded number of samples from its mixture model of possible worlds, and evaluate the expected value of its actions over the sample rather than across the entire mixture. As the available computing power approaches infinity, the sample size approaches infinity, the sample more closely resembles the true distribution, and the sampled expected-utility calculation approaches the true expected utility across the infinite ensemble of possible worlds. But as long as we employ a finite sample, the more-probable worlds are so overwhelmingly more likely to be sampled that the boundedly rational agent will never waste its finite computing power on Pascal's Muggings: it will spend more computing power examining the possibility that it has spontaneously come into existence because an Infinite Improbability Drive was ignited in its near vicinity than on true Muggings.
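The argument can be sketched as a toy Monte Carlo simulation (the world model, probabilities, and payoffs below are illustrative assumptions, not anything from the comment): with any feasible sample size, a world of probability 10^-20 essentially never appears in the sample, so the sampled expected utility ignores it even though its payoff would dominate the exact expectation.

```python
import random

# Toy model (illustrative assumption): a "mundane" world with probability
# ~1, and a "mugging" world with vanishingly small probability but an
# astronomically large payoff for paying the mugger.
P_MUGGING = 1e-20
U = {
    "pay":    {"mundane": -5.0, "mugging": 1e30},
    "refuse": {"mundane":  0.0, "mugging": 0.0},
}

def sample_world(rng):
    return "mugging" if rng.random() < P_MUGGING else "mundane"

def sampled_expected_utility(action, n_samples, rng):
    # Monte Carlo estimate: average the action's utility over sampled worlds.
    return sum(U[action][sample_world(rng)] for _ in range(n_samples)) / n_samples

rng = random.Random(0)
n = 10_000  # any feasible sample size is far below 1 / P_MUGGING
eu_pay = sampled_expected_utility("pay", n, rng)
eu_refuse = sampled_expected_utility("refuse", n, rng)

# The exact expectation of "pay" is dominated by the mugging world
# (1e30 * 1e-20 = 1e10), but the sampled estimate essentially never sees it.
best = "pay" if eu_pay > eu_refuse else "refuse"
print(best)  # refuse
```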

Comment author: DanielLC 19 December 2014 06:33:40AM 1 point

There are other ways of taking Pascal's mugging into account; you shouldn't do it based on a lack of computing power. And if you aren't doing it based on a lack of computing power, why involve randomness at all? Why not work out what an agent would probably do after N samples, or something like that?

Comment author: [deleted] 19 December 2014 08:52:24PM 0 points

You shouldn't do that based on lack of computing power. And if you aren't doing it based on lack of computing power, why involve randomness at all?

Well, it's partly because sampling-based approximate inference algorithms are massively faster than exact marginalization over large numbers of nuisance variables. It's also because sampling-based inference makes all the expectations behave correctly in the limit, while still yielding boundedly, approximately correct reasoning even when computing power is very limited.

So we beat the Mugging while still being able to have an unbounded utility function: even in the limit, Mugging-level absurd possible worlds can dominate our decision-making only an overwhelmingly tiny fraction of the time (when the sample size exceeds the multiplicative inverse of their probability, which basically never happens in reality).
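The "sample size versus inverse probability" point can be checked with a little arithmetic (the probability 10^-20 is an illustrative assumption): the chance that a world of probability p shows up at least once in N i.i.d. samples is 1 - (1 - p)^N, roughly N * p when N * p is small.

```python
import math

# Chance that a world of probability p appears at least once among N
# i.i.d. samples: 1 - (1 - p)^N, computed stably with log1p/expm1
# because (1 - 1e-20) rounds to 1.0 in floating point.
def hit_probability(p, n):
    return -math.expm1(n * math.log1p(-p))

p = 1e-20  # illustrative probability of the mugger's scenario
for n in (10**3, 10**6, 10**12):
    print(f"N = 10^{round(math.log10(n))}: "
          f"P(mugging sampled at least once) ~ {hit_probability(p, n):.1e}")
```

Even with a trillion samples, the mugging world enters the calculation with probability about 10^-8; only when N approaches 1/p = 10^20 does it start to matter.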

Comment author: bogus 20 December 2014 12:18:29PM 0 points

Importance sampling wouldn't have you ignore Pascal's Muggings, though. At its most basic, 'sampling' is just a way of probabilistically computing an integral.
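Bogus's point can be illustrated with a sketch (world model, probabilities, and the proposal distribution are all assumptions for the example): importance sampling draws from a proposal q that over-represents the rare world and reweights each draw by p/q, so the mugging's huge contribution to the integral does show up in the estimate.

```python
import random

# Importance sampling: sample worlds from a proposal q that heavily
# over-represents the mugging world, then reweight by p/q. Unlike plain
# sampling from p, this surfaces the mugging's contribution.
P_MUGGING = 1e-20   # "true" probability (illustrative)
Q_MUGGING = 0.5     # proposal: sample the mugging world half the time
PAYOFF = {"mugging": 1e30, "mundane": -5.0}

def importance_estimate(n, rng):
    total = 0.0
    for _ in range(n):
        world = "mugging" if rng.random() < Q_MUGGING else "mundane"
        p = P_MUGGING if world == "mugging" else 1.0 - P_MUGGING
        q = Q_MUGGING if world == "mugging" else 1.0 - Q_MUGGING
        total += (p / q) * PAYOFF[world]  # reweighted contribution
    return total / n

rng = random.Random(0)
est = importance_estimate(100_000, rng)
# The exact expectation is 1e30 * 1e-20 + (-5.0) * (1 - 1e-20), i.e. about
# 1e10, and the importance-sampling estimate lands close to it.
print(est)
```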

Comment author: [deleted] 20 December 2014 05:31:31PM 0 points

Importance sampling wouldn't have you ignore Pascal's Muggings, though.

Well, they shouldn't be ignored outright, as long as they have some finite probability. The idea is that by sampling (importance or otherwise), we almost never give in to the Mugging; we almost always spend our finite computing power on strictly more probable scenarios, even though the Mugging (by definition) would dominate our expected-utility calculation in the case of a completed infinity.