CarlShulman comments on Pascal's Muggle: Infinitesimal Priors and Strong Evidence - Less Wrong

Post author: Eliezer_Yudkowsky 08 May 2013 12:43AM




Comment author: CarlShulman 06 May 2013 07:22:18PM 2 points

The problem is that you seem to be introducing one dubious piece to deal with another. Why is the hypothesis that those bullet points hold infinitesimally unlikely rather than very unlikely in the first place?

Comment author: Eliezer_Yudkowsky 06 May 2013 07:25:25PM 0 points

I think the bullet points as a whole are "very unlikely" (the universe as a whole has some Kolmogorov complexity, or equivalent complexity of logical axioms, which determines this); within that universe your being one of the non-hypercomputed sentients is infinitesimally unlikely, and then there's a vast update when you don't see the tag. How would you reason in this situation?
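A minimal Bayesian sketch of the update described above, with illustrative stand-in numbers (an "infinitesimal" prior has no exact float representation, and the tag setup is simplified from the post's bullet points):

```python
# Hedged sketch: within the hypothesized universe, being one of the
# non-hypercomputed (untagged) sentients is assigned a tiny prior,
# but not seeing the tag is evidence only untagged sentients can get.
p_untagged = 1e-30           # stand-in for "infinitesimally unlikely"
p_no_tag_given_untagged = 1.0
p_no_tag_given_tagged = 0.0  # hypercomputed sentients always see the tag

# Bayes' rule: P(untagged | no tag observed)
numerator = p_untagged * p_no_tag_given_untagged
denominator = numerator + (1 - p_untagged) * p_no_tag_given_tagged
posterior = numerator / denominator
print(posterior)  # 1.0 -- the "vast update" when you don't see the tag
```

Because the likelihood of seeing no tag given that you are tagged is zero here, the posterior goes all the way to 1 regardless of how small the prior was; a nonzero likelihood would merely make the update very large rather than total.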

Comment author: CarlShulman 06 May 2013 08:45:23PM *  3 points

OK, but if you're willing to buy all that, then the expected payoff in some kind of stuff for almost any action (setting aside opportunity costs and empirical stabilizing assumptions) is also going to be cosmically large, since you have some prior probability on conditions like those in the bullet pointed list blocking the leverage considerations.
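A toy expected-value calculation (hypothetical numbers that vastly understate a 3^^^3-scale payoff, which cannot be represented directly) showing how any nonzero prior on leverage-blocking conditions swamps ordinary payoffs:

```python
# Illustrative numbers only: once the leverage penalty can be blocked
# with any nonzero prior probability, the exotic payoff term dominates
# the expected value of almost any action.
p_block = 1e-20         # prior that the bullet-pointed conditions hold
huge_payoff = 1e100     # stand-in for a 3^^^3-scale payoff
ordinary_payoff = 1.0   # payoff if the exotic conditions do not hold

ev = p_block * huge_payoff + (1 - p_block) * ordinary_payoff
print(ev)  # ~1e80: the tiny-probability, huge-payoff term swamps the rest
```

The point is structural, not numerical: shrinking `p_block` by any finite factor is no defense, since the payoff stand-in can always be chosen to outgrow it.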

Comment author: Eliezer_Yudkowsky 06 May 2013 09:38:43PM 2 points

Hm. That does sound like a problem. I hadn't considered the problem of finite axioms giving you unboundedly large likelihood ratios over your exact situation. It seems like this ought to violate the Hansonian principle somehow but I'm not sure how to articulate it...

Maybe not seeing the tag updates against the probability that you're in a universe where non-tags are such a tiny fraction of existence, but this sounds like it also ought to replicate Doomsday-type arguments and such? Hm.

Comment author: CarlShulman 06 May 2013 09:50:20PM *  4 points

I hadn't considered the problem of finite axioms giving you unboundedly large likelihood ratios over your exact situation.

Really? People have been raising this (worlds with big payoffs and in which your observations are not correspondingly common) from the very beginning. E.g. in the comments of your original Pascal's Mugging post in 2007, Michael Vassar raised the point:

The guy with the button could threaten to make an extra-planar factory farm containing 3^^^^^3 pigs instead of killing 3^^^^3 humans. If utilities are additive, that would be worse.

and you replied:

Congratulations, you made my brain asplode.

Wei Dai and Rolf Nelson discussed the issue further in the comments there, and from different angles. And it is the obvious pattern-completion for "this argument gives me nigh-infinite certainty given its assumptions---now do I have nigh-infinite certainty in the assumptions?" i.e. Probing the Improbable issues. This is how I explained the unbounded payoffs issue to Steven Kaas when he asked for feedback on earlier drafts of his recent post about expected value and extreme payoffs (note how he talks about our uncertainty re anthropics and the other conditions required for Hanson's anthropic argument to go through).

Comment author: CarlShulman 07 May 2013 12:18:13AM *  1 point

It seems like this ought to violate the Hansonian principle somehow but I'm not sure how to articulate it...

Hanson endorses SIA. So he would multiply the possible worlds by the number of copies of his observations therein. A world with 3^^^3 copies of him would get a 3^^^3 anthropic update. A world with only one copy of his observations that can affect 3^^^^3 creatures with different observations would get no such probability boost.

Or if one was a one-boxer on Newcomb's Problem, one might think of the utility of ordinary payoffs in the first world as multiplied by the 3^^^3 copies who get them.
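The SIA-style update described above can be sketched numerically (a real 3^^^3 copy count cannot be represented, so 10**9 and the priors below are illustrative stand-ins):

```python
# Hedged sketch of SIA: a world's posterior weight is proportional to
# its prior times the number of copies of your exact observations in it.
worlds = {
    "many-copies world": {"prior": 0.5, "copies": 10**9},
    "one-copy, huge-leverage world": {"prior": 0.5, "copies": 1},
}

weights = {name: w["prior"] * w["copies"] for name, w in worlds.items()}
total = sum(weights.values())
posteriors = {name: wt / total for name, wt in weights.items()}
# Nearly all posterior mass lands on the many-copies world; the world
# with one copy of your observations gets no anthropic boost, however
# many creatures it lets you affect.
```

Swapping in 3^^^3-scale copy counts only makes the imbalance more extreme, which is the point of the comment above: the anthropic update tracks copies of your observations, not the size of the payoff you could influence.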