drnickbone comments on Pascal's Muggle: Infinitesimal Priors and Strong Evidence

Post author: Eliezer_Yudkowsky, 08 May 2013 12:43AM


Comment author: drnickbone, 09 May 2013 05:13:02PM, 5 points

> This whole leverage ratio idea is very obviously an intelligent kludge / patch / workaround

I'm not sure that the kludge works anyway, since there are still some "high impact" scenarios which don't get kludged out. Let's imagine the mugger's pitch is as follows. "I am the Lord of the Matrix, and guess what: you're in it! I'm in the process of running a huge number of simulations of human civilization, in series, and in each run of the simulation I am making a very special offer to some carefully selected people within it. If you are prepared to hand over $5 to me, I will kindly prevent one dust speck from entering the eye of one person in each of the next googolplex simulations that I run! Doesn't that sound like a great offer?"

Now, rather naturally, you're going to tell him to get lost. And in the worlds where there really is a Matrix Lord, and he's telling the truth, the approached subjects almost always tell him to get lost as well (the Lord is careful in whom he approaches), which means that googolplexes of preventable dust specks hit googolplexes of eyes. Each rejection of the offer leads to lower total utility than accepting it would. And if those worlds have measure > 1/googolplex, then refusing produces, on the face of it, a net loss in expected utility. More likely, we're just going to get non-convergent expected utilities again.
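To make the arithmetic explicit (a rough sketch; I'll write \(G = 10^{10^{100}}\) for a googolplex, \(p\) for the measure of worlds with a truthful Lord, and \(u_{\text{speck}}\), \(u_{\$5}\) for the respective disutilities, symbols I'm introducing here rather than anything from the post): refusing is a net expected loss exactly when

$$
p \cdot G \cdot u_{\text{speck}} \;>\; u_{\$5}
\quad\Longleftrightarrow\quad
p \;>\; \frac{u_{\$5}}{u_{\text{speck}}} \cdot \frac{1}{G},
$$

and since \(u_{\$5}/u_{\text{speck}}\) is a mundane finite ratio, that threshold sits within a hair of \(1/G\). Worse, the Lord can always quote \(G^2\) simulations instead of \(G\), so summing over his possible pitches is what drives the expectations non-convergent.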

The general issue is that the causal structure of the hypothetical world is highly linear: a non-negligible proportion of nodes (perhaps 1 in a billion) really do have the ability to affect a colossal number of other nodes. So the high-utility outcome doesn't get suppressed by a locational penalty.
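As a back-of-envelope version of that (again a sketch, with \(10^{-9}\) standing in for the 1-in-a-billion figure): even after applying a locational penalty, your prior probability of occupying an offer-receiving node should match the actual frequency of such nodes in the hypothetical world, so the expected benefit of paying is roughly

$$
\mathbb{E}[\text{specks prevented}] \;\approx\; p_{\text{world}} \times 10^{-9} \times G,
$$

which dwarfs the disutility of $5 unless \(p_{\text{world}}\) is itself of order \(1/G\). The penalty scales with the number of privileged positions, not with the googolplex of affected nodes, so it can't do the suppressing work it was introduced for.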