Will_Newsome comments on Counterfactual Mugging - Less Wrong

52 Post author: Vladimir_Nesov 19 March 2009 06:08AM


Comment author: Will_Newsome 07 August 2010 12:07:22AM 2 points [-]

If I found myself in this kind of scenario, it would imply that I was very wrong about how I reason about anthropics in an ensemble universe (as with Pascal's mugging, or any situation where an agent has enough computing power to control so much of my measure that I find myself in a contrived philosophical experiment). In fact, I would be so surprised to find myself in such a situation that I would question the reasoning that led me to think one-boxing was the best course of action in the first place, because somewhere along the way my model became very confused. (I'd still one-box, but it would seem less obvious after taking into account the huge amount of previously unexpected structural uncertainty my model of the world would suddenly have to deal with.)

Comment author: [deleted] 08 October 2010 04:26:20AM 1 point [-]

> If I found myself in this kind of scenario then it would imply that I was very wrong about how I reason about anthropics in an ensemble universe (as with Pascal's mugging or any sort of situation where an agent has enough computing power to take control of that much of my measure such that I find myself in a contrived philosophical experiment).

I see some reasons for this perspective, but I'm not sure.

On the one hand, I don't know much about the distribution of agent preferences in an ensemble universe. On the other hand, there may be enough long towers of nested simulations of agents like us to compensate for that uncertainty.