Qiaochu_Yuan comments on DRAFT:Ethical Zombies - A Post On Reality-Fluid - Less Wrong Discussion

0 Post author: MugaSofer 09 January 2013 01:38PM

Comment author: Qiaochu_Yuan 09 January 2013 09:21:28PM *  2 points [-]

(If you disagree, consider the time between Omega starting the simulation and providing the cake. What subjective odds should she give for receiving cake?)

I don't currently accept the validity of this kind of anthropic reasoning (actually I am confused about anthropic reasoning in general). Is there an LW post where it is thoroughly defended?

Comment author: Vladimir_Nesov 09 January 2013 09:49:19PM *  3 points [-]

That anthropic reasoning doesn't work, or doesn't make sense in many cases, is closer to the standard position on LW (for example). The standard trick for making anthropic problems less confusing is to pose them as decision problems instead of as problems about probabilities. This way, when there appears to be no natural way of assigning probabilities (to instances of an agent) that's useful for understanding the situation, we are not forced to endlessly debate which way of assigning them is "the right one" anyway.

Comment author: MugaSofer 10 January 2013 09:50:20AM *  -2 points [-]

anthropic reasoning

You keep using that word. I don't think it means what you think it means.

Seriously, though, what do you think the flaw in the argument is, as presented in your quote?

Comment author: Qiaochu_Yuan 11 January 2013 12:29:01AM *  2 points [-]

I think I'm using "anthropic" in a way consistent with the end of the first paragraph of Fundamentals of kicking anthropic butt (to refer to situations in which agents get duplicated and/or there is some uncertainty about what agent an agent is). If there's a more appropriate word then I'd appreciate knowing what it is.

My first objection is already contained in Vladimir_Nesov's comment: it seems like in general anthropic problems should be phrased entirely as decision problems and not as problems involving the assignment of odds. For example, Sleeping Beauty can be turned into two decision problems: one in which Sleeping Beauty is trying to maximize the expected number of times she is right about the coin flip, and one in which Sleeping Beauty is trying to maximize the probability that she is right about the coin flip. In the first case, Sleeping Beauty's optimal strategy is to guess tails (a tails flip means she is woken and asked twice, so a correct tails guess counts twice), whereas in the second case it doesn't matter what she guesses. In a problem where there's no anthropic funniness, there's no difference between trying to maximize the expected number of times you're right and trying to maximize the probability that you're right, but with anthropic funniness there is.
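The arithmetic behind the two objectives can be made explicit. A minimal sketch (function names and framing are mine, not from the thread), assuming the standard setup of one awakening on heads and two on tails with a fair coin:

```python
# Sleeping Beauty as two different decision problems.
# "strategy" is the fixed guess she gives at every awakening.

def expected_correct_guesses(strategy):
    """Expected number of awakenings at which the guess is correct.
    Heads (prob 1/2): woken once. Tails (prob 1/2): woken twice."""
    heads_correct = 1 if strategy == "heads" else 0
    tails_correct = 2 if strategy == "tails" else 0
    return 0.5 * heads_correct + 0.5 * tails_correct

def prob_right_about_flip(strategy):
    """Probability, per experiment, that the guess matches the flip.
    A fixed guess matches a fair coin half the time, either way."""
    return 0.5

print(expected_correct_guesses("tails"))  # 1.0
print(expected_correct_guesses("heads"))  # 0.5
print(prob_right_about_flip("tails"))     # 0.5
```

Guessing tails doubles the expected count of correct answers while leaving the per-experiment probability of being right unchanged, which is the divergence the comment is pointing at.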

My second objection is that I don't understand how an agent could be convinced of the truth of a sufficiently bizarre premise. (I have the same issue with Pascal's mugging, torture vs. dust specks, and Newcomb's problem.) In this particular case, I don't understand how I could be convinced that another agent really has the capacity to perfectly simulate me. This seems like exactly the kind of thing that agents would be incentivized to lie about in order to trick me.

Comment author: Wei_Dai 19 January 2013 12:56:50AM 3 points [-]

My second objection is that I don't understand how an agent could be convinced of the truth of a sufficiently bizarre premise. (I have the same issue with Pascal's mugging, torture vs. dust specks, and Newcomb's problem.) In this particular case, I don't understand how I could be convinced that another agent really has the capacity to perfectly simulate me. This seems like exactly the kind of thing that agents would be incentivized to lie about in order to trick me.

You may eventually obtain the capacity to perfectly simulate yourself, in which case you'll run into similar issues. I used Omega in a scenario a couple of years ago that's somewhat similar to the OP's, but really Omega is just a shortcut for establishing a "clean" scenario that's relatively free of distractions so we can concentrate on one specific problem at a time. There is a danger of using Omega to construct scenarios that have no real-world relevance, and that's something that we should keep in mind, but I think it's not the case in the examples you gave.

Comment author: ESRogs 11 January 2013 02:35:30AM 2 points [-]

How would you characterize your issue with Pascal's mugging? The dilemma is not supposed to require being convinced of the truth of the proposition, just assigning it a non-zero probability.
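The force of the dilemma is just expected-utility arithmetic: a naive calculation lets an astronomically large claimed payoff swamp an astronomically small credence. A toy illustration (all numbers are made up for the sketch, not from the thread):

```python
# Pascal's mugging as naive expected-utility arithmetic.
# Illustrative numbers only: the point is that any non-zero credence
# can be overwhelmed by a sufficiently large claimed payoff.
p = 1e-20       # tiny but non-zero probability assigned to the mugger's claim
payoff = 1e30   # utility the mugger promises if paid
cost = 5        # utility cost of paying up

expected_gain = p * payoff - cost
print(expected_gain > 0)  # True: naive EU says pay the mugger
```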

Comment author: Qiaochu_Yuan 11 January 2013 02:55:43AM 3 points [-]

Hmm. You're right. Upon reflection, I don't have a coherent rejection of Pascal's mugging yet.

Comment author: ESRogs 11 January 2013 04:13:22AM 1 point [-]

Gotcha. Your posts have seemed pretty thoughtful so far so I was surprised by / curious about that comment. :)

Comment author: OrphanWilde 09 January 2013 09:51:30PM -2 points [-]

If it helps you avoid fighting the hypothetical, Omega already knows what her answer will be, and has already acted on it.