Wei_Dai comments on DRAFT:Ethical Zombies - A Post On Reality-Fluid - Less Wrong Discussion

0 Post author: MugaSofer 09 January 2013 01:38PM

Comment author: Wei_Dai 19 January 2013 12:56:50AM 3 points

My second objection is that I don't understand how an agent could be convinced of the truth of a sufficiently bizarre premise. (I have the same issue with Pascal's mugging, torture vs. dust specks, and Newcomb's problem.) In this particular case, I don't understand how I could be convinced that another agent really has the capacity to perfectly simulate me. This seems like exactly the kind of thing that agents would be incentivized to lie about in order to trick me.

You may eventually obtain the capacity to perfectly simulate yourself, in which case you'll run into similar issues. I used Omega in a scenario a couple of years ago that's somewhat similar to the OP's, but really Omega is just a shortcut for establishing a "clean" scenario that's relatively free of distractions so we can concentrate on one specific problem at a time. There is a danger of using Omega to construct scenarios that have no real-world relevance, and that's something that we should keep in mind, but I think it's not the case in the examples you gave.