
lackofcheese comments on Simulation argument meets decision theory - Less Wrong Discussion

14 points · Post author: pallas · 24 September 2014 10:47AM



Comment author: lackofcheese · 24 September 2014 04:33:58PM · 1 point

I think the issue may be that the "egoistic" utility functions are incoherent in this context, because you're actually trying to compare the utility functions of two different agents as if they belonged to a single agent.

Let's say, for example, that X is a paperclip maximiser who gets either 10 paperclips or 9 paperclips, and each X* is a human who either saves 2 million lives or 1 million lives.

If you don't know whether you're X or X*, how can you compare 10 paperclips to 2 million lives?
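A quick numerical sketch of the problem (my own illustration, not from the original comment): to compute an expected utility while uncertain of your identity, you're forced to pick an exchange rate between paperclips and lives, and the thought experiment gives you no principled way to choose one. I'm assuming here that a single action trades off one extra paperclip for X against a million extra lives for X*, and that the agent assigns probability `p` to being X.

```python
def expected_utility(action, p, k):
    """Expected utility under identity uncertainty.

    If the agent is X (probability p):
        "greedy" -> 10 paperclips, "modest" -> 9 paperclips.
    If the agent is X* (probability 1 - p):
        "greedy" -> saves 1 million lives, "modest" -> saves 2 million.
    k is the assumed utility of one paperclip, measured in lives --
    the conversion factor the egoistic framing never pins down.
    """
    if action == "greedy":
        return p * 10 * k + (1 - p) * 1_000_000
    else:  # "modest"
        return p * 9 * k + (1 - p) * 2_000_000

p = 0.5
# With a small k the lives dominate and "modest" wins; with a huge k the
# single extra paperclip dominates and "greedy" wins. The ranking of
# actions depends entirely on the arbitrary choice of k.
assert expected_utility("modest", p, k=1.0) > expected_utility("greedy", p, k=1.0)
assert expected_utility("greedy", p, k=1e7) > expected_utility("modest", p, k=1e7)
```

The point of the sketch is that nothing in the setup determines `k`: any decision you derive is an artifact of the conversion factor you smuggled in, which is exactly the incoherence described above.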