Mitchell_Porter comments on The Anthropic Trilemma - Less Wrong

Post author: Eliezer_Yudkowsky 27 September 2009 01:47AM


Comment author: Mitchell_Porter 27 September 2009 05:42:49AM

Let's explore this scenario in computational rather than quantum language.

Suppose a computer with infinite working memory runs a virtual world with a billion inhabitants, each of whom has a private computational workspace consisting of an infinite subset of the total memory.

The computer is going to run an unusual sort of 'lottery' in which a billion copies of the virtual world are created, and in each one a different inhabitant gets to be the lottery winner. So already the total population after the lottery is not a billion but a billion billion, spread across a billion worlds.

Virtual Yu'el perceives that he could use his workspace as described by Eliezer: pause himself, then have a single copy restored from backup if he didn't win the lottery, but have a trillion copies made if he did. So first he wonders whether it's correct to see this as making his victory in the lottery all but certain. Then he notices that if, after winning, he merges those copies back into one, the near-certain victory turns back into near-certain loss, and he becomes really worried about the fundamental soundness of his decision procedures, his understanding of probability, and so on.
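The counting behind Yu'el's worry can be made explicit. The model below is a minimal sketch under my own illustrative assumption: "probability of winning" is read off as the fraction of Yu'el-copies, across all post-lottery worlds, that experienced a win. It is not a claim about the right anthropic theory, just the bookkeeping that makes the trick look attractive.

```python
# Naive observer-counting sketch (illustrative assumption, not from the post):
# sample uniformly over all copies of Yu'el across the billion worlds.

N = 10**9        # inhabitants per world, and number of post-lottery worlds
COPIES = 10**12  # copies made of Yu'el in the one world where he wins

# Yu'el exists once in each of the N worlds and wins in exactly one.
losing_yuels = N - 1      # a single copy restored in each losing world
winning_yuels = COPIES    # a trillion copies in the winning world

# "Probability of winning" as the fraction of Yu'el-copies that won:
p_win_before_merge = winning_yuels / (winning_yuels + losing_yuels)

# Merging the trillion winners back into one copy undoes the boost:
p_win_after_merge = 1 / (1 + losing_yuels)

print(round(p_win_before_merge, 6))  # ~0.999001
print(p_win_after_merge)             # 1e-09
```

On this way of counting, the copy-then-merge sequence really does swing the number from nearly 1 back down to one in a billion, which is the apparent paradox.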

Stating the scenario in these concrete terms brings out, for me, aspects that aren't so obvious in the original statement. For example: if everyone else has the same option (the trillionfold copying), Yu'el is no longer favored. Is the trilemma partly due to supposing that only one lottery participant has this radical existential option? Also, it seems important to keep in sight the other worlds, where Yu'el loses. By focusing on that one special world, where we go from a billion people to a trillion people (mostly Yu'els) and then back to a billion, we are not even thinking about the full population elsewhere.
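The symmetry point can also be checked by counting. Again this is a sketch under my own assumptions about how to tally copies: if every inhabitant performs the same trillionfold copying in the one world where they win, no participant ends up over-represented in the total population.

```python
# Symmetric case (illustrative counting, my own assumption): everyone
# applies the trillionfold-copying trick in their own winning world.

N = 10**9        # inhabitants per world, and number of post-lottery worlds
COPIES = 10**12  # copies made of each world's winner

# Each post-lottery world holds a trillion winner-copies plus N-1 losers.
per_world = COPIES + (N - 1)
total_population = N * per_world

# Any single participant has COPIES copies in their one winning world,
# plus one loser-copy in each of the other N-1 worlds; the same total
# as every other participant.
per_person = COPIES + (N - 1)

# Exact integer check that each participant's share is exactly 1/N:
print(per_person * N == total_population)  # True
```

Everyone's share of the total population comes out to exactly one in a billion, so the "advantage" of the trick only exists if Yu'el is the sole participant with access to it.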

I think a lot of the assumptions going into this thought experiment as originally proposed are simply wrong. But there might be a watered-down version, involving copies of decision-making programs on a single big computer, etc., to which I could not object. The question for me is how much of the impression of paradox will remain after the problem has been diluted in this fashion.