Mass_Driver comments on Poll: What value extra copies? - Less Wrong

5 [deleted] 22 June 2010 12:15PM


Comment author: Mass_Driver 22 June 2010 02:54:07PM 1 point [-]

I would spend one day's hard labor (8-12 hours) to create one copy of me, just because I'm uncertain enough about how the multiverse works that having an extra copy would be vaguely reassuring. I might do another couple of hours on another day for copy #3. After that I think I'm done.

Comment author: Jonathan_Graehl 22 June 2010 05:38:32PM *  3 points [-]

I'm interested, but suspicious of fraud - how do I know the copy really exists?

Also, it seems that, as posed, my copies will live in identical universes and have identical futures as well as present states - i.e. I'm making an exact copy of everyone and everything else as well. If that's the offer, then I'd need more information about the implications of universe cloning. If there are none, then the question seems like nonsense to me.

I was only initially interested at the thought of my copies diverging, even without interaction (I suppose MWI implies this is what goes on behind the scenes all the time).

Comment author: DanArmak 22 June 2010 06:32:05PM 0 points [-]

If the other universe(s) are simulated inside our own, then there may be relevant differences between the simulating universe and the simulated ones.

In particular, how do we create universes identical to the 'master copy'? The easiest way is to observe our universe, and run the simulations a second behind, reproducing whatever we observe. That would mean decisions in our universe control events in the simulated worlds, so they have different weights under some decision theories.

Comment author: Jonathan_Graehl 24 June 2010 09:11:39PM *  0 points [-]

I assumed we couldn't observe our copies, because if we could, then they'd be observing them too. In other words, somebody's experience of observing a copy would have to be fake - just a view of their present reality and not of a distinct copy.

This all follows from the setup, where there can be no difference between a copy (+ its environment) and the original. It's hard to think about what value that has.

Comment author: DanArmak 22 June 2010 06:29:53PM *  0 points [-]

If you're uncertain about how the universe works, why do you think that creating a clone is more likely to help you than to harm you?

Comment author: orthonormal 22 June 2010 10:32:23PM 2 points [-]

I assume Mass_Driver is uncertain among several specifiable classes of "ways the multiverse could work" (with some probability left for "none of the above"), and that in the majority of the classified hypotheses, having a copy either helps you or doesn't hurt.

Thus, on balance, the expected value of copying should be positive, even considering that some of the "none of the above" possibilities might penalize copying.
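The argument above is just an expected-value calculation over hypothesis classes. A toy sketch, with entirely made-up probabilities and utilities (nothing in the thread specifies numbers):

```python
# Toy expected-value calculation for "should I make a copy?"
# All probabilities and utilities below are illustrative assumptions.

# (probability of hypothesis class, utility of an extra copy under that class)
hypotheses = [
    (0.4, 1.0),    # class where an extra copy helps (e.g. boosts anthropic measure)
    (0.4, 0.0),    # class where it has no effect
    (0.2, -0.5),   # "none of the above", some of which might penalize copying
]

# Expected value is the probability-weighted sum of utilities.
expected_value = sum(p * u for p, u in hypotheses)
print(expected_value)  # 0.4*1.0 + 0.4*0.0 + 0.2*(-0.5) = 0.3
```

On these (assumed) numbers the answer comes out positive even with a mildly negative residual class; DanArmak's objection below amounts to disputing the utilities assigned to the first row, not the arithmetic.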

Comment author: DanArmak 22 June 2010 11:36:36PM 0 points [-]

I understand that that's what Mass_Driver is saying. I'm asking: why think that?

Comment author: orthonormal 23 June 2010 02:27:43AM *  2 points [-]

Because scenarios where having an extra copy hurts seem... engineered, somehow. Short of having a deity or Dark Lord of the Matrix punish those with so much hubris as to copy themselves, I have a hard time imagining how it could hurt, while I can easily think of simple rules for anthropic probabilities in the multiverse under which it would (1) help or (2) have no effect.

I realize that the availability heuristic is not something in which we should repose much confidence on such problems (thus the probability mass I still assign to "none of the above"), but it does seem to be better than assuming a maxentropy prior on the consequences of all novel actions.

Comment author: Mass_Driver 23 June 2010 04:54:44AM 1 point [-]

I think, in general, the LW community often errs by placing too much weight on a maxentropy prior as opposed to letting heuristics or traditions have at least some input. Still, it's probably an overcorrection that comes in handy sometimes; the rest of the world massively overvalues heuristics and tradition, so there are whole areas of possibility-space that get massively underexplored, and LW may as well spend most of its time in those areas.

Comment author: wedrifid 23 June 2010 05:57:07AM 1 point [-]

You could be right about the LW tendency to err... but this thread isn't the place where it springs to mind as a possible problem! I am almost certain that neither the EEA (environment of evolutionary adaptedness) nor our current circumstances are such that heuristics and tradition are likely to give useful decisions about clone trenches.

Comment author: DanArmak 23 June 2010 07:50:40AM 0 points [-]

Well, short of having a deity reward those who copy themselves with extra afterlife, I'm having difficulty imagining how creating non-interacting identical copies could help, either.

The problem with the availability heuristic here isn't so much that it's not a formal logical proof. It's that it fails to convince me, because I don't happen to have the same intuition about it, which is why we're having this conversation in the first place.

I don't see how you could assign positive utility to truly novel actions without being able to say something about their anticipated consequences. But non-interacting copies are pretty much specified to have no consequences.

Comment author: orthonormal 24 June 2010 05:45:33AM 0 points [-]

Well, in my understanding of the mathematical universe, this sort of copying could be used to change anthropic probabilities without the downsides of quantum suicide. So there's that.

Robin Hanson probably has his own justification for lots of noninteracting copies (assuming that was the setup presented to him as mentioned in the OP), and I'd be interested to hear that as well.