In the future, it may be possible for you to scan your own brain and create copies of yourself. With the power of a controllable superintelligent AI, it may even be possible to create very accurate instances of your past self (and you could take action today or in the near future to make this easier by using lifelogging tools such as these glasses).
So I ask Less Wrong: how valuable do you think it is to create extra identical, non-interacting copies of yourself? (Each copy would exist in its own computational world, identical to yours, with no copy-copy or world-world interaction.)
For example, would you endure a day's hard labor to create an extra self-copy? A month? A year? Consider the hard labor to be digging a trench with a pickaxe, with a harsh taskmaster who can punish you if you slack off.
Do you think having 10 copies of yourself made in the future is 10 times as good as having 1 copy made? Or does the value of each additional copy diminish, so that your utility grows sub-linearly in the number of copies?
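To make "sub-linear" concrete (the functional forms here are purely illustrative, not a claim about anyone's actual values): if a single copy is worth $u$ to you, a linear valuation and a logarithmic valuation of $n$ copies would look like

$$U_{\text{lin}}(n) = n\,u, \qquad U_{\text{log}}(n) = u\,\log_2(n+1),$$

so under the logarithmic rule 10 copies are worth only about $3.5\,u$ rather than $10\,u$.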
Last time I spoke to Robin Hanson, he was extremely keen on having a lot of copies of himself created (though I think he was prepared for these copies to be emulant-wage-slaves).
I have created a poll for LW to air its views on this question; in my next post I'll outline and defend my answer, and lay out some fairly striking implications that this has for existential risk mitigation.
If you are on a hardcore-altruism trip, you may substitute any person or entity that you find more valuable than your own good self: would you sacrifice a day of this entity's life for an extra copy? A year? Etc.
UPDATE: Wei Dai has asked this question before, in his post "The moral status of independent identical copies". His post focuses more on lock-step copies that remain identical over time, whereas here I am interested in both lock-step identical copies and statistically identical copies (a statistically identical copy has the same probability distribution over futures as you do).
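To spell that parenthetical out slightly more formally (this formalisation is mine, not Wei Dai's): writing $F_Y$ for the random variable "your future" and $F_C$ for the copy's, a statistically identical copy satisfies

$$\Pr(F_C = f) = \Pr(F_Y = f) \quad \text{for every possible future } f,$$

whereas a lock-step copy satisfies the stronger condition that $F_C = F_Y$ with certainty, moment by moment.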
I'm still tentatively convinced that existence is what mathematical possibility feels like from the inside, and that creating an identical non-interacting copy of oneself is (morally and metaphysically) identical to doing nothing. Considering that, plus the difficulty* of estimating which of a potentially infinite number of worlds we're in (including many in which the structure of your brain is instantiated but everything you observe is hallucinated or "scripted", similar to Boltzmann brains), I'm beginning to worry that a fully fact-based consequentialism would degenerate into emotivism, or at least that it must incorporate a significant emotivist component in determining who and what is terminally valued.
* E. T. Jaynes says we can't do inference in infinite sets except those that are defined as well-behaved limits of finite sets, but if we're living in an infinite set, then there has to be some right answer, and some best method of approximating it. I have no idea what that method is.
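(The standard illustration of Jaynes's point, as I understand it: ask what fraction of the natural numbers are even. Counting up to $N$ and taking the limit gives

$$\lim_{N\to\infty} \frac{\left|\{n \le N : n \text{ even}\}\right|}{N} = \frac{1}{2},$$

but enumerate the naturals in a different order, say two odd numbers for every even one, and the same procedure gives $1/3$; the answer lives in the limiting procedure, not in the infinite set itself.)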
So. My moral intuition says that creating an identical non-interacting copy of me, with no need for or possibility of it serving as a backup, is valued at 0. As for consequentialism... if this were valued even slightly, I'd get one of those quantum random number generator dongles, have it generate my desktop wallpaper every few seconds (thereby constantly creating zillions of new slightly-different versions of my brain in their own Everett branches), and start raking in utilons. Considering that this seems not just emotionally neutral but useless to me, my consequentialism seems to agree with my emotivist intuition.
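(Back-of-the-envelope version of that reductio, with made-up numbers: if each new branch-copy were worth some fixed $\epsilon > 0$ utilons and the wallpaper trick spawned $k$ fresh branches per second, then after $t$ seconds I'd have banked at least

$$U(t) \ge \epsilon\,k\,t \to \infty \quad \text{as } t \to \infty,$$

so any strictly positive per-copy value blows up, and the fact that the trick feels worthless to me is what pushes my per-copy valuation to 0.)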
If this is in some sense true, then we have an infinite ethics problem of awesome magnitude.
Though to be honest, I am having trouble seeing what the difference is between this statement being true and being false.