In the future, it may be possible for you to scan your own brain and create copies of yourself. With the power of a controllable superintelligent AI, it may even be possible to create very accurate instances of your past self (and you could take action today or in the near future to make this easier by using lifelogging tools such as these glasses).
So I ask Less Wrong: how valuable do you think it is to create extra identical, non-interacting copies of yourself? (Each copy would exist in its own computational world, identical to yours, with no copy-copy or world-world interaction.)
For example, would you endure a day's hard labor to create an extra self-copy? A month? A year? Consider the hard labor to be digging a trench with a pickaxe, with a harsh taskmaster who can punish you if you slack off.
Do you think having 10 copies of yourself made in the future is 10 times as good as having 1 copy made? Or does the marginal value of each additional copy drop off, so that your utility grows sub-linearly in the number of copies?
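To make the sub-linearity question concrete, here is one way it could be formalized (the functional forms are purely illustrative, not part of the poll): write $U(n)$ for the value you assign to $n$ copies existing. Linear utility in copies means

$$U(n) = n \cdot U(1),$$

while a sub-linear alternative might be logarithmic,

$$U(n) = U(1) \cdot \log_2(n + 1),$$

under which 10 copies are worth about $\log_2 11 \approx 3.46$ times as much as one copy, rather than 10 times. Answering the hard-labor question at several values of $n$ is one way to probe which regime your intuitions fall in.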
Last time I spoke to Robin Hanson, he was extremely keen on having a lot of copies of himself created (though I think he was prepared for these copies to be emulant-wage-slaves).
I have created a poll for LW to air its views on this question. In my next post I'll outline and defend my answer, and lay out some fairly striking implications it has for existential risk mitigation.
For those on a hardcore-altruism trip, you may substitute any person or entity that you find more valuable than your own good self: would you sacrifice a day of this entity's life for an extra copy? A year? etc.
UPDATE: Wei Dai has asked this question before, in his post "The moral status of independent identical copies" - though his post focuses more on lock-step copies that are identical over time, whereas here I am interested in both lock-step identical copies and statistically identical copies (a statistically identical copy has the same probability distribution of futures as you do).
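The lock-step/statistical distinction can be sketched in code. In this toy model (mine, not anything from either post), two runs of the same program with the same random seed are lock-step identical, while runs with different seeds are only statistically identical: their futures are drawn from the same distribution but typically diverge.

```python
import random

def life_trajectory(seed, steps=5):
    """A toy 'future': a sequence of random binary life events."""
    rng = random.Random(seed)
    return [rng.choice(["left", "right"]) for _ in range(steps)]

# Lock-step identical copies: same dynamics AND the same randomness,
# so the two histories agree at every moment.
assert life_trajectory(seed=42) == life_trajectory(seed=42)

# Statistically identical copies: same dynamics, independent randomness.
# Futures are drawn from the same distribution but typically diverge.
print(life_trajectory(seed=1))
print(life_trajectory(seed=2))
```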
I'm not sure. The first really big thing that jumped out at me was the total-separateness issue. The details of how this is implemented would matter to me and would probably change my opinion in dramatic ways. I can imagine various ways to implement a copy (a physical copy in "another dimension", a physical copy "very far away", a copy with full environmental detail duplicated out to X kilometers and the rest simulated or changed, myself as an isolated Boltzmann brain, and so on). Some of them might be good, some might be bad, and some might require informed consent from a large number of people.
For example, I think it would be neat to put a copy of our solar system ~180 degrees around the galaxy so that we (and they) have someone interestingly familiar with whom to make contact thousands of years from now. That's potentially a kind of "non-interacting copy", but my preference for it grows from the interactions I expect to happen far away in time and space. Such copying basically amounts to "colonization of space" and seems like an enormously good thing from that perspective.
I think simulationist metaphysics grows out of intuitions from dreaming (where our brain probably literally implements something like a "this is a dream" content tag, so that we don't become confused by memories of our dreams), programming (where simulations happen in RAM that we can "miraculously edit", thereby copying and/or changing the course of the simulation), and mathematics (where we get a sense of data structures "Platonically existing" before we construct a definition, which our definitions then "find", so that we can explore the implications and properties of the hypothetical object).
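The programming intuition in particular is easy to make concrete. A toy sketch (the names and data structures here are purely illustrative): a simulation state living in RAM can be deep-copied and then "miraculously edited", after which the original and the fork evolve independently.

```python
import copy

# A toy simulation state held in RAM.
state = {"tick": 0, "agents": [{"name": "A", "pos": 0}]}

def step(world):
    """Advance the simulation one tick: every agent drifts right."""
    world["tick"] += 1
    for agent in world["agents"]:
        agent["pos"] += 1

# Copying: duplicate the whole world, nested structures included.
fork = copy.deepcopy(state)

# "Miraculous editing": intervene on the fork; the original is untouched.
fork["agents"][0]["pos"] = 100

step(state)
step(fork)
print(state["agents"][0]["pos"])  # 1   -- the unedited course
print(fork["agents"][0]["pos"])   # 101 -- the edited branch diverges
```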
It's very easy to get these inspirational sources confused, mix them together, talk about "making a copy", and then have the illusion of mutual understanding with someone else.
For example, I expect that all possible realities already "exist" in platospace. Tunnels between realities can be constructed, but anything that connects to our reality is likely (due to thermodynamic and information-theoretic concerns) to be directional. We can spend energy to embed other realities within our own as simulations. In theory, we might be embedded in larger contexts without ever being aware of the fact. Embedding something that embeds yourself is cute, but it is not computationally realistic, implying either directional "compression" or radically magical metaphysics.
Perhaps a context in which we are embedded might edit our universe's state and explore counterfactual simulations, but even if our simulators did that, an unedited version of our universe would still continue on within platospace, as would all possible edited continuations that our supposed simulators did not explore via simulation as embeddings within their own context.
But however much fun it is to think about angels on the head of a pin, all such speculation seems to me like an abuse of the predicate "existence". I might use the word "existence" when thinking of "Platonic existence", but it is a very different logical predicate from the one in use when I ponder "whether $100 exists in my purse".
Possible spoiler with rot13'ed amazon link:
Fbzrgvzrf V guvax znlor gurer fubhyq or n fhosbehz sbe crbcyr jub unir nyernql ernq Crezhgngvba Pvgl fb gung pbairefngvbaf pna nffhzr pregnva funerq ibpnohynel jvgubhg jbeelvat nobhg fcbvyref :-C
uggc://jjj.nznmba.pbz/Crezhgngvba-Pvgl-Tert-Rtna/qc/006105481K
No, that's not non-interacting, because, as you say later, you want to interact with it. I mean really strictly non-interacting: no information flow in either direction. Imagine it's over the cosmic horizon.