In the future, it may be possible for you to scan your own brain and create copies of yourself. With the power of a controllable superintelligent AI, it may even be possible to create very accurate instances of your past self (and you could take action today or in the near future to make this easier by using lifelogging tools such as these glasses).
So I ask Less Wrong: how valuable do you think creating extra identical, non-interacting copies of yourself is? (each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction)
For example, would you endure a day's hard labor to create an extra self-copy? A month? A year? Consider the hard labor to be digging a trench with a pickaxe, with a harsh taskmaster who can punish you if you slack off.
Do you think having 10 copies of yourself made in the future is 10 times as good as having 1 copy made? Or does your utility increase sub-linearly in the number of copies?
Last time I spoke to Robin Hanson, he was extremely keen on having a lot of copies of himself created (though I think he was prepared for these copies to be emulant-wage-slaves).
I have created a poll for LW to air its views on this question; in my next post I'll outline and defend my answer, and lay out some fairly striking implications that this has for existential risk mitigation.
For those on a hardcore-altruism trip, you may substitute any person or entity that you find more valuable than your own good self: would you sacrifice a day of this entity's life for an extra copy? A year? etc.
UPDATE: Wei Dai has asked this question before, in his post "The moral status of independent identical copies" - though his post focuses more on lock-step copies that are identical over time, whereas here I am interested in both lock-step identical copies and statistically identical copies (a statistically identical copy has the same probability distribution of futures as you do).
What about using compressibility as a way of determining the value of the set of copies?
In computer science, there is a concept known as deduplication (http://en.wikipedia.org/wiki/Data_deduplication) which is related to determining the value of copies of data. Normally, if you have 100MB of incompressible data (e.g. an image or an upload of a human), it will take up 100MB on a disk. If you make a copy of that file, a standard computer system will require a total of 200MB to track both files on disk. A smart system that uses deduplication will see that they are the same file and discard the redundant data, so that only 100MB is actually required. However, this is done transparently, so the user will see two files and think that there is 200MB of data. This can be done with N copies: the user will think there is N*100MB of data, but the file system is smart enough to only use up 100MB of disk space as long as no one modifies the files.
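Here is a minimal sketch (in Python, with a made-up DedupStore class, not any real file system's API) of how content-addressed deduplication gives exactly this behaviour: identical files collapse to a single stored blob, so the logical size the user sees grows with N while the physical size does not.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical files share a single blob."""

    def __init__(self):
        self.blobs = {}   # sha256 digest -> bytes (each unique blob stored once)
        self.files = {}   # file name -> digest

    def write(self, name, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)   # stores the bytes only the first time
        self.files[name] = digest

    def logical_size(self):
        """What the user sees: every file counted at full size."""
        return sum(len(self.blobs[d]) for d in self.files.values())

    def physical_size(self):
        """What the disk actually holds: each unique blob counted once."""
        return sum(len(b) for b in self.blobs.values())

store = DedupStore()
upload = bytes(100_000_000)             # stand-in for a 100MB upload
for i in range(10):
    store.write("copy_%d" % i, upload)  # 10 identical copies

print(store.logical_size())   # 1,000,000,000 bytes, as seen by the user
print(store.physical_size())  # 100,000,000 bytes actually used on disk
```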
For the case of an upload, you have N copies of a human of X MB each, which will only require X MB on the disk even though the end user sees N*X MB of data being processed. As long as the copies never diverge, running N copies of an upload should never take up more than X MB of space (though they will take up more time, since each process is still being run).
In the case where the copies /do/ diverge, you can use COW optimization (http://en.wikipedia.org/wiki/Copy_on_write) to determine the amount of resources used. In the first example, if you change the first 1MB of one of the two 100MB files but leave the rest untouched, a smart computer will only use 101MB of disk space: 99MB for the shared data, 1MB for the first file's unique data, and 1MB for the second file's unique data. So in this case, the resources for the two copies are about 1% more than the resources used for the single copy.
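A sketch of the same accounting in code (again with made-up classes rather than a real COW implementation): a clone shares all of its parent's blocks, and only the blocks that are actually written acquire private storage.

```python
class CowFile:
    """Toy copy-on-write file: 1MB blocks are shared until written."""

    def __init__(self, block_ids):
        self.block_ids = list(block_ids)  # which physical blocks this file points at

    def clone(self):
        # A clone costs (almost) nothing: it points at the same physical blocks.
        return CowFile(self.block_ids)

    def write_block(self, index, new_block_id):
        # Only a written block gets its own private physical storage.
        self.block_ids[index] = new_block_id


def physical_mb(files):
    """Unique blocks across all files = actual disk usage in MB."""
    return len({b for f in files for b in f.block_ids})


original = CowFile(range(100))         # a 100MB file: blocks 0..99
copy = original.clone()                # still 100 physical blocks in total
copy.write_block(0, "copy-private-0")  # the first 1MB of the copy diverges

print(physical_mb([original, copy]))   # 101 -> 101MB, matching the example above
```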
From a purely theoretical perspective, deduplication and COW will give you an efficiency equivalent to what you would get if you tried to compress an upload or a bunch of uploads (in practice it depends on the type of data). So the value of N copies is equal to the Shannon entropy (alternatively, you could probably use the Kolmogorov complexity) of the data that is the same in both copies, plus the unique data in each copy. I figure that any supercomputer designed to run multiple copies of an upload would use these types of compression by default, since all modern high-end file storage systems use dedup and COW to save on costs.
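As a rough, compressor-dependent illustration of that "shared information plus unique information" accounting (using lzma as a crude stand-in for an ideal compressor; the sizes here are made up):

```python
import lzma
import os

def compressed_size(data):
    # A real compressor as a crude proxy for Kolmogorov complexity.
    return len(lzma.compress(data))

# Two "uploads": a shared 1MB core plus a little divergent private state each.
shared = os.urandom(1_000_000)
copy_a = shared + os.urandom(10_000)
copy_b = shared + os.urandom(10_000)

size_one  = compressed_size(copy_a)
size_both = compressed_size(copy_a + copy_b)

# Fully independent copies would give size_both close to 2 * size_one. Here
# lzma's large dictionary matches the second copy's shared core against the
# first, so size_both is only slightly larger than size_one: roughly the
# shared data plus each copy's unique data, as with dedup and COW above.
print(size_one, size_both, round(size_both / size_one, 2))
```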
Note that this calculation of value is different from the case where you make a backup of yourself to guard against disaster. For a backup, you would normally run the second copy in an environment isolated from the first, which makes deduplication impossible. E.g. you would have one upload running in California and another running in Australia; that way, if the computer in California falls into the ocean, you still have a working copy in Australia. In this case, the value of the two copies is greater than the value of just one copy, because the second copy adds a measure of redundancy even though it adds no new information.
P.S. While we're on the topic, this is a good time to back up your own computer if you haven't done so recently. If your hard drive crashes, you will fully comprehend the value of a copy :)
Note that Wei Dai also had this idea.