In the future, it may be possible for you to scan your own brain and create copies of yourself. With the power of a controllable superintelligent AI, it may even be possible to create very accurate instances of your past self (and you could take action today or in the near future to make this easier by using lifelogging tools such as these glasses).
So I ask Less Wrong: how valuable do you think creating extra identical, non-interacting copies of yourself is? (each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction)
For example, would you endure a day's hard labor to create an extra self-copy? A month? A year? Consider the hard labor to be digging a trench with a pickaxe, with a harsh taskmaster who can punish you if you slack off.
Do you think having 10 copies of yourself made in the future is 10 times as good as having 1 copy made? Or does your utility grow sub-linearly in the number of copies?
Last time I spoke to Robin Hanson, he was extremely keen on having a lot of copies of himself created (though I think he was prepared for these copies to be emulant-wage-slaves).
I have created a poll for LW to air its views on this question; in my next post I'll outline and defend my answer, and lay out some fairly striking implications that this has for existential risk mitigation.
For those on a hardcore-altruism trip, you may substitute any person or entity that you find more valuable than your own good self: would you sacrifice a day of this entity's life for an extra copy? A year? etc.
UPDATE: Wei Dai has asked this question before, in his post "The moral status of independent identical copies" - though his post focuses more on lock-step copies that are identical over time, whereas here I am interested in both lock-step identical copies and statistically identical copies (a statistically identical copy has the same probability distribution of futures as you do).
You don't grok UDT control. You can control the behavior of fixed programs, programs that completely determine their own behavior.
Take a "universal log program", for example: it enumerates all programs, for each program enumerates all computational steps, on all inputs, and writes all that down on an output tape. This program is very simple, you can easily give a formal specification for it. It doesn't take any inputs, it just computes the output tape. And yet, the output of this program is controlled by what the mathematician ate for breakfast, because the structure of that decision is described by one of the programs logged by the universal log program.
Take another look at the UDT post, keeping in mind that the world-programs completely determine what the world is, they don't take the agent as a parameter, and world-histories are alternative behaviors for those fixed programs.
OK, so you're saying that A, a human in 'the real world', acausally (or ambiently if you prefer) controls part of the output tape of this program P that simulates all other programs.
I think I understand what you mean by this: Even though the real world and this program P are causally disconnected, the 'output log' of each depends on the 'Platonic' result of a common computation - in this case the computation where A's brain selects a choice of breakfast. Or in other words, some of the uncertainty we have about both the real world and P derives from the log...