Which part(s) don't you understand?
It is possible that we are using different unstated assumptions. Do you agree with these assumptions:
1) An uploaded copy running in a simulation is Turing-complete (as JoshuaZ points out, the copy should also be Turing-equivalent). Because of this, the state t_{n+1} of a given simulation can be determined by the value of t_n and the value of the input D_n at that state. (The sequence D is not random, so I can always calculate the value of D_n; in the easiest case D_n = 0 for all values of n.) Similarly, if I have multiple copies of the simulation at the same state t_n and all of them have the same input D_n, they should all have the same value for t_{n+1}. In the top-level post, having multiple identical copies means that they all start at the same state t_0 and are passed the same inputs D_0, D_1, etc. as they run, in order to force them to remain identical. Because no new information is gained as we run the simulation, the entropy (and thus the value) remains the same no matter how many copies are being run.
2) For examples (a) and (b), you are talking about replacing the input sequence D with a random number generator R. The value of t_1 depends on t_0 and the output of R. Since R is no longer predictable, information is being added at each stage. This means the entropy of this new simulation depends on the entropy of R (see the sketch after this list).
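Here is a minimal sketch of how I am picturing these two assumptions, using a toy deterministic transition function. The names `step` and `run`, the hash-based state update, and the input sequences are purely illustrative, not anything from the original discussion:

```python
# Toy model of assumptions 1 and 2 (all names and the state encoding are hypothetical).
import random


def step(state, d):
    """Deterministic transition: the next state depends only on the
    current state and the current input d (assumption 1)."""
    return hash((state, d))


def run(initial_state, inputs):
    """Run one simulated copy over a sequence of inputs and return its final state."""
    state = initial_state
    for d in inputs:
        state = step(state, d)
    return state


t0 = 42           # shared initial state t_0
D = [0] * 10      # predictable input sequence D_n (easiest case: all zeros)

# Assumption 1: identical copies fed identical inputs remain identical,
# so running extra copies adds no new information.
copies = [run(t0, D) for _ in range(5)]
assert len(set(copies)) == 1

# Assumption 2: replacing D with an independent random source R makes each
# copy's trajectory depend on R's output, so the copies generally diverge
# and the entropy of the ensemble depends on the entropy of R.
rng_copies = [run(t0, [random.getrandbits(8) for _ in range(10)])
              for _ in range(5)]
print(copies[0], rng_copies)
```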
1) An uploaded copy running in a simulation is Turing-complete
That is not what Turing complete means. Roughly speaking, something is Turing complete if it can simulate any valid Turing machine. What you are talking about is simply that the state change in question is determined by input data and state. This says nothing about the Turing completeness of the class of simulations, or even whether the class of simulations can be simulated on Turing machines. For example, if the physical laws of the universe actually require real numbers then you might need a Blum-Shub-Smale machine to model the simulation.
In the future, it may be possible for you to scan your own brain and create copies of yourself. With the power of a controllable superintelligent AI, it may even be possible to create very accurate instances of your past self (and you could take action today or in the near future to make this easier by using lifelogging tools such as these glasses).
So I ask Less Wrong: how valuable do you think creating extra identical, non-interacting copies of yourself is? (each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction)
For example, would you endure a day's hard labor to create an extra self-copy? A month? A year? Consider the hard labor to be digging a trench with a pickaxe, with a harsh taskmaster who can punish you if you slack off.
Do you think having 10 copies of yourself made in the future is 10 times as good as having 1 copy made? Or does your utility in copies drop off sub-linearly?
Last time I spoke to Robin Hanson, he was extremely keen on having a lot of copies of himself created (though I think he was prepared for these copies to be emulant-wage-slaves).
I have created a poll for LW to air its views on this question; in my next post I'll outline and defend my answer, and lay out some fairly striking implications that this has for existential risk mitigation.
For those on a hardcore-altruism trip, you may substitute any person or entity that you find more valuable than your own good self: would you sacrifice a day of this entity's life for an extra copy? A year? etc.
UPDATE: Wei Dai has asked this question before, in his post "The moral status of independent identical copies" - though his post focuses more on lock-step copies that are identical over time, whereas here I am interested in both lock-step identical copies and statistically identical copies (a statistically identical copy has the same probability distribution of futures as you do).