sigmaxipi comments on Poll: What value extra copies? - Less Wrong
In your new example, (b) is unrelated to the original question. For (b), a simulation of multiple diverging copies is required in order to create this set of all future yous. In the original example, however, the copies never diverge.
The entropy of (a) would be the information required to specify you at state t0, plus the entropy of the random distribution of inputs used to generate the set of all possible t1s. In the original example, the simulations of the copies are closed (otherwise you couldn't keep them identical), so the information contained in the single possible t1 cannot be any higher than the information in t0.
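Spelled out a little more (my notation, not from the original comment: f is the simulation's deterministic step function, R the random input source):

\[
H(\text{all possible } t_1) \;\le\; H(t_0) + H(R),
\]

with equality when distinct inputs yield distinct successor states, while in the closed case t_1 = f(t_0) and

\[
H(t_1) \le H(t_0),
\]

since applying a deterministic function can never increase entropy.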
Which part(s) don't you understand?
It is possible that we are using different unstated assumptions. Do you agree with these assumptions:
1) An uploaded copy running in a simulation is Turing-complete (as JoshuaZ points out, the copy should also be Turing-equivalent). Because of this, state tn+1 of a given simulation can be determined by the value of tn and the value of the input Dn at that step. (The sequence D is not random, so I can always calculate the value of Dn. In the easiest case Dn=0 for all values of n.) Similarly, if I have multiple copies of the simulation at the same state tn and all of them have the same input Dn, they should all have the same value for tn+1. In the top-level post, having multiple identical copies means that they all start at the same state t0 and are passed the same inputs D0, D1, etc. as they run, in order to force them to remain identical. Because no new information is gained as we run the simulation, the entropy (and thus the value) remains the same no matter how many copies are being run. (A sketch illustrating both points follows after point 2.)
2) For examples (a) and (b), you are talking about replacing the input sequence D with a random number generator R. The value of t1 depends on t0 and the output of R. Since R is no longer predictable, information is being added at each step. This means the entropy of this new simulation depends on the entropy of R.
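A minimal runnable sketch of both assumptions (the toy step function, the concrete input sequence, and the 8-bit random source are my illustrative choices, not anything from the post):

```python
import random

# Toy stand-in for one tick of an uploaded simulation:
# t_{n+1} is a pure function of t_n and the input at that step.
def step(state: int, d: int) -> int:
    return (state * 31 + d + 1) % (2 ** 32)

t0 = 12345  # shared initial state

# Point 1: identical copies fed the same fixed input sequence D stay identical.
D = [0, 7, 42, 0, 3]
copy_a = copy_b = t0
for d in D:
    copy_a, copy_b = step(copy_a, d), step(copy_b, d)
    assert copy_a == copy_b  # the copies never diverge; no new information appears

# Point 2: replace D with a random source R and the copies diverge,
# because each tick injects fresh bits (here up to 8 bits per tick,
# so after n ticks the entropy is bounded by H(t0) + 8n).
rng_a, rng_b = random.Random(1), random.Random(2)
copy_a = copy_b = t0
for _ in range(5):
    copy_a = step(copy_a, rng_a.getrandbits(8))
    copy_b = step(copy_b, rng_b.getrandbits(8))
print(copy_a == copy_b)  # almost surely False once random input is involved
```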
That is not what Turing complete means. Roughly speaking, something is Turing complete if it can simulate any valid Turing machine. What you are talking about is simply that the state change in question is determined by input data and state. This says nothing about the Turing completeness of the class of simulations, or even about whether the class of simulations can be simulated on Turing machines. For example, if the physical laws of the universe actually require real numbers, then you might need a Blum-Shub-Smale machine to model the simulation.
Oops, I should have said Turing-equivalent. I tend to treat the two concepts as the same because they are the same from a practical perspective. I've updated the post.
I agree with the first part. In the second part, where is the randomness in the information? The set of all N-bit integers is completely predictable for a given N.
For the set of all possible inputs (and thus all possible continuations), yes.
When you say "just the person", do you mean just the person at T_n or a specific continuation of the person at T_n? I would say H(T_n) < H(all possible T_{n+1}) < H(specific T_{n+1}).
I agree with the second part.
"More can be said of one apple than of all the apples in the world". (I can't find the quote I'm paraphrasing...)
Escape the underscores to block their markup effect: to get A_i, type "A\_i".