I can't seem to get my head around a simple issue of judging probability. Perhaps someone here can point to an obvious flaw in my thinking.
Let's say we have a binary generator: a machine that outputs, on request, a sequence of ones and zeros according to some internally encapsulated rule (deterministic or probabilistic). All binary generators look alike from the outside, so you can only infer (a probability distribution over) the rule by looking at the output.
You have two binary generators: A and B. One of them is a true random generator (a fair coin tosser). The other is a biased random generator: stateless (each digit is generated independently of the digits before it), with a probability of outputting zero, p(0), somewhere between zero and one but NOT 0.5; say it's uniformly distributed on [0, 0.5) ∪ (0.5, 1]. At this point, the chance that A is the true random generator is 50%.
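(For concreteness, here's a minimal Python sketch of how I picture the two machines; the function names are purely illustrative.)

```python
import random

def true_generator(n):
    # Fair coin: each digit is 0 or 1 with probability 0.5, independently.
    return "".join(random.choice("01") for _ in range(n))

def biased_generator(n):
    # Bias p drawn uniformly from [0, 1]; hitting exactly 0.5 has
    # probability zero, so this matches the [0, .5) U (.5, 1] prior.
    p = random.random()
    return "".join("0" if random.random() < p else "1" for _ in range(n))
```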
Now you read the first ten digits output by each machine. Machine A outputs 0000000000. Machine B outputs 0010111101. Knowing this, is the probability that machine A is the true random generator now less than 50%?
My intuition says yes.
But the probability that a true random generator outputs 0000000000 is the same as the probability that it outputs 0010111101, because all sequences of equal length are equally likely. And the biased random generator, it seems to me, should likewise be just as likely to output 0000000000 as 0010111101.
So there seems to be no reason to think that a machine outputting a sequence of zeros of any length is more likely to be a biased stateless random generator than a true random generator.
I know that you can never be certain that a generator is truly random. But surely you can statistically distinguish between random and non-random generators?
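To make my question concrete, here's a small sketch of the update I'd expect a Bayesian to make, assuming I've translated the setup into likelihoods correctly (integrating the biased machine's likelihood over the uniform prior on p gives a Beta-function integral; p = 0.5 has measure zero, so excluding it changes nothing):

```python
from math import factorial

def lik_fair(seq):
    # Under a fair coin, every length-n sequence has probability (1/2)^n.
    return 0.5 ** len(seq)

def lik_biased(seq):
    # Bias p is uniform on [0, 1]; integrating p^z * (1-p)^(n-z) dp over
    # [0, 1] gives the Beta function B(z+1, n-z+1) = z! * (n-z)! / (n+1)!.
    n, z = len(seq), seq.count("0")
    return factorial(z) * factorial(n - z) / factorial(n + 1)

a, b = "0000000000", "0010111101"
# Exactly one machine is fair, with a 50/50 prior on which one it is.
w_a_fair = lik_fair(a) * lik_biased(b)   # A fair, B biased
w_b_fair = lik_biased(a) * lik_fair(b)   # A biased, B fair
print(w_a_fair / (w_a_fair + w_b_fair))  # ~0.0047, i.e. 1/211
```

If this is right, the posterior that A is the fair generator drops to well under 50%, which matches my intuition but seems to contradict the "all sequences are equally likely" argument above.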
I'm not suggesting that this is a scientific experiment that should be conducted, nor that you should believe in this form of MWI. I was merely responding to your claim that wedrifid's position is untestable.
Also, note that a proposition does not have to meet scientific standards of interpersonal testability in order to be testable. If I conducted a sequence of experiments that could kill me with high probability and remained alive, I would become pretty convinced that some form of MWI is right, but I would not expect my survival to convince you of this. After all, most other people in our branch who conducted this experiment would be dead. From your perspective, my survival could be an entirely expected fluke.
I'm fairly sure EY believes that humanity will survive in some branch with non-zero amplitude. I don't see why it follows that one should not bother with existential risks. Presumably Eliezer wants to maximize the wave-function mass associated with humanity surviving.
Probably, but I'm having trouble thinking of this experiment as scientifically useful if you cannot convince anyone else of your findings. Maybe there is a way to gather statistics from so-called "miracle survival stories" and see if there is an excess that can be attributed to MWI, but I doubt that there is such ...