Comment author: Johnicholas
08 September 2009 08:38:59PM
8 points
The skeleton of the argument is:
1. Present a particular thought experiment, intended to provoke anthropic reasoning. There are two moderately plausible answers: "50%" and "a billion to one against".
2. Assume, for the sake of argument, that the answer to the thought experiment is 50%. Note that the "50%" answer corresponds to ignoring the color of the room ("not updating on it", in the Bayesian jargon).
3. The thought experiment is analogous to the Boltzmann-brain hypothesis. In particular, the color of the room corresponds to the ordered-ness of our experiences.
4. With the exception of the ordered-ness of our experiences, a stochastic-all-experience-generator would be consistent with all observations.
5. Occam's Razor: use the simplest possible hypothesis consistent with observations.
6. A stochastic-all-experience-generator would be a simple hypothesis.
7. From 3, 4, 5, and 6, predict that the universe is a stochastic all-experience generator.
8. From 7, derive some very unpleasant consequences.
9. From 8, reject the assumption made in 2.
I think the argument can be improved.
According to the minimum description length (MDL) notion of science, we have a model and a sequence of observations. A "better" model is one that is short and compresses the observations well. The stochastic-all-experience-generator is a short model, but it doesn't compress our observations. I think this is basically saying that, according to the MDL version of Occam's Razor, step 6 is false.
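A rough toy illustration of that point (my sketch, not from the original comment): under MDL, a hypothesis is scored by the length of the model plus the length of the observations compressed with the model's help. A structured stream, like ordered experiences, compresses well; the stream that a stochastic-all-experience-generator predicts is incompressible noise, so the short model buys nothing on the data term.

```python
import os
import zlib

# Stand-ins for the two kinds of observation stream (purely illustrative):
ordered = b"0123456789" * 1000   # highly ordered observations
noise = os.urandom(10_000)       # what a stochastic generator predicts: random bytes

# Use an off-the-shelf compressor as a crude proxy for description length.
len_ordered = len(zlib.compress(ordered))
len_noise = len(zlib.compress(noise))

# The ordered stream shrinks dramatically; the noise stays roughly full size,
# so the stochastic model's total description length is dominated by the data.
print(len_ordered, len_noise)
```

The compressor here is only a proxy for the idealized MDL code length, but the asymmetry it exposes is the substance of the objection: the generator's model term is short, while its data term is maximal.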
The article claims that the stochastic-all-experience-generator is a "simple" model of the world and would defeat more common-sense models of the world in an Occam's Razor-off in the absence of some sort of anthropic defense. That claim (step 6) might be true, but it needs more support.
Comment author: Leoeer
08 January 2013 06:03:14PM
0 points
Isn't the argument in 1 false? If one applies Bayes' theorem with an initial probability of 50% and a new likelihood ratio of a billion to one, don't you get odds of 500,000,000 to one?
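For what it's worth, here is the update worked through in the odds form of Bayes' theorem (a sketch under my own reading: the 50% taken as a prior probability and the billion-to-one as a likelihood ratio):

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
prior_prob = 0.5
prior_odds = prior_prob / (1 - prior_prob)   # a 50% prior is 1:1 odds
likelihood_ratio = 1e9                       # "a billion to one"

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(posterior_odds)   # 1000000000.0 -- a billion to one
print(posterior_prob)
```

On this reading the 1:1 prior odds pass the likelihood ratio through unchanged, so the posterior odds come out at a billion to one; a figure of 500,000,000 to one would come from multiplying the 0.5 probability directly by the ratio instead of converting to odds first.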