How do you figure?
They both fit the same basic evidence: burning a candle or similar object in a small enclosure made it go out. Similar remarks applied to small living animals and to combinations of candles and animals. Moreover, many forms of combustion visibly gave off something into the air. Indeed the theory "combustion occurs when something from the substance goes into the air" is simpler than "combustion occurs when something from the air combines with the substance, and sometimes but not always something else is added back into the air." It was only careful measurements of the mass of objects before and after reactions (weighing gases is really tough!), combined with the observation that some metals gained weight when burned, that really created a problem. A good Bayesian in 1680 who heard of both ideas arguably should favor phlogiston.
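To make that concrete, here is a toy sketch of the Bayesian comparison. All of the priors and likelihoods below are invented purely for illustration: a crude simplicity prior plus the pre-1680 observations leaves phlogiston ahead, and it is the later mass measurements that swing the odds the other way.

```python
# Toy illustration (all numbers are made up) of the Bayesian point above.

# Prior odds from a crude simplicity/Occam penalty (hypothetical values).
prior = {"phlogiston": 0.6, "oxygen": 0.4}

# P(observation | theory) for each theory (again, illustrative guesses).
likelihoods = {
    "candle_dies_in_enclosure": {"phlogiston": 0.9, "oxygen": 0.9},
    "animal_dies_in_enclosure": {"phlogiston": 0.9, "oxygen": 0.9},
    "smoke_given_off":          {"phlogiston": 0.9, "oxygen": 0.7},
    "metal_gains_mass":         {"phlogiston": 0.05, "oxygen": 0.9},
}

def posterior(observations):
    """Normalized posterior for each theory after the given observations."""
    post = dict(prior)
    for obs in observations:
        for theory in post:
            post[theory] *= likelihoods[obs][theory]
    total = sum(post.values())
    return {t: p / total for t, p in post.items()}

# Evidence available circa 1680: phlogiston comes out ahead.
print(posterior(["candle_dies_in_enclosure", "animal_dies_in_enclosure",
                 "smoke_given_off"]))

# Add the careful mass measurements: the odds flip toward oxygen.
print(posterior(["candle_dies_in_enclosure", "animal_dies_in_enclosure",
                 "smoke_given_off", "metal_gains_mass"]))
```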
It's not a post about how things usually go. It's a post about the minimum requirements for an intelligent agent to know something with near certainty.
It is possible that I'm reading too much into this, but it does seem that Eliezer is using Einstein's success as an actual example of his argument about how brains should work. But there's a problem: if brains are less-than-perfect Bayesians (and it seems that the minds possible in this part of the Tegmark ensemble fall into that category), they won't bring one hypothesis to the front; they will often have a fair number of hypotheses that fit the incomplete data. In some cases, like Einstein's, sheer mathematical simplicity (in his case, hitting on the simplest hypothesis that satisfied a large set of nice constraints, which is not to say that doing so is at all easy; far from it) will make one hypothesis look, under some framework, as though it requires less data than the others. But often the actual process is that the mind needs more data.
A mind, when investigating things, will likely not just keep getting more and more clever insights. Things take effort. Say you have a really smart, strongly Bayesian mind with control of the resources of a planet, but with minimal prior knowledge. It can likely figure some things out pretty quickly, like the orbits of the planets. But somewhere between that and trying to detect the fundamental particles of the universe, it will probably need to collect more data. The mind isn't going to have any way to detect that neutrinos have mass (even if it suspects that) until it sees evidence that they oscillate. Etc. I suspect that no mind could deduce the existence of quarks from the simple data our naive senses give us.
In physics, if they've truly narrowed it down like that, the conclusion is that they ought not to need more evidence, not that the social forces of science will deterministically overturn every confusion dividing a professional field.
So this seems like a more valid point: there are problems of human cognitive biases that go in the other direction (namely, building theories that overfit our data, and our general tendency to be overconfident in our beliefs), but an actual good Bayesian should not need to specially test a hypothesis once the pre-existing evidence has singled it out as extremely likely. The feeling that we need to do so is an artifact of having to deal with the problems of human cognition and social biases.
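One toy way to see this, borrowing the bits-of-evidence framing from How Much Evidence Does It Take? (the hypothesis-space size and likelihood ratios below are invented for illustration): once the accumulated evidence carries more bits than are needed to single one hypothesis out of the space, a further dedicated test is optional rather than required.

```python
# Sketch under assumed, made-up numbers: compare bits needed vs. bits already in hand.
import math

hypothesis_space_size = 2 ** 30          # hypothetical number of rival theories
bits_needed = math.log2(hypothesis_space_size)

# Likelihood ratios (hypothesis : alternatives) from already-existing observations.
likelihood_ratios = [1000, 500, 200, 50]  # made-up figures
bits_accumulated = sum(math.log2(r) for r in likelihood_ratios)

print(f"bits needed:      {bits_needed:.1f}")
print(f"bits accumulated: {bits_accumulated:.1f}")
print("further testing optional" if bits_accumulated > bits_needed
      else "more evidence required")
```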
If that's what Eliezer meant, I don't think he said it very well.
Indeed the theory "combustion occurs when something from the substance goes into the air" is simpler...
It seems like a simpler theory. It is a shorter sentence.
The mind isn't going to have any way to detect that neutrinos have mass (even if it suspects that) until it sees evidence that they oscillate. Etc.
Sure, knowledge increases far more than arithmetically with additions of either smarts or data.
...but an actual good Bayesian should not need to specially test a hypothesis once the pre-existing evidence has singled it out as extremely likely.
Today's post, Einstein's Arrogance, was originally published on 25 September 2007. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was How Much Evidence Does It Take?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.