Response to Man-with-a-hammer syndrome.
It's been claimed that there is no way to spot Affective Death Spirals, or cultish obsession with the One Big Idea of Everything. I'd like to posit a simple way to spot such errors, with the caveat that it may not work for every case.
There's an old game called Two Truths and a Lie. I'd bet almost everyone's heard of it, but I'll summarize it just in case. A person makes three statements, and the other players must guess which of those statements is false. The statement-maker gets points for fooling people; the players get points for not being fooled. That's it. I'd like to propose a rationalist's version of this game that should serve as a nifty check on certain Affective Death Spirals, runaway Theories of Everything, and Perfectly General Explanations. It's almost as simple.
Say you have a theory about human behaviour. Get a friend to do a little research and assert three factual claims about how people behave that your theory would realistically apply to. At least one of these claims must be false. See if you can explain every claim using your theory before learning which one's false.
If you can come up with a convincing explanation for all three statements, be very cautious when using your One Theory. If it can explain falsehoods, there's a very high risk you'll use it to justify whatever prior beliefs you have. Even worse, you may use it to infer facts about the world, even though it is clearly not consistent enough to do so reliably. Apply your One Theory with the utmost caution, if you don't abandon reliance on it altogether. If, on the other hand, you can't come up with a convincing way to explain some of the statements, and those turn out to be the false ones, then there's at least a chance you're on to something.
Come to think of it, this is an excellent challenge to any proponent of a Big Idea. Give them three claims, at least one of which is false, and see if their Idea can discriminate. Just remember to be ruthless when they get it wrong; it doesn't prove their idea is totally wrong, only that reliance upon it would be.
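For concreteness, here's a minimal sketch of how a round of this test might be run. The claims, the theory, and the judgment of whether an explanation is "convincing" all still come from people; the script only enforces the order of play (explain everything first, reveal the lie afterwards). The example claims are placeholders, not real research.

```python
# Minimal sketch of the rationalist's Two Truths and a Lie.
# The claims below are placeholders; in a real run a friend researches
# them and you do NOT know in advance which one is false.

claims = [
    ("Claim A about human behaviour", True),
    ("Claim B about human behaviour", True),
    ("Claim C about human behaviour", False),  # the lie
]

# Step 1: explain every claim with your One Theory, before any reveal.
explanations = {}
for text, _ in claims:
    explanations[text] = input(f"Explain using your theory: {text}\n> ")

# Step 2: only now reveal which claims were false.
print("\n--- Reveal ---")
for text, is_true in claims:
    print(f"{'TRUE ' if is_true else 'FALSE'}  {text}")
    print(f"       your explanation: {explanations[text]}")

# If the theory 'explained' the false claim as fluently as the true ones,
# treat its explanations as very weak evidence when using it to infer
# new facts about the world.
```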
Edited to clarify: My argument is not that one should simply abandon a theory altogether. In some cases that may be justified: if all the theory has going for it is its predictive power, and you show it lacks that, toss it. But in the case of broad, complex theories that genuinely can explain many divergent outcomes, this exercise should teach you not to rely on the theory as a means of inference. Yes, you should believe in evolution. No, you shouldn't make broad inferences about human behaviour, without any data, merely because they are consistent with evolution -- unless your application of the theory of evolution is so precise and well-informed that you can consistently pass the Two-Truths-and-a-Lie Test.
First of all, I'm not "imagining a better eye"; by "fantastic eye" I mean the eye that natural selection spent 10,000 bits of optimization to create. Natural selection spent 10,000 bits for 10 units of eye goodness, then left 1/3 of us with a 5-bit optimization shortage that reduces our eye goodness by 3 units.
So I'm saying, if natural selection thought a unit of eye goodness was worth 1,000 bits, up to 10 units, why in modern humans doesn't it purchase 3 whole units for only 5 bits -- the same 3 units it previously purchased for 3,333 bits?
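A back-of-the-envelope version of that arithmetic, for anyone following along; the figures are the ones quoted in the thread, and treating the bits-to-goodness exchange rate as linear is my own simplifying assumption.

```python
# Rough arithmetic behind the objection above. Figures are from the thread;
# a linear bits-to-goodness exchange rate is an assumption for illustration.

total_bits = 10_000      # bits of optimization spent on the eye
total_goodness = 10      # resulting units of "eye goodness"
bits_per_unit_then = total_bits / total_goodness   # 1,000 bits per unit
cost_of_3_units_then = 3 * bits_per_unit_then      # 3,000 bits
# (the ~3,333 quoted above is 10,000/3 -- same order of magnitude)

shortage_bits = 5        # optimization shortage in ~1/3 of modern humans
lost_goodness = 3        # units of eye goodness lost to that shortage
bits_per_unit_now = shortage_bits / lost_goodness  # ~1.7 bits per unit

print(f"historical cost: ~{bits_per_unit_then:.0f} bits per unit")
print(f"apparent current cost: ~{bits_per_unit_now:.2f} bits per unit")
# The puzzle: eye goodness that once cost ~1,000 bits per unit would now
# cost under 2 bits per unit, yet natural selection isn't buying it back.
```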
I am aware of your general point that natural selection doesn't always evolve things toward cool engineering accomplishments, but your just-so story about potential advantages of nearsightedness doesn't reduce my surprise.
Your strength as a rationalist is to be more confused by fiction than by reality. Making up a story to explain the facts in retrospect is not a reliable algorithm for guessing the causal structure of eye-goodness and its consequences. So don't raise the probability you assign to observing the data as if your story were evidence for it -- stay confused.
Perhaps, in the current environment, those 3 units aren't worth 5 bits, even though at one point they were worth 3,333 bits. (Evolution is thoroughly indifferent to sunk costs.)
This suggestion doesn't preclude other hypotheses; in fact, I'm not even intending to suggest that it's a particularly likely scenario -- hence my us...