When Richard Feynman started investigating irrationality in the 1970s, he quickly realized the problem wasn't limited to the obvious irrationalists.
Uri Geller claimed he could bend keys with his mind. But was he really any different from the academics who insisted their special techniques could teach children to read? Both failed the crucial scientific test of skeptical experiment: Geller's keys failed to bend in Feynman's hands; outside tests showed the new techniques only caused reading scores to go down.
What mattered was not how smart the people were, or whether they wore lab coats or used long words, but whether they followed what he concluded was the crucial principle of truly scientific thought: "a kind of utter honesty--a kind of leaning over backwards" to prove yourself wrong. In a word: self-skepticism.
As Feynman wrote, "The first principle is that you must not fool yourself -- and you are the easiest person to fool." Our beliefs always seem correct to us -- after all, that's why they're our beliefs -- so we have to work extra hard to try to prove them wrong. This means constantly looking for ways to test them against reality and to think of reasons our tests might be insufficient.
When I think of the most rational people I know, it's this quality of theirs that's most pronounced. They are constantly trying to prove themselves wrong -- they attack their beliefs with everything they can find and when they run out of weapons they go out and search for more. The result is that by the time I come around, they not only acknowledge all my criticisms but propose several more I hadn't even thought of.
And when I think of the least rational people I know, what's striking is how they do the exact opposite: instead of viciously attacking their beliefs, they try desperately to defend them. They too have responses to all my critiques, but instead of acknowledging and agreeing, they viciously attack my critique so it never touches their precious belief.
Since these two can be hard to distinguish, it's best to look at some examples. The Cochrane Collaboration argues that support from hospital nurses may be helpful in getting people to quit smoking. How do they know that? you might ask. Well, that was the result of a meta-analysis of 31 different studies. But maybe they chose a biased selection of studies? Well, they systematically searched "MEDLINE, EMBASE and PsycINFO [along with] hand searching of specialist journals, conference proceedings, and reference lists of previous trials and overviews." But did the studies they picked suffer from selection bias? Well, they checked for that -- along with three other kinds of systematic bias. And yet, even after all this careful work, they are still only confident enough to conclude that "the results…support a modest but positive effect…with caution … these meta-analysis findings need to be interpreted carefully in light of the methodological limitations".
Compare this to the Heritage Foundation's argument for the bipartisan Wyden–Ryan premium support plan. Their report also discusses lots of objections to the proposal, but confidently knocks down each one: "this analysis relies on two highly implausible assumptions ... All these predictions were dead wrong. ... this perspective completely ignores the history of Medicare." Their conclusion is similarly confident: "The arguments used by opponents of premium support are weak and flawed." Apparently there's just not a single reason to be cautious about their enormous government policy proposal!
Now, of course, the Cochrane authors might be secretly quite confident, and the Heritage Foundation might be wringing their hands with self-skepticism behind the scenes. But let's imagine for a moment that these aren't just reports intended to persuade others of a belief, but accurate portrayals of how these two groups approached the question. Now ask: which style of thinking is more likely to lead the authors to the right answer? Which attitude seems more like Richard Feynman? Which seems more like Uri Geller?
Bullshit. You aren't providing an example because it is "hard to tell the difference at first". You started with an intent to associate SIAI with self-delusion and then tried to find a way to package it as some kind of rationality-related general point.
The FAQ on the website is not the place to signal humility and argue against your own conclusions. All that would demonstrate is naivety and incompetence. You are demanding something that should not exist. This isn't to say that there aren't valid criticisms to be made of SIAI and their FAQ. You just haven't made them.
Am I the only person who is outright nauseated by the quality of reasoning in these recent mud-slinging posts by aaronsw? What I see is a hastily selected bottom line along the lines of "SingInst sux" or perhaps "SingInst folks are too arrogant", followed by whatever hastily conceived rhetoric he can think of to support it. The problem isn't in the conclusions -- it is that the arguments used either don't support or outright undermine them.
Competent criticism is encouraged. But the mere fact that a post is intended to be critical or 'cynical' isn't sufficient; it needs to meet some minimum intellectual standard too. If this post did not represent an appeal to second-order contrarians and were evaluated on its actual content, it would probably end up mildly negative, even in the discussion section.
This is needlessly inflammatory, far too overconfident and, as it turned out, wrong. Making deductions about intent from their writing is not nearly as easy as you seem to think. Making wild accusations of nefarious attempts to insert subtext critical of you and your interests - indeed all our interests - ...