Against Devil's Advocacy
From an article by Michael Ruse:
Richard Dawkins once called me a "creep." He did so very publicly but meant no personal offense, and I took none: We were, and still are, friends. The cause of his ire—his anguish, even—was that, in the course of a public discussion, I was defending a position I did not truly hold. We philosophers are always doing this; it's a version of the reductio ad absurdum argument. We do so partly to stimulate debate (especially in the classroom), partly to see how far a position can be pushed before it collapses (and why the collapse), and partly (let us be frank) out of sheer bloody-mindedness, because we like to rile the opposition.
Dawkins, however, has the moral purity—some would say the moral rigidity—of the evangelical Christian or the committed feminist. Not even for the sake of argument can he endorse something that he thinks false. To do so is not just mistaken, he feels; in some deep sense, it is wrong. Life is serious, and there are evils to be fought. There must be no compromise or equivocation, even for pedagogical reasons. As the Quakers say, "Let your yea be yea, and your nay, nay."
Michael Ruse doesn't get it.
Science Doesn't Trust Your Rationality
Followup to: The Dilemma: Science or Bayes?
Scott Aaronson suggests that Many-Worlds and libertarianism are similar in that they are both cases of bullet-swallowing, rather than bullet-dodging:
Libertarianism and MWI are both grand philosophical theories that start from premises that almost all educated people accept (quantum mechanics in the one case, Econ 101 in the other), and claim to reach conclusions that most educated people reject, or are at least puzzled by (the existence of parallel universes / the desirability of eliminating fire departments).
Now there's an analogy that would never have occurred to me.
I've previously argued that Science rejects Many-Worlds but Bayes accepts it. (Here, "Science" is capitalized because we are talking about the idealized form of Science, not just the actual social process of science.)
It furthermore seems to me that there is a deep analogy between (small-'l') libertarianism and Science:
- Both are based on a pragmatic distrust of reasonable-sounding arguments.
- Both try to build systems that are more trustworthy than the people in them.
- Both accept that people are flawed, and try to harness their flaws to power the system.
Trust in Math
Followup to: Expecting Beauty
I was once reading a Robert Heinlein story - sadly I neglected to note down which story, but I do think it was a Heinlein - where one of the characters says something like, "Logic is a fine thing, but I have seen a perfectly logical proof that 2 = 1." Authors are not to be confused with characters, but the line is voiced by one of Heinlein's trustworthy father figures. I find myself worried that Heinlein may have meant it.
The classic proof that 2 = 1 runs thus. First, let x = y = 1. Then:
- x = y
- x² = xy
- x² - y² = xy - y²
- (x + y)(x - y) = y(x - y)
- x + y = y
- 2 = 1
Now, you could look at that, and shrug, and say, "Well, logic doesn't always work."
Or, if you felt that math had rightfully earned just a bit more credibility than that, over the last thirty thousand years, then you might suspect the flaw lay in your use of math, rather than Math Itself.
You might suspect that the proof was not, in fact, "perfectly logical".
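To make that suspicion concrete, here is a minimal sketch (in Python; the variable names simply mirror the proof above). With x = y, the factor (x - y) cancelled between the fourth and fifth lines is zero, so the "cancellation" is really a division by zero.

```python
# Minimal sketch of the suspect step, with x = y = 1 as the proof stipulates.
x = y = 1

lhs = (x + y) * (x - y)   # the expanded form of x^2 - y^2
rhs = y * (x - y)         # the expanded form of xy - y^2
assert lhs == rhs         # holds, but only because both sides equal 0

try:
    # "Cancelling" (x - y) from both sides means dividing both sides by it.
    print(lhs / (x - y), rhs / (x - y))
except ZeroDivisionError:
    print("(x - y) is zero here, so the cancellation step is invalid.")
```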
The novice goes astray and says: "The Art failed me."
The master goes astray and says: "I failed my Art."
Affective Death Spirals
Followup to: The Affect Heuristic, The Halo Effect
Many, many, many are the flaws in human reasoning which lead us to overestimate how well our beloved theory explains the facts. The phlogiston theory of chemistry could explain just about anything, so long as it didn't have to predict it in advance. And the more phenomena you use your favored theory to explain, the truer your favored theory seems—has it not been confirmed by these many observations? As the theory seems truer, you will be more likely to question evidence that conflicts with it. As the favored theory seems more general, you will seek to use it in more explanations.
If you know anyone who believes that Belgium secretly controls the US banking system, or that they can use an invisible blue spirit force to detect available parking spaces, that's probably how they got started.
(Just keep an eye out, and you'll observe much that seems to confirm this theory...)
This positive feedback cycle of credulity and confirmation is indeed fearsome, and responsible for much error, both in science and in everyday life.
But it's nothing compared to the death spiral that begins with a charge of positive affect—a thought that feels really good.
A new political system that can save the world. A great leader, strong and noble and wise. An amazing tonic that can cure upset stomachs and cancer.
Heck, why not go for all three? A great cause needs a great leader. A great leader should be able to brew up a magical tonic or two.
We Change Our Minds Less Often Than We Think
"Over the past few years, we have discreetly approached colleagues faced with a choice between job offers, and asked them to estimate the probability that they will choose one job over another. The average confidence in the predicted choice was a modest 66%, but only 1 of the 24 respondents chose the option to which he or she initially assigned a lower probability, yielding an overall accuracy rate of 96%."
—Dale Griffin and Amos Tversky, "The Weighing of Evidence and the Determinants of Confidence." (Cognitive Psychology, 24, pp. 411-435.)
When I first read the words above—on August 1st, 2003, at around 3 o'clock in the afternoon—it changed the way I thought. I realized that once I could guess what my answer would be—once I could assign a higher probability to deciding one way than the other—then I had, in all probability, already decided. We change our minds less often than we think. And most of the time we become able to guess what our answer will be within half a second of hearing the question.
How swiftly that unnoticed moment passes, when we can't yet guess what our answer will be; the tiny window of opportunity for intelligence to act. In questions of choice, as in questions of fact.
Einstein's Arrogance
Prerequisite: How Much Evidence Does It Take?
In 1919, Sir Arthur Eddington led expeditions to Brazil and to the island of Príncipe, aiming to observe the solar eclipse of May 29 and thereby test an experimental prediction of Einstein's novel theory of General Relativity. A journalist asked Einstein what he would do if Eddington's observations failed to match his theory. Einstein famously replied: "Then I would feel sorry for the good Lord. The theory is correct."
It seems like a rather foolhardy statement, defying the trope of Traditional Rationality that experiment above all is sovereign. Einstein seems possessed of an arrogance so great that he would refuse to bend his neck and submit to Nature's answer, as scientists must do. Who can know that the theory is correct, in advance of experimental test?
Of course, Einstein did turn out to be right. I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong.
And Einstein may not have been quite so foolhardy as he sounded...
Planning Fallacy
The Denver International Airport opened 16 months late, at a cost overrun of $2 billion (I've also seen $3.1 billion asserted). The Eurofighter Typhoon, a joint defense project of several European countries, was delivered 54 months late at a cost of £19 billion instead of £7 billion. The Sydney Opera House may be the most legendary construction overrun of all time, originally estimated to be completed in 1963 for $7 million, and finally completed in 1973 for $102 million.
Are these isolated disasters brought to our attention by selective availability? Are they symptoms of bureaucracy or government incentive failures? Yes, very probably. But there's also a corresponding cognitive bias, replicated in experiments with individual planners.
Buehler et al. (1995) asked their students for estimates of when they (the students) thought they would complete their personal academic projects. Specifically, the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their personal projects would be done. Would you care to guess how many students finished on or before their estimated 50%, 75%, and 99% probability levels?
Futuristic Predictions as Consumable Goods
The Wikipedia entry on Friedman Units tracks over 30 different cases between 2003 and 2007 in which someone labeled the "next six months" as the "critical period in Iraq". Apparently one of the worst offenders is journalist Thomas Friedman, after whom the unit was named (8 different predictions in 4 years). In similar news, some of my colleagues in Artificial Intelligence (you know who you are) have been predicting the spectacular success of their projects in "3-5 years" for as long as I've known them, that is, since at least 2000.
Why do futurists make the same mistaken predictions over and over? The same reason politicians abandon campaign promises and switch principles as expediency demands. Predictions, like promises, are sold today and consumed today. They produce a few chewy bites of delicious optimism or delicious horror, and then they're gone. If the tastiest prediction is allegedly about a time interval "3-5 years in the future" (for AI projects) or "6 months in the future" (for Iraq), then futurists will produce tasty predictions of that kind. They have no reason to change the formulation any more than Hershey has to change the composition of its chocolate bars. People won't remember the prediction in 6 months or 3-5 years, any more than chocolate sits around in your stomach for a year and keeps you full.
The futurists probably aren't even doing it deliberately; they themselves have long since digested their own predictions. Can you remember what you had for breakfast on April 9th, 2006? I bet you can't, and I bet you also can't remember what you predicted for "one year from now".
The Proper Use of Humility
It is widely recognized that good science requires some kind of humility. What sort of humility is more controversial.
Consider the creationist who says: "But who can really know whether evolution is correct? It is just a theory. You should be more humble and open-minded." Is this humility? The creationist practices a very selective underconfidence, refusing to integrate massive weights of evidence in favor of a conclusion he finds uncomfortable. I would say that whether you call this "humility" or not, it is the wrong step in the dance.