I wonder if the decline of apprenticeships has made overconfidence and underconfidence more common and more severe.
I'm not a history expert, but it seems to me that a blacksmith's apprentice 700 years ago wouldn't have had to worry about over/underconfidence in his skill. (Gender-neutral pronouns intentionally not used here!) He would have known exactly how skilled he was by comparing himself to his master every day, and his master's skill would have been a known quantity, since his master had been accepted by a guild of mutually recognized masters.
Nowadays, because of several factors, calibrating your judgement of your skill seems to be a lot harder. Our education system is completely different, and regardless of whatever else it does, it doesn't seem to be very good at providing its students with reliable feedback, or at getting them to properly understand the importance of that feedback and respond accordingly. Our blacksmith's apprentice (let's call him John) knows when he's screwed up - the sword or whatever he's made breaks, or his master points out how it's flawed. And John knows why this is important - if he doesn't fix the problem, he's not going to be able to earn a living.
Whereas a mode...
A friend of mine, normal in most ways, has exceptionally good mental imagery. One time she visited my house and saw a somewhat complex 3-piece metalwork puzzle in my living room, thought about it later that evening after she had left, and was able to solve it within moments of picking it up when she visited a second time. At first I was amazed at this, but I soon became more amazed that she didn't find this odd, and that no one had ever realized she had any particular affinity for this kind of thing in all the time she'd been in school. I'm curious how many cognitive skills like this there are to excel at, and whether many people are actually particularly good at one or more of them without realizing it, due to a lack of good tests for various kinds of cognition.
My usual self-testing example is something like "can I write this program correctly on the very first try?". That's a hard challenge, integrated into my everyday work.
I should try to remember to try this the next time I have a short piece of code to write. Furthermore, it's the sort of thing that makes me slightly uncomfortable and is therefore easy to forget, so I should try harder to remember it.
In general, this sort of thing seems like a very useful technique if you can do it without endangering your work. Modded parent up.
it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that
I urge you to prepare properly. Not only Hitchens but Richard Carrier and several other atheists have been humiliated in debate with him, by their own admission. Winning at all is challenge enough, and would be a great service to the world. Given how much of a blow you would find it to lose having fully prepared, I urge you to reconsider whether you're self-handicapping.
Scientists are frequently advised to never participate in a live debate with a creationist. This is because being right has absolutely nothing to do with winning.
"Debating creationists on the topic of evolution is rather like playing chess with a pigeon - it knocks the pieces over, craps on the board, and flies back to its flock to claim victory." -- Scott D. Weitzenhoffer
Debates are not a rationality competition. They're a Dark Arts competition, in which the goal is to use whatever underhanded trick you can come up with in order to convince somebody to side with you. Evidence doesn't matter, because it's trivial to simply lie your ass off and get away with it.
The only kind of debates worth having are written debates, in which, when someone tells a blatant lie, you can look up the truth somewhere and take all the space you need to explain why it's a lie - and "cite your sources, or you forfeit" is a reasonable rule.
Indeed. Association fallacy. Eliezer might not think much of his loss, but it would still be seen by people as a loss for "the atheists" and a victory for "the theists". Debate to win!
Who is this theist? I'm interested in watching these debates. (though obviously without knowledge of the specific case, I agree with ciphergoth. It's not just about you, it's about whoever's watching.)
In particular, I still don't have a counter to the fine-tuning argument which is short, assumes no foreknowledge, and is entirely intellectually honest.
The "fine-tuning" argument falls into the script:
If you accept that script you lose the debate, because there will always be some odd fact that can't currently be explained. (And even if it can actually be explained, you won't have time to explain it within the limits of the debate and the audience's knowledge.)
The trap is that it is a very tempting mistake to try to solve the puzzle yourself. It's highly unlikely that you will succeed, and your opponent will already know the flaws in (and counter-arguments to) most of the existing solution attempts, so he can throw those at you. Or if you support a fringe theory (which isn't generally considered in the solution space, but might work), the opponent can portray you as a marginal loon.
I suspect that the theist wins these debates because most opponents fall into that trap. They are smart enough that they think that they can resolve the puzzle in question, and so walk right into...
Unfair debate proposal
You want a debate in which the tables are tilted against you? I see a way to do that which doesn't carry the risks of your current proposal.
A bunch of us get together on an IRC channel and agree to debate you. We thrash out our initial serve; we then spring the topic and our initial serve on you. You must counter immediately, with no time to prepare. We then go away and mull over your counter, and agree a response, which you must again immediately respond to.
We can give ourselves more speaking time than you in each exchange, too, if you want to tilt the tables further (I'm imagining the actual serves and responses being delivered as video).
I've found some of the characterizations of Craig's arguments and debate style baffling.
When he debates the existence of god, he always delivers the same five arguments (technically, it's four: his fifth claim is that god can be known directly, independently of any argument). He develops these arguments as carefully as time allows, and defends each of his premises. He uses the kalam cosmological argument, the fine tuning argument, the moral argument, and the argument from the resurrection of Jesus. This can hardly be characterized as dumping.
Also, his arguments are logically valid; you won't see any 'brain teaser, therefore god!' moves from him. He's not only a 'theologian'; he's a trained philosopher (he actually has two earned PhDs, one in philosophy and one in theology).
Finally, Craig is at his best when it comes to his responses. He is extremely quick, and is very adept at both responding to criticisms of his arguments, and at taking his opponent's arguments apart.
Debating William Lane Craig on the topic of god's existence without preparation would be as ill-advised as taking on a well-trained UFC fighter in the octagon without preparation. To extend the analogy further, it would be like thinking it's a good idea because you've won a couple of street fights and want to test yourself.
It sounds as though you're viewing the debate as a chance to test your own abilities at improvisational performance. That's the wrong goal. Your goal should be to win.
"The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him."
By increasing the challenge the way you suggest, you may very well be acting rationally toward the goal of testing yourself, but you're not doing all you can to cut the opponent. To rationally pursue winning the debate, there's no excuse for not doing your research.
In choosing not to try for that, you'll end up sending the message that rationalists don't play to win. You and I know this isn't quite accurate -- what you're doing is more like a rationalist choosing to lose a board game, because that served some other, real purpose of his -- but that is still how it will come across. Do you consider this to be acceptable?
Cultivate a habit of confronting challenges - not the ones that can kill you outright, perhaps, but perhaps ones that can potentially humiliate you.
You may be interested to learn that high-end mountaineers apply exactly the strategy you describe to challenges that might kill them outright. Mick Fowler even states it explicitly in his autobiography - "success every time implies that one's objectives are not challenging enough".
A large part of mountaineering appears to be about identifying the precise point where your situation will become unrecoverable, and then backing off just before you reach it. On the other hand, sometimes you just get unlucky.
A slogan I like is that failure is OK, so long as you don't stop trying to avoid it.
While reading this post, a connection with Beware of Other-Optimizing clicked in my mind. Different aspiring rationalists are (more) susceptible to different failure modes. From Eliezer's previous writings it had generally seemed like he was more worried about the problem of standards (for oneself) that are too low -- that is, not being afraid enough of failure -- than about the opposite error, standards that are too high. But I suspect that's largely specific to him; other...
And so I wrote at once to the Bloggingheads folks and asked if they could arrange a debate. This seemed like someone I wanted to test myself against. Also, it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that, because I think I should be able to handle damn near anything on the fly, and I desire to learn whether this thought is correct; and I am willing to risk public humiliation to find out.
This really bothers me, because you weren't just risking your own public humiliation; you were risking our public humiliation. You were endangering an important cause for your personal benefit.
There are three great besetting sins of rationalists in particular, and the third of these is underconfidence.
Were we ever told the other two?
Yes, by Jeffreyssai:
..."Three flaws above all are common among the beisutsukai. The first flaw is to look just the slightest bit harder for flaws in arguments whose conclusions you would rather not accept. If you cannot contain this aspect of yourself then every flaw you know how to detect will make you that much stupider. This is the challenge which determines whether you possess the art or its opposite: Intelligence, to be useful, must be used for something other than defeating itself."
"The second flaw is cleverness. To invent great complicated plans and great complicated theories and great complicated arguments - or even, perhaps, plans and theories and arguments which are commended too much by their elegance and too little by their realism. There is a widespread saying which runs: 'The vulnerability of the beisutsukai is well-known; they are prone to be too clever.' Your enemies will know this saying, if they know you for a beisutsukai, so you had best remember it also. And you may think to yourself: 'But if I could never try anything clever or elegant, would my life even be worth living?' This is why cleverness is still our chief vulnerability even aft
gjm asks wisely:
What would you think of a musician who decided to give a public performance without so much as looking at the piece she was going to play? Would you not be inclined to say: "It's all very well to test yourself, but please do it in private"?
The central thrust of Eliezer's post is a true and important elaboration of his concept of improper humility, but doesn't it overlook a clear and simple political reality? There are reputational effects to public failure. It seems clear that those reputational effects often outweigh whatev...
We have lots of experimental data showing overconfidence; what experimental data show a consistent underconfidence, in a way that a person could use that data to correct their error? This would be a lot more persuasive to me than the mere hypothetical possibility of underconfidence.
I skimmed several debates with WLC yesterday, referenced here. His arguments are largely based on one and the same scheme:
(Or something like this; step 3 is a bit more subtle than I made it out to be.) What's remarkable is that even though he uses a nontrivial number of paradoxes for step 2, almost all of them were explicitly explained in the mater...
Many of WLC's arguments have this rough structure:
That's why I think that in order to debate him you have to explicitly challenge the idea that God could ever be a good answer to anything; otherwise, you disappear down the rabbit hole of trying to straighten out the philosophical confusions of your audience.
You shouldn't, because even though when you speak the word "God" you simply intend "placeholder for whatever eventually solves the creation conundrum," it will be heard as meaning "that being to which I was taught to pray when I was a child" -- whether you like it or not, your listener will attach the fully-formed God-concept to your use of the word.
There is a children's puzzle which consists of 15 numbered square blocks arranged in a frame large enough to hold 16, four by four, leaving one empty space. You can't take the blocks out of the frame. You can only slide a block into the empty space from an adjacent position. The puzzle is to bring the blocks into some particular arrangement.
The mathematics of which arrangements are accessible from which others is not important here. The key thing is that no matter how you move the blocks around, there is always an empty space. Wherever the space is, you ca...
This post reminds me of Aristotle's heuristics for approaching the mean when one tends towards the extremes:
"That moral virtue is a mean, then, and in what sense it is so, and that it is a mean between two vices, the one involving excess, the other deficiency, and that it is such because its character is to aim at what is intermediate in passions and in actions, has been sufficiently stated. Hence also it is no easy task to be good. For in everything it is no easy task to find the middle, e.g. to find the middle of a circle is not for every one but fo...
Did you write down a cost function for the various debate outcomes? The skew will tell you how differently overconfidence and underconfidence should be weighted.
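For concreteness, here is a minimal sketch of the kind of back-of-the-envelope calculation I mean, in Python. Every outcome, utility, and probability below is a made-up placeholder, not anyone's actual estimate; the point is only that when the cost function is skewed, the asymmetry, rather than the raw win probability, drives the decision.

```python
# Hypothetical cost function over debate outcomes.
# All utilities and probabilities are illustrative placeholders.
outcomes = {
    # outcome: (utility, P(outcome | prepare), P(outcome | wing it))
    "clear win":        (+10, 0.40, 0.15),
    "draw":             ( +2, 0.35, 0.25),
    "loss":             ( -8, 0.20, 0.40),
    "humiliating loss": (-30, 0.05, 0.20),
}

def expected_utility(column):
    """column 1 = prepare, column 2 = wing it (see the tuple layout above)."""
    return sum(vals[0] * vals[column] for vals in outcomes.values())

print("prepare:", round(expected_utility(1), 2))  # 1.6 with these placeholder numbers
print("wing it:", round(expected_utility(2), 2))  # -7.2 with these placeholder numbers
```

Because a humiliating public loss is assigned a much larger magnitude than any win, a modest shift in the probabilities swamps everything else, which is exactly the skew the question is pointing at.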
Eliezer, does your respect for Aumann's theorem incline you to reconsider, given how many commenters think you should thoroughly prepare for this debate?
What is the danger of overconfidence?
Passing up opportunities. Not doing things you could have done, but didn't try (hard enough).
Did you mean "danger of underconfidence"?
Can anyone give some examples of being underconfident, that happened as a result of overcorrecting for overconfidence?
Eliezer said:
So if you have learned a thousand ways that humans fall into error and read a hundred experimental results in which anonymous subjects are humiliated of their overconfidence - heck, even if you've just read a couple of dozen - and you don't know exactly how overconfident you are - then yes, you might genuinely be in danger of nudging yourself a step too far down.
I also observed this phenomenon of debiasing being over-emphasized in discussions of rationality, while "heuristic" is treated as a bad word. I tried to get at the problem of passing...
Playing tic-tac-toe against a three-year-old for the fate of the world would actually be a really harrowing experience. The space of possible moves is small enough that he's reasonably likely to force a draw just by acting randomly.
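If you want to check "reasonably likely" rather than eyeball it, a rough Monte Carlo simulation will do. The Python sketch below is my own illustration, not anything from the thread: it pits a minimax player (who never loses, and converts any win the opponent's blunders open up) against a uniformly random opponent, and counts how often the random player stumbles into a draw. The minimax player breaks ties between equally valued moves arbitrarily rather than exploiting the opponent's randomness, so treat the exact draw rate as a rough figure.

```python
# Rough Monte Carlo check of the "random toddler can force a draw" claim.
import random
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, to_move):
    """Minimax value from X's point of view: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    moves = [i for i, c in enumerate(board) if c == "."]
    if not moves:
        return 0
    nxt = "O" if to_move == "X" else "X"
    vals = [value(board[:m] + to_move + board[m + 1:], nxt) for m in moves]
    return max(vals) if to_move == "X" else min(vals)

def best_move(board, to_move):
    moves = [i for i, c in enumerate(board) if c == "."]
    nxt = "O" if to_move == "X" else "X"
    key = lambda m: value(board[:m] + to_move + board[m + 1:], nxt)
    return max(moves, key=key) if to_move == "X" else min(moves, key=key)

def play_once():
    """Perfect X moves first against a uniformly random O; returns 'X', 'O', or 'draw'."""
    board, to_move = "." * 9, "X"
    while True:
        w = winner(board)
        if w is not None:
            return w
        moves = [i for i, c in enumerate(board) if c == "."]
        if not moves:
            return "draw"
        m = best_move(board, "X") if to_move == "X" else random.choice(moves)
        board = board[:m] + to_move + board[m + 1:]
        to_move = "O" if to_move == "X" else "X"

random.seed(0)
results = [play_once() for _ in range(5000)]
print({r: results.count(r) / len(results) for r in ("X", "draw", "O")})
```

Whatever exact numbers come out, the underlying point survives: tic-tac-toe is a draw under perfect play, so the stronger player can never guarantee a win, only avoid a loss, and a random opponent will sometimes blunder onto a drawing line.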
This post reminds me of the phrase "cognitive hyper-humility," used by Ben Kovitz's Sophistry Wiki:
...Demand for justification before making a move. Of course, this is not always sophistry. In some special areas of life, such as courtroom trials, we demand that a "burden" of specific kinds of evidence be met as a precondition for taking some action. Sophistry tends to extend this need for justification far beyond the areas where it's feasible and useful. Skeptical sophistry tends to push a sort of cognitive hyper-humility, or freezing ou
If there was a message I could send back to my younger self this would be it. Plus that if it's hard, don't try to make it easier, just keep in mind that it's important. (By younger self, I mean 7-34 years old.)
IHAPMOE, but the post seems to assume that a person's "rationality" is a float rather than a vector. If you're going to try to calibrate your "rationality", you'd better try to figure out what the different categories of rationality problems there are, and how well rationality on one category of problems correlates with rationality on other categories. Otherwise you'll end up doing something like having greater confidence in your ethical judgements because you do well at sudoku.
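As a toy illustration of the "vector, not float" point (my own sketch, with made-up categories, people, and numbers, not data from anywhere): score yourself, or several people, on different kinds of problems, and look at how well the categories actually track each other before letting success in one inflate confidence in another.

```python
# Toy illustration: "rationality" as a vector of per-category scores, not a single float.
# All names and numbers are made-up placeholders for hypothetical test-takers.
import statistics

people = {
    "A": {"sudoku": 0.95, "calibration": 0.60, "ethical judgement": 0.40},
    "B": {"sudoku": 0.50, "calibration": 0.80, "ethical judgement": 0.75},
    "C": {"sudoku": 0.85, "calibration": 0.55, "ethical judgement": 0.50},
    "D": {"sudoku": 0.40, "calibration": 0.70, "ethical judgement": 0.85},
    "E": {"sudoku": 0.75, "calibration": 0.65, "ethical judgement": 0.55},
}

def correlation(xs, ys):
    """Pearson correlation of two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

cats = ["sudoku", "calibration", "ethical judgement"]
for i, a in enumerate(cats):
    for b in cats[i + 1:]:
        xs = [people[p][a] for p in people]
        ys = [people[p][b] for p in people]
        print(f"{a} vs {b}: r = {correlation(xs, ys):.2f}")
```

If sudoku scores turn out to correlate only weakly with ethical judgement on whatever tests you can devise, then doing well at sudoku should move your confidence in your ethical judgement very little.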
Does the fact that I find this guy's formulation of the cosmological argument somewhat persuasive mean that I can't hang out with the cool kids anymore? I'm not saying it is an airtight argument, just that it isn't obviously meaningless or ridiculous metaphysics.
Slightly off-topic:
I don't know if it would be possible to arrange either of them, but there are two debates I'd love to see Eliezer in:
A debate with Amanda Marcotte on evolutionary psychology
and
A debate with Alonzo Fyfe on meta-ethics.
This seems like a reflection of a general problem people have, the problem of not getting things done - more specifically, the problem of not getting things done by convincing yourself not to do them.
It's so much easier to NOT do things than do them, so we're constantly on the lookout for ways not to do them. Of course, we feel bad if we simply don't do them, so we first have to come up with elaborate reasons why it's ok - "I'll have plenty of time to do it later", "There's too much uncertainty", "I already got a lot of work done to...
Eliezer should write a self-help book! Blog posts like the above are very inspiring to this perennial intellectual slacker and general underachiever (meaning: me).
I certainly can relate to this part:
"It doesn't seem worthwhile any more, to go on trying to fix one thing when there are a dozen other things that will still be wrong...
There's not enough hope of triumph to inspire you to try hard..."
Overconfidence is usually costlier than underconfidence. The cost to become completely accurate is often greater than the benefit of being slightly-inaccurate-but-close-enough.
When these two principles are taken into account, underconfidence becomes an excellent strategy. It also leaves potential in reserve in case of emergencies. And since being accurately confident tends to let others know what you can do, it's often desirable to create a false appearance.
A typo in the article: "What is the danger of overconfidence?" -> "What is the danger of underconfidence?"
...the third of these is underconfidence. Michael Vassar regularly accuses me of this sin, which makes him unique among the entire population of the Earth.
Well, that sure is odd. Guess that's why Vassar was promoted. It makes sense now.
Anyway, EY's history doesn't seem to me marked by much underconfidence. For example, his name has recently been used in vain at this silly blog, where they're dredging up all sorts of amusing material that seems to support the opposite conclusion.
Since I know EY has guru status around here, please don't jump down my throat...
There are three great besetting sins of rationalists in particular, and the third of these is underconfidence. Michael Vassar regularly accuses me of this sin, which makes him unique among the entire population of the Earth.
But he's actually quite right to worry, and I worry too, and any adept rationalist will probably spend a fair amount of time worrying about it. When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result. That's what makes a lot of cognitive subtasks so troublesome—you know you're biased but you're not sure how much, and you don't know if you're correcting enough—and so perhaps you ought to correct a little more, and then a little more, but is that enough? Or have you, perhaps, far overshot? Are you now perhaps worse off than if you hadn't tried any correction?
You contemplate the matter, feeling more and more lost, and the very task of estimation begins to feel increasingly futile...
And when it comes to the particular questions of confidence, overconfidence, and underconfidence—being interpreted now in the broader sense, not just calibrated confidence intervals—then there is a natural tendency to cast overconfidence as the sin of pride, out of that other list which never warned against the improper use of humility or the abuse of doubt. To place yourself too high—to overreach your proper place—to think too much of yourself—to put yourself forward—to put down your fellows by implicit comparison—and the consequences of humiliation and being cast down, perhaps publicly—are these not loathsome and fearsome things?
To be too modest—seems lighter by comparison; it wouldn't be so humiliating to be called on it publicly, indeed, finding out that you're better than you imagined might come as a warm surprise; and to put yourself down, and others implicitly above, has a positive tinge of niceness about it, it's the sort of thing that Gandalf would do.
So if you have learned a thousand ways that humans fall into error and read a hundred experimental results in which anonymous subjects are humiliated of their overconfidence—heck, even if you've just read a couple of dozen—and you don't know exactly how overconfident you are—then yes, you might genuinely be in danger of nudging yourself a step too far down.
I have no perfect formula to give you that will counteract this. But I have an item or two of advice.
What is the danger of underconfidence?
Passing up opportunities. Not doing things you could have done, but didn't try (hard enough).
So here's a first item of advice: If there's a way to find out how good you are, the thing to do is test it. A hypothesis affords testing; hypotheses about your own abilities likewise. Once upon a time it seemed to me that I ought to be able to win at the AI-Box Experiment; and it seemed like a very doubtful and hubristic thought; so I tested it. Then later it seemed to me that I might be able to win even with large sums of money at stake, and I tested that, but I only won 1 time out of 3. So that was the limit of my ability at that time, and it was not necessary to argue myself upward or downward, because I could just test it.
One of the chief ways that smart people end up stupid, is by getting so used to winning that they stick to places where they know they can win—meaning that they never stretch their abilities, they never try anything difficult.
It is said that this is linked to defining yourself in terms of your "intelligence" rather than "effort", because then winning easily is a sign of your "intelligence", whereas failing on a hard problem could have been interpreted in terms of a good effort.
Now, I am not quite sure this is how an adept rationalist should think about these things: rationality is systematized winning and trying to try seems like a path to failure. I would put it this way: A hypothesis affords testing! If you don't know whether you'll win on a hard problem—then challenge your rationality to discover your current level. I don't usually hold with congratulating yourself on having tried—it seems like a bad mental habit to me—but surely not trying is even worse. If you have cultivated a general habit of confronting challenges, and won on at least some of them, then you may, perhaps, think to yourself "I did keep up my habit of confronting challenges, and will do so next time as well". You may also think to yourself "I have gained valuable information about my current level and where I need improvement", so long as you properly complete the thought, "I shall try not to gain this same valuable information again next time".
If you win every time, it means you aren't stretching yourself enough. But you should seriously try to win every time. And if you console yourself too much for failure, you lose your winning spirit and become a scrub.
When I try to imagine what a fictional master of the Competitive Conspiracy would say about this, it comes out something like: "It's not okay to lose. But the hurt of losing is not something so scary that you should flee the challenge for fear of it. It's not so scary that you have to carefully avoid feeling it, or refuse to admit that you lost and lost hard. Losing is supposed to hurt. If it didn't hurt you wouldn't be a Competitor. And there's no Competitor who never knows the pain of losing. Now get out there and win."
Cultivate a habit of confronting challenges—not the ones that can kill you outright, perhaps, but perhaps ones that can potentially humiliate you. I recently read of a certain theist that he had defeated Christopher Hitchens in a debate (severely so; this was said by atheists). And so I wrote at once to the Bloggingheads folks and asked if they could arrange a debate. This seemed like someone I wanted to test myself against. Also, it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that, because I think I should be able to handle damn near anything on the fly, and I desire to learn whether this thought is correct; and I am willing to risk public humiliation to find out. Note that this is not self-handicapping in the classic sense—if the debate is indeed arranged (I haven't yet heard back), and I do not prepare, and I fail, then I do lose those stakes of myself that I have put up; I gain information about my limits; I have not given myself anything I consider an excuse for losing.
Of course this is only a way to think when you really are confronting a challenge just to test yourself, and not because you have to win at any cost. In that case you make everything as easy for yourself as possible. To do otherwise would be spectacular overconfidence, even if you're playing tic-tac-toe against a three-year-old.
A subtler form of underconfidence is losing your forward momentum—amid all the things you realize that humans are doing wrong, that you used to be doing wrong, of which you are probably still doing some wrong. You become timid; you question yourself but don't answer the self-questions and move on; when you hypothesize your own inability you do not put that hypothesis to the test.
Perhaps without there ever being a watershed moment when you deliberately, self-visibly decide not to try at some particular test... you just.... slow..... down......
It doesn't seem worthwhile any more, to go on trying to fix one thing when there are a dozen other things that will still be wrong...
There's not enough hope of triumph to inspire you to try hard...
When you consider doing any new thing, a dozen questions about your ability at once leap into your mind, and it does not occur to you that you could answer the questions by testing yourself...
And having read so much wisdom of human flaws, it seems that the course of wisdom is ever doubting (never resolving doubts), ever the humility of refusal (never the humility of preparation), and just generally, that it is wise to say worse and worse things about human abilities, to pass into feel-good feel-bad cynicism.
And so my last piece of advice is another perspective from which to view the problem—by which to judge any potential habit of thought you might adopt—and that is to ask:
Does this way of thinking make me stronger, or weaker? Really truly?
I have previously spoken of the danger of reasonableness—the reasonable-sounding argument that we should two-box on Newcomb's problem, the reasonable-sounding argument that we can't know anything due to the problem of induction, the reasonable-sounding argument that we will be better off on average if we always adopt the majority belief, and other such impediments to the Way. "Does it win?" is one question you could ask to get an alternate perspective. Another, slightly different perspective is to ask, "Does this way of thinking make me stronger, or weaker?" Does constantly reminding yourself to doubt everything make you stronger, or weaker? Does never resolving or decreasing those doubts make you stronger, or weaker? Does undergoing a deliberate crisis of faith in the face of uncertainty make you stronger, or weaker? Does answering every objection with a humble confession of your fallibility make you stronger, or weaker?
Are your current attempts to compensate for possible overconfidence making you stronger, or weaker? Hint: If you are taking more precautions, more scrupulously trying to test yourself, asking friends for advice, working your way up to big things incrementally, or still failing sometimes but less often than you used to, you are probably getting stronger. If you are never failing, avoiding challenges, and feeling generally hopeless and dispirited, you are probably getting weaker.
I learned the first form of this rule at a very early age, when I was practicing for a certain math test, and found that my score was going down with each practice test I took, and noticed going over the answer sheet that I had been pencilling in the correct answers and erasing them. So I said to myself, "All right, this time I'm going to use the Force and act on instinct", and my score shot up to above what it had been in the beginning, and on the real test it was higher still. So that was how I learned that doubting yourself does not always make you stronger—especially if it interferes with your ability to be moved by good information, such as your math intuitions. (But I did need the test to tell me this!)
Underconfidence is not a unique sin of rationalists alone. But it is a particular danger into which the attempt to be rational can lead you. And it is a stopping mistake—an error which prevents you from gaining that further experience which would correct the error.
Because underconfidence actually does seem quite common among the aspiring rationalists I meet (though rather less common among rationalists who have become famous role models), I would indeed name it third among the three besetting sins of rationalists.