Confidence In Opinions, Intensity In Pursuit
On a scale of 1 to 100, how sure are you?
It's a good question to ask yourself from time to time about intensely held beliefs, especially when you're in a disagreement with someone smart.
Just putting a number on something is good. If you're in business, putting any number in the high 90s is dangerous and shouldn't happen too often.
Yet, you still have to aggressively and intensely pursue your plans.
You can be only 80% sure you're correct, and still intensely pursue a course of action.
Most people make a mistake here: they pursue intensely only the things they're highly certain will work.
But this is backwards. It's absolutely right to say "I'm only 80% sure that going and making a great talk to this group will help develop my business," and to still aggressively pursue giving a great talk.
The same is true of offering ridiculously, exceptionally good service. You can say, "I'm only 60% sure that doing this is going to lead to more customer loyalty... this might just be a time sink and cost more than it returns. But let's kill it on it, and find out."
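To see why this is reasonable, compare expected values. Here is a minimal sketch; all of the payoff numbers are invented for illustration:

```python
# Expected-value check: a 60%-confident bet can still be worth taking
# if the upside outweighs the downside. All figures are hypothetical.
p_success = 0.60        # your confidence that great service builds loyalty
gain_if_right = 50_000  # assumed payoff from increased customer loyalty
loss_if_wrong = 10_000  # assumed cost of the time sink if it fails

ev = p_success * gain_if_right - (1 - p_success) * loss_if_wrong
print(f"Expected value of pursuing it: ${ev:,.0f}")  # $26,000 -- worth doing
```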
You don't need to be highly confident to intensely pursue something.
In fact, intensely pursuing not-certain things seems to be how the world develops.
How to calibrate your political beliefs
So you're playing the credence game, and you’re getting a pretty good sense of which level of confidence to assign to your beliefs. Later, when you’re discussing politics, you wonder how you can calibrate your political beliefs as well (beliefs of the form "policy X will result in outcome Y"). Here there's no easy way to assess whether a belief is true or false, in contrast to the trivia questions in the credence game. Moreover, it’s very easy to become mindkilled by politics. What do you do?
In the credence game, you get direct feedback that allows you to learn about your internal proxies for credence, i.e., emotional and heuristic cues about how much to trust yourself. With political beliefs, however, there is no such feedback. One workaround would be to assign high confidence only to beliefs for which you have read n academic papers on the subject. For example, only assign 90% confidence if you've read ten academic papers.
To account for mindkilling, use a second criterion: assign high confidence only to beliefs for which you are ideologically Turing-capable (i.e., able to pass an ideological Turing test). As a proxy for an actual ideological Turing test, you should be able to accurately restate your opponent’s position, or be able to state the strongest counterargument to your position.
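As a toy illustration, the two criteria could be combined into a hard cap on the confidence you let yourself report. The ten-papers-for-90% threshold comes from above; the lower tiers are my own assumptions:

```python
def max_confidence(papers_read: int, passes_ideological_turing_test: bool) -> float:
    """Cap on the confidence you may assign to a political belief.

    The 10-papers / 90% threshold is from the post; the lower tiers
    are illustrative assumptions, not part of the original proposal.
    """
    if papers_read >= 10 and passes_ideological_turing_test:
        return 0.90
    if papers_read >= 3:
        return 0.70
    return 0.55  # barely above "no idea"

print(max_confidence(12, True))   # 0.9
print(max_confidence(12, False))  # 0.7 -- well-read but mindkilled
```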
In sum, to calibrate your political beliefs, only assign high confidence to beliefs which satisfy extremely demanding epistemic standards.
[LINK] How to calibrate your confidence intervals
In the book "How to Measure Anything", Douglas Hubbard presents a step-by-step method for calibrating your confidence intervals. He has tested it on hundreds of people, and reports that it can make 90% of people almost perfect estimators within half a day of training.
I've been told that the Less Wrong and CFAR communities are mostly not aware of this work, so given the importance of making good estimates to rationality, I thought it would be of interest.
(although note CFAR has developed its own games for training confidence interval calibration)
The main techniques to employ are:
Equivalent bet:
For each estimate, imagine that you are betting $1000 on the answer being within your 90% CI. Now compare this to betting $1000 on a spinner where 90% of the time you win and 10% of the time you lose. Would you prefer to take a spin? If so, your range is too narrow and you need to widen it. If you would rather bet on your own range, it is too wide and you need to narrow it. If you are indifferent between the two bets, then the range really is your 90% CI.
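A quick Monte Carlo sketch of the comparison. The spinner pays out 90% of the time by construction; the 75% hit rate for the estimator's own range is an assumed figure for someone who is overconfident:

```python
import random

def expected_winnings(p_win: float, stake: float = 1000, trials: int = 100_000) -> float:
    """Average payout of staking `stake` on an event with probability p_win."""
    total = sum(stake if random.random() < p_win else -stake for _ in range(trials))
    return total / trials

spinner = expected_winnings(0.90)  # the 90% spinner: EV = +$800
own_ci = expected_winnings(0.75)   # an overconfident "90%" CI: EV = +$500

print(f"spinner: ~${spinner:,.0f}, own range: ~${own_ci:,.0f}")
# Preferring the spinner is the tell: widen the range until indifferent.
```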
Absurdity Test:
Start with an absurdly large range, maybe from minus infinity to plus infinity, and then begin reducing it based upon things you know to be highly unlikely or even impossible.
Avoid Anchoring:
Anchoring occurs when you think of a single answer to the question and then add an error around this answer; this often leads to ranges which are too narrow. Using the absurdity test is a good way to counter problems brought on by anchoring; another is to change how you look at your 90% CI. For a 90% CI there is a 10% chance that the answer lies outside your estimate, and if you split this there is a 5% chance that the answer is above your upper bound and a 5% chance that the answer is below your lower bound. By treating each bound separately, rephrase the question to read ‘is there a 95% chance that the answer is above my lower bound?’. If the answer is no, then you need to increase or decrease the bound as required. You can then repeat this process for the other bound.
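The tail-splitting arithmetic, spelled out (a trivial restatement of the paragraph above):

```python
ci_level = 0.90
tail = (1 - ci_level) / 2            # 5% of outcomes below the lower bound
print(f"each tail: {tail:.2f}")      # 0.05 -- and 5% above the upper bound
print(f"one-sided: {1 - tail:.2f}")  # 0.95 -- each bound is a 95% one-sided claim
```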
Pros and cons:
Identify two pros and two cons for the range that you have given to help clarify your reasons for making this estimate.
Once you have used these techniques you can make another equivalent bet to check whether your new estimate is your 90% CI.
To train yourself, practice making estimates repeatedly while using these techniques, until you are calibrated: roughly 90% of the true values should fall within your stated 90% CIs.
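A sketch of how such a practice session might be scored, assuming you record each interval alongside the true answer (the data below is made up):

```python
def calibration(intervals, truths):
    """Fraction of true values inside the stated 90% CIs.
    A calibrated estimator should score close to 0.90 over many questions."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

# Hypothetical session: five 90% CIs and the actual answers.
intervals = [(1200, 2500), (10, 60), (0.3, 0.9), (1850, 1950), (5, 15)]
truths    = [1969,         42,       0.5,        1903,         30]

print(f"hit rate: {calibration(intervals, truths):.0%}")
# 80% -- below the 90% target, so the ranges are still too narrow.
```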
To read more and try sample questions, read the article we prepared on 80,000 Hours here.
[Link] You May Already Be Aware of Your Cognitive Biases
From the article:
Using an adaptation of the standard 'bat-and-ball' problem, the researchers explored this phenomenon. The typical 'bat-and-ball' problem is as follows: a bat and ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost? The intuitive answer that immediately springs to mind is 10 cents. However, the correct response is 5 cents.
The authors developed a control version of this problem, without the relative statement that triggers the substitution of a hard question for an easier one: A magazine and a banana together cost $2.90. The magazine costs $2. How much does the banana cost?
A total of 248 French university students were asked to solve each version of the problem. Once they had written down their answers, they were asked to indicate how confident they were that their answer was correct.
Only 21 percent of the participants managed to solve the standard problem (bat/ball) correctly. In contrast, the control version (magazine/banana) was solved correctly by 98 percent of the participants. In addition, those who gave the wrong answer to the standard problem were much less confident of their answer to the standard problem than they were of their answer to the control version. In other words, they were not completely oblivious to the questionable nature of their wrong answer.
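For the record, the arithmetic behind both answers (a trivial check; the code is not from the paper):

```python
# Bat-and-ball: bat + ball = 1.10 and bat = ball + 1.00,
# so 2 * ball + 1.00 = 1.10 and ball = 0.05, not the intuitive 0.10.
ball = (1.10 - 1.00) / 2
print(f"ball: ${ball:.2f}, bat: ${ball + 1.00:.2f}")  # $0.05 and $1.05

# Control version: no relative statement, so plain subtraction is correct.
banana = 2.90 - 2.00
print(f"banana: ${banana:.2f}")  # $0.90
```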
Article in Science Daily: http://www.sciencedaily.com/releases/2013/02/130219102202.htm
Original abstract (the rest is paywalled): http://link.springer.com/article/10.3758/s13423-013-0384-5
How confident should we be?
What should a rationalist do about confidence? Should he lean harder towards
- relentlessly psyching himself up to feel like he can do anything, or
- having true beliefs about his abilities in all areas, coldly predicting his likelihood of success in a given domain?
I don't want to falsely construe these as dichotomous. The real answer will probably dissolve 'confidence' into smaller parts and indicate which parts go where. So which parts of 'confidence' correctly belong in our models of the world (which must never be corrupted) or our motivational systems (which we may cut apart and put together however helps us achieve our goals)? Note that this follows the distinction between epistemic and instrumental rationality.
Eliezer offers a decision criterion in The Sin of Underconfidence:
Does this way of thinking make me stronger, or weaker? Really truly?
It makes us stronger to know when to lose hope already, and it makes us stronger to have the mental fortitude to kick our asses into shape so we can do the impossible. Lukeprog prescribes boosting optimism "by watching inspirational movies, reading inspirational biographies, and listening to motivational speakers." That probably makes you stronger too.
But I don't know what to do about saying 'I can do it' when the odds are against me. What do you do when you probably won't succeed, but believing that Heaven's army is at your back would increase your chances?
My default answer has always been to maximize confidence, but I acted this way long before I discovered rationality, and I've probably generated confidence for bad reasons as often as I have for good reasons. I'd like to have an answer that prescribes the right action, all of the time. I want to know when confidence steers me wrong, and when to stop increasing my confidence. I want the real answer, not the historically-generated heuristic.
I can't help but feel like I'm missing something basic here. What do you think?