If you're a human and you want to have correct beliefs, you must make a special effort to seek evidence that your beliefs are wrong. One of our known defects is our tendency to stick with our beliefs for too long. But if you do this successfully, you will become less certain and therefore less determined.
Normatively, seeking disconfirmation and not finding it should make you more certain. And if you do become less certain, I'm not convinced this necessarily makes you less determined – why couldn't it heighten your curiosity, or (especially if you have...
Where do you find in that link the suggestion that rationalists should be less confident?
One who sees that people generally overestimate themselves, and responds by downgrading their own self-confidence, imitates the outward form of the art without the substance.
One who seeks only to destroy their beliefs practices only half the art.
The rationalist is precisely as confident as the evidence warrants. But if he has too little evidence to vanquish his priors, he does not sit content with precisely calibrated ignorance. If the issue matters to him, he must s...
I'm not aware of studies showing that those in the upper 10% overestimate their abilities. Anyone trying to increase their rationality is probably in the upper 10% already.
My recollection is that at least one study showed some regression to the mean in confidence -- highly-skilled people tended to underestimate themselves.
Who do you think is going to be more motivated to think about math: someone who feels it is their duty to become smarter, or a naive student who believes he or she has the answer to some mathematical problem and is only lacking a proof?
Or, how about the student who believes they may have the answer, and has a burning itch to know whether this is the case? Or the one with something to protect?
(Really, though, it's going to be the one with something to protect.)
I'd say the benefits have to outweigh the costs. If you succeed in achieving your goal despite holding a significant number of false beliefs relevant to this goal, it means you got lucky: Your success wasn't caused by your decisions, but by circumstances that just happened to be right.
That the human brain is wired in such a way that self-deception gives us an advantage in some situations may tip the balance a little bit, but it doesn't change the fact that luck only favors us a small fraction of the time, by definition.
OK, I see you don't believe me that you should sometimes accept and sometimes reject epistemic rationality for a price. So here's a simple mathematical model:
Let's say agent A accepts the offer of increased epistemic rationality for a price, and agent N does not. Let P be the probability that A will decide differently than N, let F(X) be the expected value of N's original course of action when taken by agent X, and let S(A) be the expected value of the course of action that A might switch to instead. If there is a cost C associated with becoming agent A, then agent N should become agent A if and only if
(1 - P) F(A) + P S(A) - C >= F(N)
The left side of the inequality is not bigger than the right side "by definition"; it depends on the circumstances. Eliezer's dessert-ordering example is a situation where the above inequality does not hold.
If you complain that agent N can't possibly know all the variables in the inequality, then I agree with you. He will be estimating them somewhat poorly. However, that complaint in no way supports the view that the left side is in fact bigger. Someone once said that "anything you need to quantify can be measured in some way that is superior to not measuring it at all." Just as the difficulty of measuring utility is not a valid objection to utilitarianism, the difficulty of guessing what a better-informed self would do is not a valid objection to using this inequality.
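The decision rule above is easy to sketch numerically. Here's a minimal Python illustration; the function name and the example numbers are my own assumptions, not from the original model, and are chosen only to show one case where the inequality holds and one where it doesn't:

```python
def should_become_A(F_A, F_N, S_A, P, C):
    """Decision rule: agent N should pay cost C to become the
    better-informed agent A iff the expected value of doing so
    is at least the expected value of staying as N.

    F_A: expected value of N's original action when taken by A
    F_N: expected value of N's original action when taken by N
    S_A: expected value of the action A might switch to instead
    P:   probability that A decides differently than N
    C:   cost of becoming A
    """
    return (1 - P) * F_A + P * S_A - C >= F_N

# Illustrative numbers (assumptions): the extra information rarely
# changes the decision (P = 0.1), and switching is only slightly better.
print(should_become_A(F_A=10, F_N=10, S_A=12, P=0.1, C=0.5))  # False: price exceeds the expected gain of 0.2
print(should_become_A(F_A=10, F_N=10, S_A=12, P=0.1, C=0.1))  # True: expected gain of 0.2 exceeds the price
```

The second call differs from the first only in C, which makes the point of the model concrete: whether to accept the offer turns entirely on how the cost compares with the (probability-weighted) value of deciding differently.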
that luck only favors us a small fraction of the time, by definition.
That's a funny definition of "luck" you're using.
Yes, the right side can be bigger, and occasionally it will be. If you get lucky.
If the information that N chooses to remain ignorant of happens to be of little relevance to any decision N will take in the future, and if his self-deception allows him to be more confident than he would have been otherwise, and if this increased confidence grants him a significant advantage, then the right side of the equation will be bigger than the left side.
That's a funny definition of "luck" you're using.
It is? Why do you think people are pleasantly surprised when they get lucky, if not because it's a rare occurrence?
Consider the problem of an agent who is offered a chance to improve their epistemic rationality for a price. What is such an agent's optimal strategy?
A complete answer to this problem would involve a mathematical model to estimate the expected increase in utility associated with having more correct beliefs. I don't have a complete answer, but I'm pretty sure about one thing: From an instrumental rationalist's point of view, to always accept or always refuse such offers is downright irrational.
And now for the kicker: You might be such an agent.
One technique that humans can use to work towards epistemic rationality is to doubt themselves, since most people think they are above average in a wide variety of areas (and it's reasonable to assume that merit in at least some of these areas is normally distributed). But having a negative explanatory style, which is one way to doubt yourself, has been linked with sickness and depression.
And the converse also holds: humans seem to be rewarded for a certain set of beliefs, namely those that help them maintain a somewhat-good assessment of themselves. Having an optimistic explanatory style (in a nutshell, explaining good events in a way that makes you feel good, and explaining bad events in a way that doesn't make you feel bad) has been linked with success in sports, sales, and school.
If you're unswayed by my empirical arguments, here's a theoretical one. If you're a human and you want to have correct beliefs, you must make a special effort to seek evidence that your beliefs are wrong. One of our known defects is our tendency to stick with our beliefs for too long. But if you do this successfully, you will become less certain and therefore less determined.
In some circumstances, it's good to be less determined. But in others, it's not. And to say that one should always look for disconfirming evidence, or that one should always avoid looking for disconfirming evidence, is ideological according to the instrumental rationalist.
Who do you think is going to be more motivated to think about math: someone who feels it is their duty to become smarter, or a naive student who believes he or she has the answer to some mathematical problem and is only lacking a proof?
You rarely see a self-help book, entrepreneurship guide, or personal development blog telling people how to be less confident. But that's what an advocate of rationalism does. The question is, do the benefits outweigh the costs?