Consider the problem of an agent who is offered a chance to improve their epistemic rationality for a price.  What is such an agent's optimal strategy?

A complete answer to this problem would involve a mathematical model to estimate the expected increase in utility associated with having more correct beliefs.  I don't have a complete answer, but I'm pretty sure about one thing: From an instrumental rationalist's point of view, to always accept or always refuse such offers is downright irrational.

And now for the kicker: You might be such an agent.

One technique that humans can use to work towards epistemic rationality is to doubt themselves: most people think they are above average in a wide variety of areas, and it's reasonable to assume that merit in at least some of these areas is roughly normally distributed, so many of those self-assessments must be too high.  But having a negative explanatory style, which is one way to doubt yourself, has been linked with sickness and depression.

And the converse also holds.  Humans seem to be rewarded for a certain set of beliefs: those that help them maintain a reasonably good assessment of themselves.  Having an optimistic explanatory style (in a nutshell, explaining good events in a way that makes you feel good, and explaining bad events in a way that doesn't make you feel bad) has been linked with success in sports, sales, and school.

If you're unswayed by my empirical arguments, here's a theoretical one.  If you're a human and you want to have correct beliefs, you must make a special effort to seek evidence that your beliefs are wrong.  One of our known defects is our tendency to stick with our beliefs for too long.  But if you do this successfully, you will become less certain and therefore less determined.

In some circumstances, it's good to be less determined.  But in others, it's not.  And to say that one should always look for disconfirming evidence, or that one should always avoid looking for disconfirming evidence, is ideological according to the instrumental rationalist.

Who do you think is going to be more motivated to think about math: someone who feels it is their duty to become smarter, or a naive student who believes he or she has the answer to some mathematical problem and is only lacking a proof?

You rarely see a self-help book, entrepreneurship guide, or personal development blog telling people how to be less confident.  But that is, in effect, what an advocate of epistemic rationality does.  The question is, do the benefits outweigh the costs?

If you're a human and you want to have correct beliefs, you must make a special effort to seek evidence that your beliefs are wrong. One of our known defects is our tendency to stick with our beliefs for too long. But if you do this successfully, you will become less certain and therefore less determined.

Normatively, seeking disconfirmation and not finding it should make you more certain. And if you do become less certain, I'm not convinced this necessarily makes you less determined – why couldn't it heighten your curiosity, or (especially if you have...

0John_Maxwell15y
I'm beginning to suspect that providing theoretical justifications for inherently irrational humans is a waste of time. All I can say is that my empirical evidence still holds, and I observe this in myself. Believing that I'm going to fail demoralizes me. It doesn't energize me. I'd love to have my emotions wired the way you describe.

The point is that the student benefits from believing something regardless of its truth value. The greater the extent to which they believe their idea is valid, the more thinking about math they'll do.

I'd love to hear your Third Alternative. It seems to happen occasionally.

Where do you find in that link the suggestion that rationalists should be less confident?

One who sees that people generally overestimate themselves, and responds by downgrading their own self-confidence, imitates the outward form of the art without the substance.

One who seeks only to destroy their beliefs practices only half the art.

The rationalist is precisely as confident as the evidence warrants. But if he has too little evidence to vanquish his priors, he does not sit content with precisely calibrated ignorance. If the issue matters to him, he must s...

2John_Maxwell15y
"Beware lest you become attached to beliefs you may not want." "Surrender to the truth as quickly as you can." Not necessarily. If there is no known way to correct for a bias, it makes sense to do the sort of gross correction I described. For example, if I know that I and my coworker underestimate how long my projects take but I'm not aware of any technique I can use to improve my estimates, I could start by asking my co-worker to do all the estimates and then multiply that estimate by two when telling my boss. Is there are known way of correcting for human overconfidence? If not, I think the sort of gross correction I describe makes sense from an epistemically rational point of view. Do you deny that believing you had the answer to a mathematical problem and only lacked a proof would be a powerful motivator to think about mathematics? I was once in this situation, and it certainly motivated me. The way I used "duty" has nothing to do with disliking a thing. To me, "duty" describes something that you feel you ought to do. It's just an inconvenient fact about human psychology that telling yourself that something's best makes it hard to do it. Being epistemically rational (figuring out what the best thing to do is, then compelling yourself to do it) often seems not to work for humans.
7Richard_Kennaway15y
If there is no known way to correct for a bias, the thing to do is to find one. Swerving an arbitrary amount in the right direction will not do -- reversed stupidity etc.

I once saw a poster in a chemist's shop bluntly asserting, "We all eat too much salt." What was I supposed to do about that? No matter how little salt I take in, or how far I reduce it, that poster would still be telling me the same thing. No, the thing to do, if I think it worth attending to, would be to find out my actual salt intake and what it should actually be. Then "surrender to the truth" and confidently do what the result of that enquiry tells me.

If someone finds it hard to do what they believe that they should and can, then their belief is mistaken, or at least incomplete. They have other reasons for not doing whatever it is, reasons that they are probably unaware of when they merely fret about what they ought to be doing. Compelling oneself is unnecessary when there is nothing to overcome. The root of indecision is conflict, not doubt; irrationality, not rationality.

Here's a quote about rationality in action from a short story recently mentioned on LW, a classic of SF that everyone with an interest in rationality should read. I find that a more convincing picture than one of supine doubt.
4John_Maxwell15y
Reversing stupidity is not the same thing as swerving an arbitrary amount in the right direction. And the amount is not arbitrary: like most of my belief changes, it is based on my intuition. This post by Robin Hanson springs to mind; see the last sentence before the edit.

Anyway, some positive thoughts I have about myself are obviously unwarranted. I'm currently in the habit of immediately doubting spontaneous positive thoughts (because of what I've read about overconfidence), but I'm beginning to suspect that my habit is self-destructive.

Well yes, of course, it's easier to do something if you believe you can. That's what I'm talking about--confidence (i.e. believing you can do something) is valuable. If there's no chance of the thing going wrong, then you're often best off being overconfident to attain this benefit. That's pretty much my point right there.

As for your Heinlein quote, I find it completely unrealistic. Either I am vastly overestimating myself as one of Heinlein's elite, I am a terrible judge of people because I put so many of them into his elite, or Heinlein is wrong. I find it ironic, however, that someone who read the quote would probably be pushed towards the state of mind I am advocating: I'm pretty sure 95% of those who read it put themselves somewhere in the upper echelons, and once in the upper echelons, they are free to estimate their ability highly and succeed as a result.
4Nick_Tarleton15y
Are you in the habit of immediately doubting negative thoughts as well? All emotionally-laden spontaneous cognitive content should be suspect. Also, when you correct an overly positive self-assessment, do you try to describe it as a growth opportunity? This violates no principles of rationality, and seems like it could mitigate the self-destruction. (See fixed vs. growth theories of intelligence.)
1Nick_Tarleton15y
AFAICT, this means to seek disconfirming evidence, and update if and when you find it. Nothing to do with confidence.
3John_Maxwell15y
Disconfirming evidence makes you less confident that your original beliefs were true.
1Nick_Tarleton15y
If you find it; though this is nitpicking, as the net effect usually will be as you say. Still, this is completely different from the unconditional injunction to be less confident that the post suggests.

How many kittens would you eat to gain 1 point of IQ?

I should eat them for free, since I already pay money to eat pigs.

9SoullessAutomaton15y
Assuming I don't have to kill and clean them myself, and that I am not emotionally attached to any of the animals in question:

If the value is not cumulative, the answer is likely zero, because of the social penalties of being known to eat animals categorized as "pets", "cute", and "babies". More than that, contingent on the ability to do so without public knowledge of such and depending on age; likely at most 200 or so, which assumes young animals and that I eat only the muscles and little else until finished (id est, the point at which the utility of a varied diet exceeds that of a point of IQ).

If the value is cumulative with an expected gain of around one point a year, roughly an average of around two pounds of food per day, however many individual animals that works out to be, id est, the point at which the utility of not gaining excess weight exceeds that of gaining IQ, a value which may vary with time.

I suspect this comment will go a long way toward convincing others of the accuracy of the first word of my user name...
3dclayh15y
In this crowd? I don't see why. Voluptatis avidus, Magis quam salutis; Mortuus in anima, Curam gero cutis. ("Greedy for pleasure more than for salvation, dead in soul, I look after my skin.")
3SoullessAutomaton15y
Oh, I do value virtue, to be sure; but I have gradually convinced myself to internalize the value of a moral calculus, and I accept that my judgments may not align with most people's instinctive emotional reactions.
7ialdabaoth10y
Given that 1 point of IQ is 1/15th of a standard deviation, a "point" of IQ isn't necessarily a consistent metric for cognitive function - depending on the shape of the actual ability distribution in the population used to norm the test, the actual performance delta between 125 and 130 may be VASTLY divergent from the performance delta between 145 and 150. I think we need a different shorthand word for "quantified boost in cognitive performance" than "points of IQ". Does anyone have any ideas?
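To illustrate the rank-based point: under the usual mean-100, SD-15 norming convention, a fixed 5-point gap corresponds to very different slices of the population depending on where it sits. Here is a minimal Python sketch (the particular gaps compared are arbitrary examples):

    from math import erf, sqrt

    def iq_percentile(iq, mean=100.0, sd=15.0):
        # Fraction of the norming population scoring below `iq`,
        # assuming scores are mapped onto a normal(100, 15) scale.
        z = (iq - mean) / sd
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    # The same 5-point gap covers very different population fractions:
    for lo, hi in [(100, 105), (125, 130), (145, 150)]:
        share = iq_percentile(hi) - iq_percentile(lo)
        print(f"{lo} -> {hi}: {share:.2%} of the norming population")
    # 100 -> 105: ~13.1%
    # 125 -> 130: ~2.5%
    # 145 -> 150: ~0.1%

Whether those unequal slices of the population correspond to equal increments of underlying performance is exactly the question the comment is raising.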
0Lumifer10y
This implies you have another metric for cognitive function which an IQ point does not match. What is that another metric?
4ialdabaoth10y
It implies no such thing; hence my asking for ideas rather than presenting them.

The only thing we know for certain is that, due to how IQ tests are measured and calibrated, there is no particular reason why they SHOULD represent an actual, consistent metric - they merely note where on the bell curve of values you are, not what actual value that point on the bell curve represents. (At core, of course, it simply represents "number of questions on a particular IQ test that you got right", and everyone agrees that that metric is measuring SOMETHING about intelligence, but it would be nice to have a more formal metric for "smartness" that actually has real-world consequences.)

ETA: I certainly have an intuitive idea for what "smartness" would mean as an actual quantifiable thing, which seems to have something to do with pattern-recognition / signal-extraction performance across a wide range of noisy media. This makes some sense to me, since IQ tests - especially the ones that attempt to avoid linguistic bias - typically involve pattern-matching and similar signal extraction/prediction tasks. So intuitively, I think intelligence will have units of Entropy per [Kolmogorov complexity x time], and any unit which measures "one average 100 IQ human" worth of Smartness will have some ungodly constant-of-conversion comparable to Avogadro's number.

NOTE 2: Like I said, this is an intuitive sense, which I have not done ANY formal processing on.
0Lumifer10y
Well, you need some framework. You said that IQ points are not "necessarily a consistent metric for cognitive function". First, what is "cognitive function" and how do you want to measure it? If you have no alternate metrics then how do you know IQ points are inconsistent and what do you compare them to?

The usual answer is that it is measuring the g factor, the unobserved general-intelligence capability. It was originally formulated as the first principal component of the results of a variety of IQ tests. It is quantifiable (by IQ points) and it does have real-world consequences.

I don't understand what that means.
2Nornagest10y
Saying that IQ measures g is like saying that flow through a mountain creek measures snowmelt. More of one generally means more of the other, but there's a bunch of fiddly little details (maybe someone's airlifting water onto a forest fire upstream, or filling their swimming pool) that add up to a substantial deviation -- and there are still a lot of unanswered questions about the way they relate to each other. In any case, g is more a statement about the correlations between domain skills than the causes of intelligence or the shape of the ability curve. The existence of a g factor tells you that you can probably teach music more easily to someone who's good at math, but it doesn't tell you what to look for in a CT scan, or whether working memory, say, will scale linearly or geometrically or in some other way with IQ; those are separate questions.
0Lumifer10y
g is an unobserved value, a scalar. It cannot say anything about "causes of intelligence" or shapes of curves. It doesn't aim to.
0Nornagest10y
g was observed as a correlation between test scores. That is by definition a scalar value, but we don't know exactly how the underlying mechanism works or how it can be modeled; we just know that it's not very domain-specific. It's the underlying mechanism, not the correlation value, that I was referring to in the grandparent, and I'm pretty sure it's what ialdabaoth is referring to as well.
2Lumifer10y
To be more precise, the existence of g was derived from observing the correlation of test scores. Moreover, g itself is not the correlation, it is the unobservable underlying factor which we assume to cause the correlation. It is still a scalar-valued characteristic of a person, not a mechanism.
2ialdabaoth10y
Absolutely, but +n g doesn't necessarily mean +m IQ for all (n,m).

Here's a place where my intuition's going to struggle to formulate good words for this. An intelligent system receives information (which has fundamental units of Entropy) and outputs a behavior. A "proper" quantitative measure of intelligence should be a simple function of how much Utility it can expect from its chosen behavior, on average, given an input with n bits of Entropy, and t seconds to crunch on those bits. Whether "Utility" is measured in units similar to Kolmogorov complexity is questionable, but that's what my naive intuition yanked out when grasping for units.

But the point is, whatever we actually choose to measure g in, the term "+1 g" should make sense, and should mean the same thing regardless of what our current g is. IQ, being merely a statistical fit onto a gaussian distribution, does NOT do that.
0Lumifer10y
This phrase implies that you have a metric for g (different from IQ points) because without it the expression "+n g" has no meaning.

Okay. To be precise we are talking about Shannon entropy and these units are bits.

Hold on. What is this Utility thing? I don't see how it fits in the context in which we are talking. You are now introducing things like goals and values. Kolmogorov complexity is a measure of complexity, what does it have to do with utility?

I don't see this as obvious. Why?

Not so. IQ is a metric, presumably of g, that is rescaled so that the average IQ is 100. Rescaling isn't a particularly distorting operation to do. It is not fit onto a gaussian distribution.
4Nornagest10y
I'm afraid you're mistaken here. IQ scores are generally derived from a set of raw test scores by fitting them to a normal distribution with mean 100 and SD of 15 (sometimes 16): IQ 70 is thus defined as a score two standard deviations below the mean. It's not a linear rescaling, unless the question pool just happens to give you a normal distribution of raw scores.
0Lumifer10y
Hm. A quick look around finds this which says that raw scores are standardized by forcing them to the mean of 100 and the standard deviation of 15. This is a linear transformation and it does not fit anything to a gaussian distribution. Of course this is just stackexchange -- do you happen to have links to how "proper" IQ tests are supposed to convert raw scores into IQ points?
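For illustration only, here is a minimal sketch of the two procedures being debated: linear standardization (rescale raw scores by their observed mean and SD) versus rank-based quantile normalization (map each raw score's percentile rank onto a normal(100, 15) curve). Both yield a mean near 100, but only the second forces a gaussian shape regardless of what the raw-score distribution looks like. The raw scores below are made up, and this is not a claim about how any particular real test is normed.

    from statistics import mean, pstdev, NormalDist

    raw = [12, 14, 15, 15, 16, 18, 19, 22, 27, 35]  # hypothetical raw test scores, right-skewed

    # Linear standardization: preserves the shape (and skew) of the raw distribution.
    mu, sigma = mean(raw), pstdev(raw)
    linear_iq = [100 + 15 * (x - mu) / sigma for x in raw]

    # Rank-based (quantile) normalization: forces a normal(100, 15) shape by
    # mapping each score's percentile rank through the inverse normal CDF.
    ordered = sorted(raw)
    quantile_iq = [NormalDist(100, 15).inv_cdf((ordered.index(x) + 0.5) / len(raw)) for x in raw]

    print([round(v, 1) for v in linear_iq])
    print([round(v, 1) for v in quantile_iq])

Under the first procedure a single extreme raw score stretches the top of the scale; under the second, the top scorer lands wherever the 95th percentile of a normal(100, 15) curve sits, no matter how extreme the raw score was. Which procedure real tests use is the factual question being disputed here.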
2hyporational10y
If the difficulty of the questions can't be properly quantified, what exactly do the raw scores tell you?
0A1987dM10y
See the first sentence of the penultimate paragraph of this.
5John_Maxwell15y
Lots. This contradicts my revealed preference though I suppose, because I have a vague idea that fish oil increases intelligence but I haven't made a special effort to eat any. I'm trying to anticipate how you'll follow up on this in a way that's relevant to my post and coming up blank.
0Paul Crowley15y
The fish oil thing is quackery, I'm afraid. I find it hard to be properly scope sensitive about the kittens thing.
1PhilGoetz15y
"Scope sensitive"?
3Paul Crowley15y
referring to "scope insensitivity". I care a lot more about the boundary between eating kittens and not eating kittens than the number of kittens I eat, so the gain I'd need to eat two kittens is less than twice the gain I'd need to eat one kitten. Which indicates that I'm more concerned for myself than for the kittens...
0magfrump12y
I've seen other discussions here regarding varieties of Omega-3s which strongly indicated that fatty acids from fish are used to build brain-related cells and that these acids aren't really available in any other foods; casual googling fails to turn anything up, but the link you provided seems like the sort of site that might, for instance, dismiss cryonics as quackery, so I would like to see further discussion from someone better at researching than I am.
3Carinthium13y
Assuming I somehow found a way to counteract taste-related problems, more than 10. Why value the life of a kitten? EDIT: And given my social situation as autistic, I could get around the resulting problems without too much in the way of trouble.
3thomblake15y
Has this comment really gone entirely without explanation and still been upvoted multiple times? How is this remotely relevant to the post?
2Vladimir_Nesov15y
The post is about a tradeoff between epistemic rationality and instrumental rationality: you shouldn't invest too much effort in precise knowledge, and in some circumstances, humans may find themselves at a disadvantage because of knowing more. The same clash appears in the metaphor where you trade the achievement of goals (not wanting to eat kittens) for precision of knowledge (gaining IQ points).
4thomblake15y
Ah... I think I see now. The comment assumed that one would not want to eat kittens, and that IQ is equivalent or isomorphic to epistemic rationality, and then mapped that to giving up instrumental rationality in favor of epistemic rationality. Definitely could've used some explanation.
1A1987dM10y
I'd guess 1 point of IQ is somewhere around equivalent to the cognitive boost I'd get by sleeping half an hour longer every day, and it'd take quite a few hours for me to earn enough money to buy a single kitten, so... no more than a couple per month, I'd guess. (And that's not even counting slaughtering and cooking the kittens or paying someone to do that.) ;-)

I'm not aware of studies showing that those in the upper 10% overestimate their abilities. Anyone trying to increase their rationality is probably in the upper 10% already.

My recollection is that at least one study showed some regression to the mean in confidence -- highly-skilled people tended to underestimate themselves.

6SoullessAutomaton15y
To the best of my knowledge, this sort of effect is mostly not an effect of underestimation, but rather of misestimating the skills of others and/or restricting the group against which one evaluates oneself.

Generally, it seems, if one has no knowledge of others' skills in an area, an estimation of one's own skill level is likely wildly inaccurate; if one's knowledge of others' skills is biased in favor of a non-random group, one's own skill is likely closer to the mean of that group than expected. That is to say, if you're better at X than your friends are, you're probably not as good as you think; if you're jealous of your friends' skill at X you're probably better than you think. On the other hand, if you measure your success against the best and brightest in a field, you're probably wildly underestimating yourself; but if you're proud of being good at something and compare yourself to random people you meet you're probably substantially overestimating yourself.

People posting on LW are strongly self-selected in favor of rationality, so anyone reading this is probably closer to average within this community than they think they are!
7Paul Crowley15y
Drivers think they are comparing themselves to the other drivers they see on the road. In fact, they are comparing themselves to the drivers whose skill they have most cause to think about, which is overwhelmingly often the worst drivers on the road.
2SoullessAutomaton15y
Yes, this is the same principle of a biased relative estimate due to comparison against a non-random subset of others.
1Nick_Tarleton15y
This is great information. Do you have a link?
0SoullessAutomaton15y
Sadly, no; I have a poor memory for details and have not generally been in the habit of saving links to my sources, something I need to correct. Chances are it's actually a synthesis of multiple sources, all of which I read a year or more ago. Mea culpa.
1[anonymous]15y
Do you have a link to further information?
5John_Maxwell15y
Upper 10% of what? They might be in the upper 10% of truthseeking, but not the upper 10% of any number of other fields--which confidence could prove useful in.
1PhilGoetz15y
Good point, if we're talking about confidence in abilities in general, and not just in rationality.

10[anonymous]15y

Who do you think is going to be more motivated to think about math: someone who feels it is their duty to become smarter, or a naive student who believes he or she has the answer to some mathematical problem and is only lacking a proof?

Or, how about the student who believes they may have the answer, and has a burning itch to know whether this is the case?

(Really, though, it's going to be the one with something to protect.)

I'd say the benefits have to outweigh the costs. If you succeed in achieving your goal despite holding a significant number of false beliefs relevant to this goal, it means you got lucky: Your success wasn't caused by your decisions, but by circumstances that just happened to be right.

That the human brain is wired in such a way that self-deception gives us an advantage in some situations may tip the balance a little bit, but it doesn't change the fact that luck only favors us a small fraction of the time, by definition.

4pjeby15y
On the contrary: "luck" is a function of confidence in two ways. First, people volunteer more information and assistance to those who are confident about a goal. And second, the confident are more likely to notice useful events and information relative to their goals. Those two things are why people think the "law of attraction" has some sort of mystical power. It just means they're confident and looking for their luck.
4cousin_it15y
As the post hinted, self-deception can give you confidence which is useful in almost all real life situations, from soldier to socialite. Far from "tipping the balance a little bit", a confidence upgrade is likely to improve your life much more than any amount of rationality training (in the current state of our Art).
2Vladimir_Nesov15y
Too vague. It's not clear what your argument's denotation is, but its connotation (becoming overconfident is vastly better than trying to be rational) is a strong and dubious assertion that needs more support to move outside the realm of punditry.
3cousin_it15y
IMO John_Maxwell_IV described the benefits of confidence quite well. For the other side see my post where I explicitly asked people what benefit they derive from the OB/LW Art of Rationality in its current state. Sorry to say, there weren't many concrete answers. Comments went mostly along the lines of "well, no tangible benefits for me, but truth-seeking is so wonderful in itself". If you can provide a more convincing answer, please do.
2John_Maxwell15y
People who debate this often seem to argue for an all-or-nothing approach. I suspect the answer lies somewhere in the middle: be confident if you're a salesperson but not if you're a general, for instance. I might look like a member of the "always-be-confident" side to all you extreme epistemic rationalists, but I'm not.
4SoullessAutomaton15y
I think a better conclusion is: be confident if you're being evaluated by other people, but cautious if you're being evaluated by reality. A lot of the confusion here seems to be people with more epistemic than instrumental rationality having difficulty with the idea of deliberately deceiving other people.
2John_Maxwell15y
But there is another factor: humans are penalized by themselves for doubt. If they (correctly) estimate their ability as low, they may decide not to try at all, and therefore fail to improve. The doubt's what I'm interested in, not tricking others.
0SoullessAutomaton15y
A valid point! However, I think it is the decision to not try that should be counteracted, not the levels of doubt/confidence. That is, cultivate a healthy degree of hubris--figure out what you can probably do, then aim higher, preferably with a plan that allows a safe fallback if you don't quite make it.
5John_Maxwell15y
If I could just tell myself to do things and then do them exactly how I told myself, my life would be fucking awesome. Planning isn't hard. It's the doing that's hard. Someone could (correctly) estimate their ability as low and rationally give it a try anyway, but I think their effort would be significantly lower than someone who knew they could do something. Edit: I just realized that someone reading the first paragraph might get the idea that I'm morbidly obese or something like that. I don't have any major problems in my life--just big plans that are mostly unrealized.
0SoullessAutomaton15y
You may be correct, and as someone with a persistent procrastination problem I'm in no position to argue with your point. But still, I am hesitant to accept a blatant hack (actual self-deception) over a more elegant solution (finding a way to expend optimal effort while still having a rational evaluation of the likelihood of success). For instance, I believe that another LW commenter, pjeby, has written about the issues related to planning vs. doing on his blog.
2John_Maxwell15y
Yeah, I've read some of pjeby's stuff, and I remember being surprised by how non-epistemically rational his tips were, given that he posts here. (If I had remembered any of the specific tips, I probably would have included them.) If you change your mind and decide to take the self-deception route, I recommend this essay and subsequent essays as steps to indoctrinate yourself.
1pjeby15y
I'm not an epistemical rationalist, I'm an instrumental one. (At least, if I understand those terms correctly.) That is, I'm interested in maps that help me get places, whether they "accurately" reflect the territory or not. Sometimes, having a too-accurate map -- or spending time worrying about how accurate the map is -- is detrimental to actually accomplishing anything.
0SoullessAutomaton15y
As is probably clear, I am an epistemological rationalist in essence, attempting to understand and cultivate instrumental rationality, because epistemological rationality itself forces me to acknowledge that it alone is insufficient, or even detrimental, to accomplishing my goals. Reading Less Wrong, and observing the conflicts between epistemological and instrumental rationality, has ironically driven home the point that one of the keys to success is carefully managing controlled self-deception. I'm not sure yet what the consequences of this will be.
4pjeby15y
It's not really self-deception -- it's selective attention. If you're committed to a course of action, information about possible failure modes is only relevant to the extent that it helps you avoid them. And for the most useful results in life, most failures don't happen so rapidly that you don't get any warning, or so catastrophic as to be uncorrectable afterwards.

Humans are also biased towards being socially underconfident, because in our historic environment, the consequences of a social gaffe could be significant. In the modern era, though, it's not that common for a minor error to produce severe consequences -- you can always start over someplace else with another group of people. So that's a very good example of an area where more factual information can lead to enhanced confidence.

A major difference between the confident and unconfident is that the unconfident focus on "hard evidence" in the past, while the confident focus on "possibility evidence" in the future. When an optimist says "I can", it means, "I am able to develop the capability and will eventually succeed if I persist". Whereas a pessimist may only feel comfortable saying "I can" if they mean, "I have done it before."

Neither one of them is being "self-deceptive" -- they are simply selecting different facts to attend to (or placing them in different contexts), resulting in different emotional and motivational responses. "I haven't done this before" may well mean excitement and challenge to the optimist, but self-doubt and fear for the pessimist. (See also fixed vs. growth mindsets.)
1[anonymous]15y
I wish I could upmod you twice for this.
0SoullessAutomaton15y
Nowhere is it guaranteed that, given the cognitive architecture humans have to work with, epistemic rationality is the easiest instrumentally rational manner to achieve a given goal. But, personally, I'm still holding out for a way to get from the former to the latter without irrevocable compromises.
0pjeby15y
It's easier than you think, in one sense. The part of you that worries about that stuff is significantly separate from -- and to some extent independent of -- the part of you that actually makes you do things. It doesn't matter whether "you" are only 20% certain about the result as long as you convince the doing part that you're 100% certain you're going to be doing it.

Doing that merely requires that you 1) actually communicate with the doing part (often a non-trivial learning process for intellectuals such as ourselves), and 2) actually take the time to do the relevant process(es) each time it's relevant, rather than skipping it because "you already know".

Number 2, unfortunately, means that akrasia is quasi-recursive. It's not enough to have a procedure for overcoming it, you must also overcome your inertia against applying that procedure on a regular basis. (Or at least, I have not yet discovered any second-order techniques to get myself or anyone else to consistently apply the first-order techniques... but hmmm... what if I applied a first-order technique to the second-order domain? Hmm.... must conduct experiments...)
0pjeby15y
An excellent heuristic, indeed!
2Annoyance15y
It depends on the cost of overconfidence. Nothing ventured, nothing gained. But if the expected cost of venturing wrongly is greater than the expected return, it's better to be careful what you attempt. If the potential loss is great enough, cautiousness is a virtue. If there's little investment to lose, cautiousness is a vice.
0John_Maxwell15y
Right.
3John_Maxwell15y
OK, I see you don't believe me that you should sometimes accept and sometimes reject epistemic rationality for a price. So here's a simple mathematical model.

Let's say agent A accepts the offer of increased epistemic rationality for a price, and agent N has not accepted it. P is the probability A will decide differently than N. F(A or N) is the expected value of N's original course of action as a function of the agent who takes it, while S(A) is the expected value of the course of action that A might switch to. If there is a cost C associated with becoming agent A, then agent N should become agent A if and only if

(1 - P) F(A) + P S(A) - C >= F(N)

The left side of the inequality is not bigger than the right side "by definition"; it depends on the circumstance. Eliezer's dessert-ordering example is a situation where the above inequality does not hold.

If you complain that agent N can't possibly know all the variables in the inequality, then I agree with you. He will be estimating them somewhat poorly. However, that complaint in no way supports the view that the left side is in fact bigger. Someone once said that "Anything you need to quantify can be measured in some way that is superior to not measuring it at all." Just like the difficulty of measuring utility is not a valid objection to utilitarianism, the difficulty of guessing what a better-informed self would do is not a valid objection to using this inequality.

That's a funny definition of "luck" you're using.
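To make the inequality concrete, here is a small sketch with made-up numbers (the function name and all values are illustrative, not part of the original comment):

    def worth_buying(P, F_A, F_N, S_A, C):
        # The criterion above: pay for the extra epistemic rationality iff the
        # expected value of acting as the better-informed agent A, net of the
        # price C, is at least the value of staying agent N.
        value_as_A = (1 - P) * F_A + P * S_A - C
        return value_as_A >= F_N, value_as_A, F_N

    # Hypothetical numbers: the knowledge changes the decision 20% of the time,
    # the switch is worth +30 utilons when it happens, the knowledge costs 10.
    print(worth_buying(P=0.2, F_A=100, F_N=100, S_A=130, C=10))  # (False, 96.0, 100)

    # Same situation with cheaper knowledge: now the offer is worth taking.
    print(worth_buying(P=0.2, F_A=100, F_N=100, S_A=130, C=2))   # (True, 104.0, 100)

An always-accept or always-refuse policy gets at least one of these two cases wrong, which is the point of the post.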
0Furcas15y
Yes, the right side can be bigger, and occasionally it will be. If you get lucky. If the information that N chooses to remain ignorant of happens to be of little relevance to any decision N will take in the future, and if his self-deception allows him to be more confident than he would have been otherwise, and if this increased confidence grants him a significant advantage, then the right side of the inequality will be bigger than the left side.

It is? Why do you think people are pleasantly surprised when they get lucky, if not because it's a rare occurrence?
3John_Maxwell15y
Not quite.

* The information could be of high relevance, but it could so happen that it won't cause him to change his mind.
* He could be choosing among close alternatives, so switching to a slightly better alternative could be of limited value.
* Remember also that failure to search for disconfirming evidence doesn't necessarily constitute self-deception.

Sorry, I guess your definition of luck was reasonable. But in this case, it's not necessarily true that the probability of the right side being greater is lower than 50%. In which case you wouldn't always have to "get lucky".
3Furcas15y
I've been thinking about this on and off for an hour, and I've come to the conclusion that you're right.

My mistake comes from the fact that the examples I was using to think about this were all examples where one has low certainty about whether the information is irrelevant to one's decision making. In this case, the odds are that being ignorant will yield a less than maximal chance of success.

However, there are situations in which it's possible to know with great certainty that some piece of information is irrelevant to one's decision making, even if you don't know what the information is. These situations are mostly those that are limited in scope and involve a short-term goal, like giving a favorable first impression, or making a good speech. For instance, you might suspect that your audience hates your guts, and knowing that this is in fact the case would make you less confident during your speech than merely suspecting it, so you'd be better off waiting until after the speech to find out about this particular fact.

Although, if I were in that situation, and they did hate my guts, I'd rather know about it and find a way to remain confident that doesn't involve willful ignorance. That said, I have no difficulty imagining a person who is simply incapable of finding such a way.

I wonder, do all situations where instrumental rationality conflicts with epistemic rationality have to do with mental states over which we have no conscious control?
3John_Maxwell15y
Wow, this must be like the 3rd time that someone on the internet has said that to me! Thanks! If you think of a way, please tell me about it. Information you have to pay money for doesn't fit into this category.