My epistemic framework has recently undergone some major shifts, and I believe that my current framework is better than my previous one. In the past, I tended to try to discover and rely on a single relatively strong argument in favor of or against a position. Since then, I’ve come to the conclusion that I should shift my focus toward discovering and relying on many independent weak arguments. In this post, I attempt to explain why. After I posted this article, I received many comments in response, and I replied to them in this discussion post.
My previous reliance on a single relatively strong argument
I’m a mathematician by training and by inclination. In the past, I tried to achieve as much certainty as possible when evaluating an important question.
An example: something that I’ve thought a lot about is AI risk reduction as a target for effective philanthropy. In the past, I attempted to discover a single relatively strong argument for, or against, a focus on AI risk reduction. Such an argument requires a number of inputs; an example of an input is an argument as to what kind of AI one should expect to be built by default. I spent a lot of time thinking about this and talking with people about it. What I found was that my views on the question were quite unstable, shifting frequently and substantially in response to incoming evidence.
The phenomenon of [my position shifting frequently and substantially in response to incoming evidence] was not limited to AI risk. It was characteristic of much of my thinking about important questions that could not be answered with clear-cut evidence. I recognized this as bad, but felt that I had no choice in the matter: I didn’t see another way to think about such questions, and I thought that some such questions are sufficiently important to warrant focus. My hope was that my views on these questions would gradually stabilize, but this didn’t happen.
An alternative: reliance on many independent weak arguments
While my views on various questions were bouncing around, I started to notice that some people seemed to be systematically better at answering questions that could not be answered with clear-cut evidence, in the sense that new data supported their prior views more often than it supported my own.
This puzzled me, as I hadn’t thought it possible to form such reliable views on these sorts of questions with the evidence that was available. These people didn’t seem to be using my epistemic framework, and I was unclear on what framework they were using. They didn’t seem to be trying to discover a single relatively strong argument.
They sometimes gave weak arguments that seemed to me to be a product of the fundamental cognitive bias described in Eliezer's article The Halo Effect and in Yvain's articles The Trouble with "Good" and Missing the Trees for the Forest: when a member of a reference class has a given feature, by default we tend to assume that all members of the reference class have the same feature. Some of the arguments seemed to me so weak that they should be ignored, and I didn't understand why they were being mentioned at all.
What I gradually came to realize is that these people were relying on many independent weak arguments. If the weak arguments collectively supported a position, that’s the position that they would take. They were using the principle of consilience to good effect, obtaining a better predictive model than my own.
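To see why this works, it helps to run the arithmetic of consilience on a toy example. Suppose, purely as an illustrative assumption, that each weak argument on its own only doubles the odds in favor of a position. If the arguments are genuinely independent, Bayes' rule says their likelihood ratios multiply, so a handful of individually unimpressive arguments can move a 1:1 prior to quite confident posterior odds. A minimal sketch in Python:

```python
# Toy Bayesian model of consilience: independent weak arguments
# combine multiplicatively. The 2:1 likelihood ratio per argument
# is an illustrative assumption, not a figure from this post.

def posterior_probability(prior_odds, likelihood_ratios):
    """Multiply prior odds by independent likelihood ratios and
    convert the resulting odds to a probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Six independent arguments, each worth only 2:1 evidence on its own.
print(posterior_probability(1.0, [2.0] * 6))  # 64/65, i.e. ~0.985
```

Examined one at a time, each of these arguments leaves the question looking close to a coin flip, which is why evaluating arguments individually and discarding the weak ones undersells their collective force.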
Many independent weak arguments: a case study
For concreteness, I’ll give an example of a claim that I believe to be true with high probability, despite the fact that each individual argument supporting it is weak.
Claim: At the current margin, on average, majoring in a quantitative subject increases people’s expected earnings relative to majoring in other subjects.
The following weak arguments support this claim:
Weak argument 1: Historically, there’s been a correlation between majoring in a quantitative subject and making more money. Examining the table in a blog post by Bryan Caplan reveals that the common majors that are most strongly associated with high earnings are electrical engineering, computer science, mechanical engineering, finance, economics, accounting, and mathematics, each of which is a quantitative major.
Weak argument 2: Outside of medicine, law, and management, the most salient jobs that offer high earnings are in finance and software engineering, both of which require quantitative skills. Majoring in a quantitative subject builds quantitative skills, and so qualifies one for these jobs.
Weak argument 3: Majoring in a subject with an abundance of intelligent people signals to employers that one is intelligent. IQ estimates by college major suggest that the majors with the highest average IQ are physics, philosophy, math, economics, and engineering, most of which are quantitative majors. So majoring in a quantitative field signals intelligence, and employers want intelligent employees, so majoring in a quantitative subject increases earnings.
Weak argument 4: Studying a quantitative subject offers better opportunities to test one’s beliefs against the world than studying the humanities and social sciences does, because the measures of performance in quantitative subjects are more objective than those in humanities and social sciences. Thus, studying a quantitative subject raises one’s general human capital relative to what it would have been if one studied a softer subject.
Weak argument 5: Conventional wisdom holds that majoring in a quantitative subject increases one’s expected earnings. If there were strong arguments against the claim, one might expect them to have percolated into conventional wisdom, but they haven't. In the absence of evidence to the contrary, one should default to conventional wisdom.
Weak argument 6: I know many smart people who enjoy thinking, and who themselves know many other smart people who enjoy thinking. As Yvain discussed in Intellectual Hipsters and Meta-Contrarianism, smart people who enjoy thinking are often motivated to adopt and argue for positions opposed to conventional wisdom, in order to counter-signal intelligence. If the conventional wisdom concerning the subject at hand were wrong, one might expect some of the people I know to have argued against it, and I’ve never heard them do so.
To verify that these arguments are in fact weak, I’ll give counterarguments against them:
Counterarguments to 1: Correlation is not causation. The people who major in quantitative subjects may make more money later on because they have higher innate ability, or because they have better connections on account of having grown up in households with higher socio-economic status, or for some other nonobvious reason.
Counterarguments to 2: It could be that one only needs to have high school level quantitative knowledge in order to succeed in these jobs.
Majoring in a quantitative field could reduce one’s ability to go to medical school or law school later on (e.g. on account of grading being more strict in quantitative subjects, and medical and law schools selecting students by GPA).
Counterarguments to 3: Potential employees may have other ways of signaling intelligence, so that college major is not so important. As above, majoring in a quantitative subject may lower GPA, resulting in sending a signal of low quality.
Counterarguments to 4: It could be that earnings don’t depend very much on one’s intellectual caliber. For example, maybe social connections matter more than intellectual caliber, so that one should focus on developing social connections. The heavy workload of a quantitative major could hinder this.
Counterarguments to 5: Conventional wisdom is often wrong. Conventional wisdom on this subject is likely rooted in the correlation between majoring in a quantitative subject and having higher earnings, and as discussed in the counterarguments to 1, correlational evidence is weak.
Counterarguments to 6: There are many, many issues on which one can adopt a meta-contrarian position, and meta-contrarians discuss only a few of them. Also, “smart people who like to think” could, for some unknown reason, collectively be motivated to believe the claim.
In view of these counterarguments, how can one be confident in the claim?
First off, I’ll remark that the counterarguments don’t suffice to refute the individual arguments: the counterarguments aren’t themselves strong, and there are counterarguments against the counterarguments as well. In view of this, one might resign oneself to a position of the type “it may or may not be the case that the claim is true, and it’s hopeless to decide whether or not it is.” Eight years ago, this was how I viewed most claims concerning the human world. In Yvain's words, I was experiencing epistemic learned helplessness.
It’s not uncommon for mathematicians to hold this position on claims concerning the human world. Of course there are instances of mathematicians using several lines of evidence to arrive at a conclusion in the absence of a rigorous proof. But the human world is much messier and more ambiguous than the mathematical world. The great mathematician Carl Friedrich Gauss wrote:
There are problems to whose solution I would attach infinitely greater importance than to those of mathematics, for example touching ethics, or our relation to God, or concerning our destiny and our future; but their solution lies wholly beyond us and completely outside the province of science.
Gauss's quotation doesn't directly refer to prosaic epistemic questions about the human world, but one could imagine him having such a view toward these questions, and even if not, I've heard a number of mathematicians express such a view on questions that cannot be answered with clear-cut evidence.
This notwithstanding, my current position is that one can be confident in the claim: not with extremely high confidence (say, the level of confidence that Euler had in the truth of the product formula for the sine function), but with confidence at the ~90% level, which is high enough to be actionable.
Why? The point is that the arguments in favor of the claim are, like Euler’s arguments, largely independent of one another. This corresponds to the fact that the counterarguments are ad hoc and lack a unifying theme. The situation is analogous to Carl Sagan’s “Dragon in My Garage” parable. In order to refute all of the arguments via the counterarguments, one needs to assume that all of the counterarguments succeed (or that other counterarguments succeed), and the counterarguments are largely independent of one another. If one assumes that for each argument, the counterarguments overpower the argument with probability 50%, and that the counterarguments’ successes are independent, the probability that they all succeed is 0.5^6 ≈ 1.6%.
The counterarguments are not fully independent; for example, the point about majoring in a quantitative subject lowering GPA appears twice. So I don’t think that one can be too confident in the conclusion. But the existence of many independent weak arguments suffices to rescue us from epistemic paralysis and yield an actionable conclusion.
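To make the arithmetic of the last two paragraphs explicit, here is a minimal sketch using the stipulated 50% success probability for each set of counterarguments; the correlated variant, which treats the two counterarguments sharing the GPA point as a single factor, is a rough illustrative adjustment rather than a precise model:

```python
# Joint-failure arithmetic for the claim above. Assumes each set of
# counterarguments overpowers its argument with probability 0.5.

p_counter = 0.5
n_arguments = 6

# Fully independent counterarguments: all six must succeed in order
# to refute the claim.
print(f"independent: {p_counter ** n_arguments:.3f}")  # 0.016, ~1.6%

# If two counterarguments stand or fall together (the shared GPA
# point), only five independent factors remain.
print(f"partially correlated: {p_counter ** (n_arguments - 1):.3f}")  # ~0.031
```

Either way, the probability that every argument is simultaneously refuted stays small, which is consistent with settling on ~90% rather than ~98% confidence once further model uncertainty is priced in.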
The “single relatively strong argument” approach to the claim in the case study above
The “single relatively strong argument” approach to assessing the above claim is to try to synthesize as many of the above weak arguments and counterarguments as possible, into a single relatively strong argument.
[Added: Kawoomba's comment made me realize that the above sentence wasn't clear. The point is that in focusing on a single strong argument to the exclusion of other arguments, one is implicitly rejecting the weak arguments, and doing so constitutes an implicit attempt to synthesize the evidence. The sort of thing that I have in mind here is to say "Correlation is not causation. Conventional wisdom is probably rooted in mistaking correlation for causation. Therefore we should ignore conventional wisdom in formulating our relatively strong argument."]
If I were to try to do this, it would look something like this:
Based on what people and employers say, it appears that many of the high paying jobs in our society require some quantitative skills. It’s unclear how much quantitative skill one needs to do these jobs. But presumably one needs some.
People who are below this threshold may be able to surpass it by majoring in a quantitative subject, and thereby get higher earnings.
Even if one does surpass this threshold, majoring in a quantitative subject may not suffice to signal to employers that one is above it, if the noise-to-signal ratio is high. But it may not be necessary to get a job that requires quantitative skills right out of college in order to get high earnings from building quantitative skills in college: it might be possible for an employee to “work his or her way up” to a position that uses quantitative skills, and profit as a result.
It might appear as though people who are already above this threshold wouldn’t get higher earnings from majoring in a quantitative subject. But employers may not be able to tell that potential employees have quantitative skills unless they major in a quantitative subject. (Note that if this is true, it suggests that the concern in the previous paragraph is less of an issue. However, it could still be an issue, because different levels of quantitative skill are required to get different jobs, so that the level that employees need to signal is not homogeneous.) This pushes in favor of majoring in a quantitative subject. People above the threshold may also benefit from majoring in a quantitative subject because it signals intelligence, which is considered desirable independently of the specific quantitative skills that a potential employee has acquired.
It’s necessary to weigh these considerations against the fact that quantitative majors tend to be demanding, leaving less time for other activities, and are harder to get good grades in. Thus, majoring in a quantitative subject involves a tradeoff whose value will vary from individual to individual, depending on his or her skills, potential areas of work, and the criteria that graduate schools and employers use to select candidates.
Major weaknesses of the “single relatively strong argument” approach
The above argument has some value, and I imagine that a college freshman would find it somewhat useful. But it seems less helpful than the list of weak arguments, together with the most important counterarguments, given earlier in this post. The argument in the previous section doesn’t clearly demarcate the different lines of evidence, and inadvertently leaves out some of the lines of evidence (because some of the lines of evidence don’t easily fit into a single framework).
These problems with using the “single relatively strong argument” approach are closely related to my past unstable epistemology. Because the “single relatively strong argument” approach doesn’t clearly demarcate the different lines of evidence, when a user of the approach gets new counter-evidence that’s orthogonal to the argument, he or she has to rethink the entire argument. Because the “single relatively strong argument” approach leaves out some lines of evidence, it’s less robust than it could be.
A priori, one could imagine that these things wouldn’t be a problem in practice: if the relatively strong argument were true with sufficiently high probability, then it would be unlikely that one would have to completely rethink things in the face of incoming evidence, and it wouldn’t be so important that the argument doesn't incorporate all of the evidence.
My experience is that this situation does not prevail in practice. One theoretical explanation for this is analogous to a point that I made in my post Robustness of Cost-Effectiveness Estimates and Philanthropy:
A key point that I had missed when I thought about these things earlier in my life is that there are many small probability failure modes, which are not significant individually, but which collectively substantially reduce [the probability that the argument is correct]. When I encountered such a potential failure mode, my reaction was to think “this is very unlikely to be an issue” and then to forget about it. I didn’t notice that I was doing this many times in a row.
This applies not only to cost-effectiveness estimates, but also to the accuracy of individual relatively strong arguments. Relatively strong arguments in domains outside of math and the hard sciences are often much weaker than they appear. The phenomenon of model uncertainty is pronounced.
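A quick illustration of the compounding effect described in the quote, with made-up numbers: twenty failure modes that each look safely ignorable at 3% collectively cut an argument's survival probability nearly in half.

```python
# Compounding of individually negligible failure modes. The count
# and the per-mode probability are made-up illustrative numbers.

p_failure_mode = 0.03   # each failure mode alone looks ignorable
n_modes = 20

p_argument_survives = (1 - p_failure_mode) ** n_modes
print(f"{p_argument_survives:.2f}")  # ~0.54
```

This is the mirror image of the consilience calculation earlier in the post: just as independent weak arguments compound in support of a claim, independent small risks compound against a single long chain of reasoning.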
The points in this section of the post are in consonance with a claim of Philip Tetlock’s in Expert Political Judgment: How Good Is It? How Can We Know?:
Tetlock contends that the fox — the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events — is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems.
A sample implication: a change in my attitude toward Penrose's beliefs about consciousness
An example that highlights my shift in epistemology is the shift in my attitude concerning Roger Penrose’s beliefs about consciousness.
[Edit: Eliezer's comment and Vaniver's comment made me realize that the connection between this example and the rest of my post is unclear. The shift in my attitude toward Penrose's beliefs about consciousness isn't coming from my shift toward using the principle of consilience. I agree that the arrow of consilience points against Penrose's beliefs. The shift in my attitude is coming from the shift from "give weight to arguments that stand up to scrutiny" to "give weight to all arguments with a nontrivial chance of being right, even the ones that don't seem to hold up to scrutiny."]
Quoting Wikipedia:
In The Emperor's New Mind (1989), he argues that known laws of physics are inadequate to explain the phenomenon of consciousness. Penrose proposes the characteristics this new physics may have and specifies the requirements for a bridge between classical and quantum mechanics (what he calls correct quantum gravity). […] Penrose believes that such deterministic yet non-algorithmic processes may come into play in the quantum mechanical wave function reduction, and may be harnessed by the brain. He argues that the present computer is unable to have intelligence because it is an algorithmically deterministic system. He argues against the viewpoint that the rational processes of the mind are completely algorithmic and can thus be duplicated by a sufficiently complex computer. This contrasts with supporters of strong artificial intelligence, who contend that thought can be simulated algorithmically. He bases this on claims that consciousness transcends formal logic because things such as the insolubility of the halting problem and Gödel's incompleteness theorem prevent an algorithmically based system of logic from reproducing such traits of human intelligence as mathematical insight.
I believe that Penrose’s views about consciousness are very unlikely to be true:
- I subscribe to reductionism, and I don't think that a present computer is inherently unable to have intelligence, under any reasonable definition of intelligence.
- This invocation of Gödel’s incompleteness theorem seems to be a non sequitur, and it has been criticized by many mathematicians.
- Max Tegmark did a calculation calling into question the physics part of Penrose’s argument.
- I don’t know anybody who shares Penrose’s view on consciousness, and the fraction of all scientists who agree with Penrose’s view appears to be tiny.
But Penrose isn’t a random crank: he’s one of the greatest physicists of the second half of the 20th century. He’s a far deeper thinker than I am, and for that matter, a far deeper thinker than anybody I’ve ever met.
I have several relatively strong arguments against Penrose's views on consciousness. Collectively, they’re significantly stronger than the moderately strong argument “great physicists are often right.” In the past, I would have concluded “…therefore Penrose is wrong.”
But it’s not rational to ignore the moderately strong argument that supports Penrose’s views. The chance of the argument being right is non-negligible, so I should give nontrivial credence to Penrose’s views on consciousness having substance. Maybe at least some of Penrose’s ideas about consciousness are sound, and the reason they seem tenuous is that he’s expressed them poorly, or that they’ve been misquoted. Maybe there’s some other way, which I haven’t thought of, to reconcile the hypothesis that his views are sound with the evidence against it.
If I were using my previous epistemic framework, my world view could be turned upside down by a single conversation with Penrose, and I would be subject to confirmation bias, using my conclusion “…therefore Penrose is wrong” as overly strong evidence against the claim “great physicists are often right,” which I was unwarrantedly ignoring from the outset.
End notes
Retrospectively, it makes sense that there are people who are substantially better than I had been at reasoning about questions that I had thought were inherently near-impossible to think about.
Acknowledgements: I thank Luke Muehlhauser, Vipul Naik, Nick Beckstead, and Laurens Gunnarsen for useful suggestions on what to include in this post, as well as for helpful comments on an earlier draft. I'm indebted and grateful to Holden Karnofsky of GiveWell for his insights, and to GiveWell itself, which offered me the opportunity to think about hard epistemic questions that can't be answered with clear-cut evidence. Both helped me recognize the core thesis of this post.
Note: I formerly worked as a research analyst at GiveWell. All views expressed here are my own.