Introduction
[I have edited the introduction of this post for increased clarity.]
This post is my attempt to answer the question, "How should we take account of the distribution of opinion and epistemic standards in the world?" By “epistemic standards,” I roughly mean a person’s way of processing evidence to arrive at conclusions. If people were good Bayesians, their epistemic standards would correspond to their fundamental prior probability distributions. At a first pass, my answer to this question is:
Main Recommendation: Believe what you think a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to your evidence.
The rest of the post can be seen as an attempt to spell this out more precisely and to explain, in practical terms, how to follow the recommendation. Note that there are therefore two broad ways to disagree with the post: you might disagree with the main recommendation, or with the guidelines for following the main recommendation.
I am aware of two relatively close intellectual relatives to my framework: what philosophers call “equal weight” or “conciliatory” views about disagreement and what people on LessWrong may know as “philosophical majoritarianism.” Equal weight views roughly hold that when two people who are expected to be roughly equally competent at answering a certain question have different subjective probability distributions over answers to that question, those people should adopt some impartial combination of their subjective probability distributions. Unlike equal weight views in philosophy, my position is meant as a set of rough practical guidelines rather than a set of exceptionless and fundamental rules. I accordingly focus on practical issues for applying the framework effectively and am open to limiting the framework’s scope of application. Philosophical majoritarianism is the idea that on most issues, the average opinion of humanity as a whole will be a better guide to the truth than one’s own personal judgment. My perspective differs from both equal weight views and philosophical majoritarianism in that it emphasizes an elite subset of the population rather than humanity as a whole and that it emphasizes epistemic standards more than individual opinions. My perspective differs from what you might call "elite majoritarianism" in that, according to me, you can disagree with what very trustworthy people think on average if you think that those people would accept your views if they had access to your evidence and were trying to have accurate opinions.
I am very grateful to Holden Karnofsky and Jonah Sinick for thought-provoking conversations on this topic which led to this post. Many of the ideas ultimately derive from Holden’s thinking, but I've developed them, made them somewhat more precise and systematic, discussed additional considerations for and against adopting them, and put everything in my own words. I am also grateful to Luke Muehlhauser and Pablo Stafforini for feedback on this post.
In the rest of this post I will:
- Outline the framework and offer guidelines for applying it effectively. I explain why I favor relying on the epistemic standards of people who are trustworthy by clear indicators that many people would accept, why I favor paying more attention to what people think than why they say they think it (on the margin), and why I favor stress-testing critical assumptions by attempting to convince a broad coalition of trustworthy people to accept them.
- Offer some considerations in favor of using the framework.
- Respond to the objection that common sense is often wrong, the objection that the most successful people are very unconventional, and objections of the form “elite common sense is wrong about X and can’t be talked out of it.”
- Discuss some limitations of the framework and some areas where it might be further developed. I suspect it is weakest in cases where there is a large upside to disregarding elite common sense, little downside, and you’ll find out whether your bet against conventional wisdom was right within a tolerable time limit, and in cases where people are unwilling to carefully consider arguments with the goal of having accurate beliefs.
An outline of the framework and some guidelines for applying it effectively
My suggestion is to use elite common sense as a prior rather than the standards of reasoning that come most naturally to you personally. The three main steps for doing this are:
- Try to find out what people who are trustworthy by clear indicators that many people would accept believe about the issue.
- Identify the information and analysis you can bring to bear on the issue.
- Try to find out what elite common sense would make of this information and analysis, and adopt a similar perspective.
On the first step, people often have an instinctive sense of what others think, though you should beware the false consensus effect. If you don’t know what other opinions are out there, you can ask some friends or search the internet. In my experience, regular people often have similar opinions to very smart people on many issues, but are much worse at articulating considerations for and against their views. This may be because many people copy the opinions of the most trustworthy people.
I favor giving more weight to the opinions of people who can be shown to be trustworthy by clear indicators that many people would accept, rather than people that seem trustworthy to you personally. This guideline is intended to help avoid parochialism and increase self-skepticism. Individual people have a variety of biases and blind spots that are hard for them to recognize. Some of these biases and blind spots—like the ones studied in cognitive science—may affect almost everyone, but others are idiosyncratic—like biases and blind spots we inherit from our families, friends, business networks, schools, political groups, and religious communities. It is plausible that combining independent perspectives can help idiosyncratic errors wash out.
In order for the errors to wash out, it is important to rely on the standards of people who are trustworthy by clear indicators that many people would accept rather than the standards of people that seem trustworthy to you personally. Why? The people who seem most impressive to us personally are often people who have similar strengths and weaknesses to ourselves, and similar biases and blind spots. For example, I suspect that academics and people who specialize in using a lot of explicit reasoning have a different set of strengths and weaknesses from people who rely more on implicit reasoning, and people who rely primarily on many weak arguments have a different set of strengths and weaknesses from people who rely more on one relatively strong line of argument.
Some good indicators of general trustworthiness might include: IQ, business success, academic success, generally respected scientific or other intellectual achievements, wide acceptance as an intellectual authority by certain groups of people, or success in any area where there is intense competition and success is a function of ability to make accurate predictions and good decisions. I am less committed to any particular list of indicators than the general idea.
Of course, trustworthiness can also be domain-specific. Very often, elite common sense would recommend deferring to the opinions of experts (e.g., listening to what physicists say about physics, what biologists say about biology, and what doctors say about medicine). In other cases, elite common sense may give partial weight to what putative experts say without accepting it all (e.g. economics and psychology). In still other cases, it may give less weight to what putative experts say (e.g. sociology and philosophy). Or there may be no putative experts on a question. In cases where elite common sense gives less weight to the opinions of putative experts or there are no plausible candidates for expertise, it becomes more relevant to think about what elite common sense would say about a question.
How should we assign weight to different groups of people? Other things being equal, a larger number of people is better, more trustworthy people are better, people who are trustworthy by clearer indicators that more people would accept are better, and a set of criteria which allows you to have some grip on what the people in question think is better, but you have to make trade-offs. If I only included, say, the 20 smartest people I had ever met as judged by me personally, that would probably be too small a number of people, the people would probably have biases and blind spots very similar to mine, and I would miss out on some of the most trustworthy people, but it would be a pretty trustworthy collection of people and I’d have some reasonable sense of what they would say about various issues. If I went with, say, the 10 most-cited people in 10 of the most intellectually credible academic disciplines, 100 of the most generally respected people in business, and the 100 heads of different states, I would have a pretty large number of people and a broad set of people who were very trustworthy by clear standards that many people would accept, but I would have a hard time knowing what they would think about various issues because I haven’t interacted with them enough. How these factors can be traded off against each other in a way that is practically most helpful probably varies substantially from person to person.
I can’t give any very precise answer to the question about whose opinions should be given significant weight, even in my own case. Luckily, I think the output of this framework is usually not very sensitive to how we answer this question, partly because most people would typically defer to other, more trustworthy people. If you want a rough guideline that I think many people who read this post could apply, I would recommend focusing on, say, the opinions of the top 10% of people who got Ivy-League-equivalent educations (note that I didn’t get such an education, at least as an undergrad, though I think you should give weight to my opinion; I’m just giving a rough guideline that I think works reasonably well in practice). You might give some additional weight to more accomplished people in cases where you have a grip on how they think.
I don’t have a settled opinion about how to aggregate the opinions of elite common sense. I suspect that taking straight averages gives too much weight to the opinions of cranks and crackpots, so that you may want to remove some outliers or give less weight to them. For the purpose of making decisions, I think that sophisticated voting methods (such as the Condorcet method) and analogues of the parliamentary approaches outlined by Nick Bostrom and Toby Ord seem fairly promising as rough guidelines in the short run. I don’t do calculations with this framework—as I said, it’s mostly conceptual—so uncertainty about an aggregation procedure hasn’t been a major issue for me.
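Still, to make the simplest of these options concrete, here is a minimal sketch of averaging after trimming extreme answers, so that cranks and crackpots get less weight. The function and the trim fraction are my own illustrative choices rather than part of any established procedure; fancier approaches like Condorcet-style voting would require eliciting rankings over options rather than single probabilities.

```python
def trimmed_mean(estimates, trim_fraction=0.1):
    """Aggregate subjective probability estimates by dropping the most
    extreme answers on each side before averaging, so that outliers
    (cranks and crackpots) carry less weight.
    """
    ordered = sorted(estimates)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k > 0 else ordered
    return sum(kept) / len(kept)

# Example: most trustworthy people say ~0.7, but two outliers say 0.01 and 0.99.
print(trimmed_mean([0.65, 0.70, 0.72, 0.68, 0.01, 0.99], trim_fraction=0.2))  # ~0.69
```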
On the margin, I favor paying more attention to people’s opinions than their explicitly stated reasons for their opinions. Why? One reason is that I believe people can have highly adaptive opinions and patterns of reasoning without being able to articulate good defenses of those opinions and/or patterns of reasoning. (Luke Muehlhauser has discussed some related points here.) This can happen because people can adopt practices that are successful without knowing why they are successful, others who interact with them can adopt those practices in turn, and so forth. I heard an extreme example of this from Spencer Greenberg, who had read it in Scientists Greater than Einstein. The story involved a folk remedy for visual impairment:
There were folk remedies worthy of study as well. One widely used in Java on children with either night blindness or Bitot’s spots consisted of dropping the juices of lightly roasted lamb’s liver into the eyes of affected children. Sommer relates, “We were bemused at the appropriateness of this technique and wondered how it could possibly be effective. We, therefore, attended several treatment sessions, which were conducted exactly as the villagers had described, except for one small addition—rather than discarding the remaining organ, they fed it to the affected child. For some unknown reason this was never considered part of the therapy itself.” Sommer and his associates were bemused, but now understood why the folk remedy had persisted through the centuries. Liver, being the organ where vitamin A is stored in a lamb or any other animal, is the best food to eat to obtain vitamin A. (p. 14)
Another striking example is bedtime prayer. In many Christian traditions I am aware of, it is common to pray before going to sleep. And in the tradition I was raised in, the main components of prayer were listing things you were grateful for, asking for forgiveness for all the mistakes you made that day and thinking about what you would do to avoid similar mistakes in the future, and asking God for things. Christians might say the point of this is that it is a duty to God, that repentance is a requirement for entry to heaven, or that asking God for things makes God more likely to intervene and create miracles. However, I think these activities are reasonable for different reasons: gratitude journals are great, reflecting on mistakes is a great way to learn and overcome weaknesses, and it is a good idea to get clear about what you really want out of life in the short-term and the long-term.
Another reason I have this view is that if someone has an effective but different intellectual style from you, it’s possible that your biases and blind spots will prevent you from appreciating their points that have significant merit. If you partly give weight to opinions independently of how good the arguments seem to you personally, this can be less of an issue for you. Jonah Sinick described a striking reason this might happen in Many Weak Arguments and the Typical Mind:
We should pay more attention to people’s bottom line than to their stated reasons — If most high functioning people aren’t relying heavily on any one of the arguments that they give, if a typical high functioning person responds to a query of the type “Why do you think X?” by saying “I believe X because of argument Y” we shouldn’t conclude that the person believes argument Y with high probability. Rather, we should assume that argument Y is one of many arguments that they believe with low confidence, most of which they’re not expressing, and we should focus on their belief in X instead of argument Y. [emphasis his]
This idea interacts in a complementary way with Luke Muehlhauser’s claim that some people who are not skilled at explicit rationality may be skilled in tacit rationality, allowing them to be successful at making many types of important decisions. If we are interacting with such people, we should give significant weight to their opinions independently of their stated reasons.
A counterpoint to my claim that, on the margin, we should give more weight to others’ conclusions and less to their reasoning is that some very impressive people disagree. For example, Ray Dalio is the founder of Bridgewater, which, at least as of 2011, was the world’s largest hedge fund. He explicitly disagrees with my claim:
“I stress-tested my opinions by having the smartest people I could find challenge them so I could find out where I was wrong. I never cared much about others’ conclusions—only for the reasoning that led to these conclusions. That reasoning had to make sense to me. Through this process, I improved my chances of being right, and I learned a lot from a lot of great people.” (p. 7 of Principles by Ray Dalio)
I suspect that getting the reasoning to make sense to him was important because it helped him to get better in touch with elite common sense, and also because reasoning is more important when dealing with very formidable people, as I suspect Dalio did and does. I also think that for some of the highest-functioning people who are most in touch with elite common sense, it may make more sense to give more weight to reasoning than conclusions.
The elite common sense framework favors testing unconventional views by seeing if you can convince a broad coalition of impressive people that your views are true. If you can do this, it is often good evidence that your views are supported by elite common sense standards. If you can’t, it’s often good evidence that your views can’t be so supported. Obviously, these are rules of thumb and we should restrict our attention to cases where you are persuading people by rational means, in contrast with using rhetorical techniques that exploit human biases. There are also some interesting cases where, for one reason or another, people are unwilling to hear your case or think about your case rationally, and applying this guideline to these cases is tricky.
Importantly, I don’t think cases where elite common sense is biased are typically an exception to this rule. In my experience, I have very little difficulty convincing people that some genuine bias, such as scope insensitivity, really is biasing their judgment. And if the bias really is critical to the disagreement, I think it will be a case where you can convince elite common sense of your position. Other cases, such as deeply entrenched religious and political views, may be more of an exception, and I will discuss the case of religious views more in a later section.
The distinction between convincing and “beating in an argument” is important for applying this principle. It is much easier to tell whether you convinced someone than it is to tell whether you beat them in an argument. Often, both parties think they won. In addition, sometimes it is rational not to update much in favor of a view if an advocate for that view beats you in an argument.
In support of this claim, consider what would happen if the world’s smartest creationist debated some fairly ordinary evolution-believing high school student. The student would be destroyed in argument, but the student should not reject evolution, and I suspect he should hardly update at all. Why not? The student should know that there are people out there in the world who could destroy him on either side of this argument, and his personal ability to respond to arguments is not very relevant. What should be most relevant to this student is the distribution of opinion among people who are most trustworthy, not his personal response to a small sample of the available evidence. Even if you genuinely are beating people in arguments, there is a risk that you will be like this creationist debater.
An additional consideration is that certain beliefs and practices may be reasonable and adopted for reasons that are not accessible to people who have adopted those beliefs and practices, as illustrated with the examples of the liver ritual and bedtime prayer. You might be able to “beat” some Christian in an argument about the merits of bedtime prayer, but praying may still be better than not praying. (I think it would be better still to introduce a different routine that serves similar functions—this is something I have done in my own life—but the Christian may be doing better than you on this issue if you don’t have a replacement routine yourself.)
Under the elite common sense framework, the question is not “how reliable is elite common sense?” but “how reliable is elite common sense compared to me?” Suppose I learn that, actually, people are much worse at pricing derivatives than I previously believed. For the sake of argument suppose this was a lesson of the 2008 financial crisis (for the purposes of this argument, it doesn't matter whether this is actually a correct lesson of the crisis). This information does not favor relying more on my own judgment unless I have reason to think that the bias applies less to me than the rest of the derivatives market. By analogy, it is not acceptable to say, “People are really bad at thinking about philosophy. So I am going to give less weight to their judgments about philosophy (psst…and more weight to my personal hunches and the hunches of people I personally find impressive).” This is only OK if you have evidence that your personal hunches and the hunches of the people you personally find impressive are better than elite common sense, with respect to philosophy. In contrast, it might be acceptable to say, “People are very bad at thinking about the consequences of agricultural subsidies in comparison with economists, and most trustworthy people would agree with this if they had my evidence. And I have an unusual amount of information about what economists think. So my opinion gets more weight than elite common sense in this case.” Whether this ultimately is acceptable to say would depend on how good elites are at thinking about the consequences of agricultural subsidies—I suspect they are actually pretty good at it—but this isn’t relevant to the general point that I’m making. The general point is that this is one potentially correct form of an argument that your opinion is better than the current stance of elite common sense.
This is partly a semantic issue, but I count the above example as a case where “you are more reliable than elite common sense,” even though, in some sense, you are relying on expert opinion rather than your own. But you have different beliefs about who is a relevant expert or what experts say than common sense does, and in this sense you are relying on your own opinion.
I favor giving more weight to common sense judgments in cases where people are trying to have accurate views. For example, I think people don’t try very hard to have correct political, religious, and philosophical views, but they do try to have correct views about how to do their job properly, how to keep their families happy, and how to impress their friends. In general, I expect people to try to have more accurate views in cases where it is in their present interests to have more accurate views. (A quick reference for this point is here.) This means that I expect them to strive more for accuracy in decision-relevant cases, cases where the cost of being wrong is high, and cases where striving for more accuracy can be expected to yield more accuracy, though not necessarily in cases where the risks and rewards won’t come for a very long time. I suspect this is part of what explains why people can be skilled in tacit rationality but not explicit rationality.
As I said above, what’s critical is not how reliable elite common sense is but how reliable you are in comparison with elite common sense. So it only makes sense to give more weight to your views when learning that others aren’t trying to be correct if you have compelling evidence that you are trying to be correct. Ideally, this evidence would be compelling to a broad class of trustworthy people and not just compelling to you personally.
Some further reasons to think that the framework is likely to be helpful
In explaining the framework and outlining guidelines for applying it, I have given some reasons to expect this framework to be helpful. Here are some more weak arguments in favor of my view:
- Some studies I haven’t personally reviewed closely claim that combinations of expert forecasts are hard to beat. For instance, a review by Clemen (1989) found that: "Considerable literature has accumulated over the years regarding the combination of forecasts. The primary conclusion of this line of research is that forecast accuracy can be substantially improved through the combination of multiple individual forecasts." (abstract) And recent work by the Good Judgment Project found that taking an average of individual forecasts and transforming it away from a credence of .5 gave the lowest errors of a variety of different methods of aggregating forecasters' judgments (p. 42). (A toy sketch of this "average then extremize" idea appears after this list.)
- There are plausible philosophical considerations suggesting that, absent special evidence, there is no compelling reason to favor your own epistemic standards over the epistemic standards that others use.
- In practice, we are extremely reliant on conventional wisdom for almost everything we believe that isn’t very closely related to our personal experience, and single individuals working in isolation have extremely limited ability to manipulate their environment in comparison with individuals who can build on the insights of others. To see this point, consider that a small group of very intelligent humans detached from all cultures wouldn’t have much of an advantage at all over other animal species in competition for resources, but humans are increasingly dominating the biosphere. A great deal of this must be chalked up to cultural accumulation of highly adaptive concepts, ideas, and procedures that no individual could develop on their own. I see trying to rely on elite common sense as highly continuous with this successful endeavor.
- Highly adaptive practices and assumptions are more likely to get copied and spread, and these practices and assumptions often work because they help you to be right. If you use elite common sense as a prior, you’ll be more likely to be working with more adaptive practices and assumptions.
- Some successful processes for finding valuable information, such as PageRank and Quora, seem analogous to the framework I have outlined. PageRank is one algorithm that Google uses to decide how high different pages should be in searches, which is implicitly a way of ranking high-quality information. I’m speaking about something I don’t know very well, but my rough understanding is that PageRank gives pages more votes when more pages link to them, and votes from a page get more weight if that page itself has a lot of votes. This seems analogous to relying on elite common sense because information sources are favored when they are regarded as high quality by a broad coalition of other information sources. Quora seems analogous because it favors answers to questions that many people regard as good. (A toy sketch of the PageRank idea appears after this list.)
- I’m going to go look at the first three questions I can find on Quora. I predict that I would prefer the answers that elite common sense would give to these questions to what ordinary common sense would say, and also that I would prefer elite common sense’s answers to these questions to my own except in cases where I have strong inside information/analysis. Results: 1st question: weakly prefer elite common sense, don’t have much special information. 2nd question: prefer elite common sense, don’t have much special information. 3rd question: prefer elite common sense, don’t have much special information. Note that I skipped a question because it was a matter of taste. This went essentially the way I predicted it to go.
- The type of mathematical considerations underlying Condorcet’s Jury Theorem gives us some reason to think that combined opinions are often more reliable than individual opinions, even though the assumptions underlying this theorem are far from totally correct. (A small calculation illustrating this appears after this list.)
- There’s a general cluster of social science findings that goes under the heading “wisdom of crowds” and suggests that aggregating opinions across people outperforms individual opinions in many contexts.
- Some rough “marketplace of ideas” arguments suggest that the best ideas will often become part of elite common sense. When claims are decision-relevant, people pay if they have dumb beliefs and benefit if they have smart beliefs. When claims aren’t decision-relevant, people sometimes pay a social cost for saying dumb things and get social benefits for saying things that are smarter, and the people with more information have more incentive to speak. For analogous reasons, when people use and promote epistemic standards that are dumb, they pay costs, and when they use and promote epistemic standards that are smart, they benefit. Obviously there are many other factors, including ones that point in different directions, but there is some kind of positive force here.
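To make a few of the bullets above more concrete, here are some minimal sketches. First, the "average then extremize" aggregation mentioned in the forecasting bullet. The particular transform and the exponent below are my own illustrative assumptions, not the exact method or parameters from the Good Judgment Project's report:

```python
def extremized_mean(probabilities, a=2.5):
    """Average a set of probability forecasts, then push the result away
    from 0.5 ("extremizing"). The exponent `a` controls how aggressively
    the aggregate is pushed toward 0 or 1; 2.5 is an arbitrary choice here.
    """
    p = sum(probabilities) / len(probabilities)
    return p ** a / (p ** a + (1 - p) ** a)

# Five forecasters who each lean toward "yes" yield a more confident aggregate
# than their simple average of 0.65.
print(extremized_mean([0.6, 0.65, 0.7, 0.62, 0.68]))  # ~0.82
```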
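Second, a toy version of the PageRank idea. This is a bare-bones power iteration on a tiny hypothetical link graph, not Google's production algorithm; the damping factor and iteration count are conventional illustrative choices:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal power-iteration PageRank on a dict mapping each page to the
    pages it links to. A page earns rank when pages that themselves have
    rank link to it, analogous to trusting sources that trusted sources trust.
    """
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for p, outlinks in links.items():
            if not outlinks:  # a page with no links shares its rank evenly
                for q in pages:
                    new_rank[q] += damping * rank[p] / len(pages)
            else:
                for q in outlinks:
                    new_rank[q] += damping * rank[p] / len(outlinks)
        rank = new_rank
    return rank

# Toy web: C is linked to by both A and B, so it ends up with the most rank.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```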
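Third, the calculation behind the Condorcet Jury Theorem bullet. Under the theorem's idealized assumptions of independent, equally reliable voters (real people are neither), majority accuracy climbs quickly with group size:

```python
from math import comb

def majority_correct_probability(n, p):
    """Probability that a simple majority of n independent voters is right,
    when each voter is independently right with probability p."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# Modestly reliable individuals (p = 0.6) become a very reliable majority.
for n in (1, 11, 101):
    print(n, round(majority_correct_probability(n, 0.6), 3))
```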
Cases where people often don’t follow the framework but I think they should
I have seen a variety of cases where I believe people don’t follow the principles I advocate. There are certain types of errors that I think many ordinary people make and others that are more common for sophisticated people to make. Most of these boil down to giving too much weight to personal judgments, giving too much weight to people who are impressive to you personally but not impressive by clear and uncontroversial standards, or not putting enough weight on what elite common sense has to say.
Giving too much weight to the opinions of people like you: People tend to hold religious views and political views that are similar to the views of their parents. Many of these people probably aren’t trying to have accurate views. And the situation would be much better if people gave more weight to the aggregated opinion of a broader coalition of perspectives.
I think a different problem arises in the LessWrong and effective altruism communities. In this case, people are much more reflectively choosing which sets of people to get their beliefs from, and I believe they are getting beliefs from some pretty good people. However, taking an outside perspective, it seems overwhelmingly likely that these communities are subject to their own biases and blind spots, and the people who are most attracted to these communities are most likely to suffer from the same biases and blind spots. I suspect elite common sense would take these communities more seriously than it currently does if it had access to more information about the communities, but I don’t think it would take us sufficiently seriously to justify having high confidence in many of our more unusual views.
Being overconfident on open questions where we don’t have a lot of evidence to work with: In my experience, it is common to give little weight to common sense takes on questions about which there is no generally accepted answer, even when it is impossible to use commonsense reasoning to arrive at conclusions that get broad support. Some less sophisticated people seem to see this as a license to think whatever they want, as Paul Graham has commented in the case of politics and religion. I meet many more sophisticated people with unusual views about big picture philosophical, political, and economic questions in areas where they have very limited inside information and very limited information about the distribution of expert opinion. For example, I have now met a reasonably large number of non-experts who have very confident, detailed, unusual opinions about meta-ethics, libertarianism, and optimal methods of taxation. When I challenge people about this, I usually get some version of “people are not good at thinking about this question” but rarely a detailed explanation of why this person in particular is an exception to this generalization (more on this problem below).
There’s an inverse version of this problem where people try to “suspend judgment” on questions where they don’t have high-quality evidence, but actually end up taking very unusual stances without adequate justification. For example, I sometimes talk with people who say that improving the very long-term future would be overwhelmingly important if we could do it, but are skeptical about whether we can. In response, I sometimes run arguments of the form:
- In expectation, it is possible to improve broad feature X of the world (education, governance quality, effectiveness of the scientific community, economic prosperity).
- If we improve feature X, it will help future people deal with various big challenges and opportunities better in expectation.
- If people deal with these challenges and opportunities better in expectation, the future will be better in expectation.
- Therefore, it is possible to make the future better in expectation.
I’ve presented some preliminary thoughts on related issues here. Some people try to resist this argument on grounds of general skepticism about attempts at improving the world that haven’t been documented with high-quality evidence. Peter Hurford’s post on “speculative causes” is the closest example that I can point to online, though I’m not sure whether he still disagrees with me on this point. I believe that there can be some adjustment in the direction of skepticism in light of arguments that GiveWell has articulated here under “we are relatively skeptical,” but I consider rejecting the second premise on these grounds a significant departure from elite common sense. I would have a similar view about anyone who rejected any of the other premises—at least if they rejected them for all values of X—for such reasons. It’s not that I think the presumption in favor of elite common sense can’t be overcome—I strongly favor thinking about such questions more carefully and am open to changing my mind—it’s just that I don’t think it can be overcome by these types of skeptical considerations. Why not? These types of considerations seem like they could make the probability distribution over impact on the very long-term narrower, but I don’t see how they could put it tightly around zero. And in any case, GiveWell articulates other considerations in that post and other posts which point in favor of less skepticism about the second premise.
Part of the issue may be confusion about “rejecting” a premise and “suspending judgment.” In my view, the question is “What are the expected long-term effects of improving factor X?” You can try not to think about this question or say “I don’t know,” but when you make decisions you are implicitly committed to certain ranges of expected values on these questions. To justifiably ignore very long-term considerations, I think you probably need your implicit range to be close to zero. I often see people who say they are “suspending judgment” about these issues or who say they “don’t know” acting as if this range were very close to zero. I see this as a very strong, precise claim which is contrary to elite common sense, rather than an open-minded, “we’ll wait until the evidence comes in” type of view to have. Another way to put it is that my claim that improving some broad factor X has good long-run consequences is much more of an anti-prediction than the claim that its expected effects are close to zero. (Independent point: I think that a more compelling argument than the argument that we can’t affect the far future is the argument that lots of ordinary actions have flow-through effects with astronomical expected impacts if anything does, so that people aiming explicitly at reducing astronomical waste are less privileged than one might think at first glance. I hope to write more about this issue in the future.)
Putting too much weight on your own opinions because you have better arguments on topics that interest you than other people, or the people you typically talk to: As mentioned above, I believe that some smart people, especially smart people who rely a lot on explicit reasoning, can become very good at developing strong arguments for their opinions without being very good at finding true beliefs. I think that in such instances, these people will generally not be very successful at getting a broad coalition of impressive people to accept their views (except perhaps by relying on non-rational methods of persuasion). Stress-testing your views by trying to actually convince others of your opinions, rather than just out-arguing them, can help you avoid this trap.
Putting too much weight on the opinions of single individuals who seem trustworthy to you personally but not to people in general, and have very unusual views: I have seen some people update significantly in favor of very unusual philosophical, scientific, and sociological claims when they encounter very intelligent advocates of these views. These people are often familiar with Aumann’s agreement theorem and arguments for splitting the difference with epistemic peers, and they are rightly troubled by the fact that someone fairly similar to them disagrees with them on an issue, so they try to correct for their own potential failures of rationality by giving additional weight to the advocates of these very unusual views.
However, I believe that taking disagreement seriously favors giving these very unusual views less weight, not more. The problem partly arises because philosophical discussion of disagreement often focuses on the simple case of two people sharing their evidence and opinions with each other. But what’s more relevant is the distribution of quality-weighted opinion around the world in general, not the distribution of quality-weighted opinion of the people that you have had discussions with, and not the distribution of quality-weighted opinion of the people that seem trustworthy to you personally. The epistemically modest move here is to try to stay closer to elite common sense, not to split the difference.
Objections to this approach
Objection: elite common sense is often wrong
One objection I often hear is that elite common sense is often wrong. I believe this is true, but not a problem for my framework. I make the comparative claim that elite common sense is more trustworthy than the idiosyncratic standards of the vast majority of individual people, not the claim that elite common sense is almost always right. A further consideration is that analogous objections to analogous views fail. For instance, “markets are often wrong in their valuation of assets” is not a good objection to the efficient markets hypothesis. As explained above, the argument that “markets are often wrong” needs to point to a specific way in which one can do better than the market in order for it to make sense to place less weight on what the market says than on one’s own judgments.
Objection: the best people are highly unconventional
Another objection I sometimes hear is that the most successful people often pay the least attention to conventional wisdom. I think this is true, but not a problem for my framework. One reason I believe this is that, according to my framework, when you go against elite common sense, what matters is whether elite common sense reasoning standards would justify your opinion if someone following those standards knew about your background, information, and analysis. Though I can’t prove it, I suspect that the most successful people often depart from elite common sense in ways that elite common sense would endorse if it had access to more information. I also believe that the most successful people tend to pay attention to elite common sense in many areas, and specifically bet against elite common sense in areas where they are most likely to be right.
A second consideration is that going against elite common sense may be a high-risk strategy, so that it is unsurprising if we see the most successful people pursuing it. People who give less weight to elite common sense are more likely to spend their time on pointless activities, join cults, and become crackpots, though they are also more likely to have revolutionary positive impacts. Consider an analogy: it may be that the gamblers who earned the most used the riskiest strategies, but this is not good evidence that you should use a risky strategy when gambling because the people who lost the most also played risky strategies.
A third consideration is that while it may be unreasonable to be too much of an independent thinker in a particular case, being an independent thinker helps you develop good epistemic habits. I think this point has a lot of merit, and could help explain why independent thinking is more common among the most successful people. This might seem like a good reason not to pay much attention to elite common sense. However, it seems to me that you can get the best of both worlds by being an independent thinker and keeping separate track of your own impressions and what elite common sense would make of your evidence. Where conflicts come up, you can try to use elite common sense to guide your decisions.
I feel my view is weakest in cases where there is a strong upside to disregarding elite common sense, there is little downside, and you’ll find out whether your bet against conventional wisdom was right within a tolerable time limit. Perhaps many crazy-sounding entrepreneurial ideas and scientific hypotheses fit this description. I believe it may make sense to pick a relatively small number of these to bet on, even in cases where you can’t convince elite common sense that you are on the right track. But I also believe that in cases where you really do have a great but unconventional idea, it will be possible to convince a reasonable chunk of elite common sense that your idea is worth trying out.
Objection: elite common sense is wrong about X, and can’t be talked out of it, so your framework should be rejected in general
Another common objection takes the form: view X is true, but X is not a view which elite common sense would give much weight to. Eliezer makes a related argument here, though he is addressing a different kind of deference to common sense. He points to religious beliefs, beliefs about diet, and the rejection of cryonics as evidence that you shouldn’t just follow what the majority believes. My position is closer to “follow the majority’s epistemic standards” than “believe what the majority believes,” and closer still to “follow the best people’s epistemic standards without cherry-picking ‘best’ to suit your biases,” but objections of this form could have some force against the framework I have defended.
A first response is that unless one thinks there are many values of X in different areas where my framework fails, providing a few counterexamples is not very strong evidence that the framework isn’t helpful in many cases. This is a general issue in philosophy which I think is underappreciated, and I’ve made related arguments in chapter 2 of my dissertation. I think the most likely outcome of a careful version of this attack on my framework is that we identify some areas where the framework doesn’t apply or has to be qualified.
But let’s delve into the question about religion in greater detail. Yes, having some religious beliefs is generally more popular than being an atheist, and it would be hard to convince intelligent religious people to become atheists. However, my impression is that my framework does not recommend believing in God. Here are a number of weak arguments for this claim:
- My impression is that the people who are most trustworthy by clear and generally accepted standards are significantly more likely to be atheists than the general population. One illustration of my perspective is that in a 1998 survey of the National Academy of Sciences, only 7% of respondents reported that they believed in God. However, there is a flame war and people have pushed many arguments on this issue, and scientists are probably unrepresentative of many trustworthy people in this respect.
- While the world at large has broad agreement that some kind of higher power exists, there is very substantial disagreement about what this means, to the point where it isn’t clear that these people are talking about the same thing.
- In my experience, people generally do not try very hard to have accurate beliefs about religious questions and have little patience for people who want to carefully discuss arguments about religious questions at length. This makes it hard to stress-test one’s views about religion by trying to get a broad coalition of impressive people to accept atheism, and makes it possible to give more weight to one’s personal take if one has thought unusually carefully about religious questions.
- People are generally raised in religious families, and there are substantial social incentives to remain religious. Social incentives for atheists to remain non-religious generally seem weaker, though they can also be substantial. For example, given my current social network, I believe I would pay a significant cost if I wanted to become religious.
- Despite the above point, in my experience, it is much more common for religious people to become atheists than it is for atheists to become religious.
- In my experience, among people who try very hard to have accurate beliefs about whether God exists, atheism is significantly more common than belief in God.
- In my experience, the most impressive people who are religious tend not to behave much differently from atheists or have different takes on scientific questions/questions about the future.
These points rely a lot on my personal experience, could stand to be researched more carefully, and feel uncomfortably close to lousy contrarian excuses, but I think they are nevertheless suggestive. In light of these points, I think my framework recommends that the vast majority of people with religious beliefs should be substantially less confident in their views, recommends modesty for atheists who haven’t tried very hard to be right, and I suspect it allows reasonably high confidence that God doesn’t exist for people who have strong indicators that they have thought carefully about the issue. I think it would be better if I saw a clear and principled way for the framework to push more strongly in the direction of atheism, but the case has enough unusual features that I don’t see this as a major argument against the general helpfulness of the framework.
As a more general point, the framework seems less helpful in the case of religion and politics because people are generally unwilling to carefully consider arguments with the goal of having accurate beliefs. By and large, when people are unwilling to carefully consider arguments with the goal of having accurate beliefs, this is evidence that it is not useful to try to think carefully about this area. This follows from the idea mentioned above that people tend to try to have accurate views when it is in their present interests to have accurate views. So if this is the main way the framework breaks down, then the framework is mostly breaking down in cases where good epistemology is relatively unimportant.
Conclusion
I’ve outlined a framework for taking account of the distribution of opinions and epistemic standards in the world and discussed some of its strengths and weaknesses. I think the largest strengths of the framework are that it can help you avoid falling prey to idiosyncratic personal biases, and that using it lets you benefit from “wisdom of crowds” effects. The framework is less helpful in:
- cases where there is a large upside to disregarding elite common sense, there is little downside, and you’ll find out whether your bet against conventional wisdom was right within a tolerable time limit, and
- cases where people are unwilling to carefully consider arguments with the goal of having accurate beliefs.
Some questions for people who want to further develop the framework include:
- How sensitive is the framework to other reasonable choices of standards for selecting trustworthy people? Are there more helpful standards to use?
- How sensitive is the framework to reasonable choices of standards for aggregating opinions of trustworthy people?
- What are the best ways of getting a better grip on elite common sense?
- What other areas are there where the framework is particularly weak or particularly strong?
- Can the framework be developed in ways that make it more helpful in cases where it is weakest?