One problem with this is that you often can't access the actual epistemic standards of other people because they have no incentives to reveal them to you. Consider the case of the Blu-ray copy protection system BD+ (which is fresh in my mind because I just used it recently as an example elsewhere). I'm not personally involved with this case, but my understanding based on what I've read is that the Blu-ray consortium bought the rights to the system from a reputable cryptography consulting firm for several million dollars (presumably after checking with other independent consultants), and many studios chose Blu-ray over HD DVD because of it. (From Wikipedia: Several studios cited Blu-ray Disc's adoption of the BD+ anti-copying system as the reason they supported Blu-ray Disc over HD DVD. The copy protection scheme was to take "10 years" to crack, according to Richard Doherty, an analyst with Envisioneering Group.) And yet one month after Blu-ray discs were released using the system, it was broken, and those discs became copyable by anyone with a commercially available piece of software.
I think the actual majority opinion in the professional cryptography community, when the...
(Upvoted.) I have to say that I'm a lot more comfortable with the notion of elite common sense as a prior which can then be updated, a point of departure rather than an eternal edict; but it seems to me that much of the post is instead speaking of elite common sense as a non-defeasible posterior. (E.g. near the start, comparing it to philosophical majoritarianism.)
It also seems to me that much of the text has the flavor of what we would in computer programming call the B&D-nature, an attempt to impose strict constraints that prevent bad programs from being written, when there is not and may never be a programming language in which it is the least bit difficult to write bad programs, and all you can do is offer tools to people that (switching back to epistemology) make it easier for them to find the truth if they wish to do so, and make it clearer to them when they are shooting off their own foot. I remark, inevitably, that when it comes to discussing the case of God, you very properly - as I deem it proper - list off a set of perfectly good reasons to violate the B&D-constraints of your system. And this would actually make a good deal more sense if we were taking elite opini...
Just FYI, I think Eliezer's mental response to most of the questions/responses you raised here will be "I spent 40+ pages addressing these issues in my QM sequence. I don't have time to repeat those points all over again."
It might indeed be worth your time to read the QM sequence, so that you can have one detailed, thoroughly examined example of how one can acquire high confidence that the plurality of scientific elites (even in a high-quality field like physics) are just wrong. Or, if you read the sequence and come to a different conclusion, that would also be interesting.
Even if I read the QM sequence and find the arguments compelling, I still wouldn't feel as though I had enough subject matter expertise to rationally disagree with elite physicists with high confidence. I don't think that I'm more rational than Bohr, Heisenberg, Dirac, Feynman, Penrose, Schrödinger and Wigner. These people thought about quantum mechanics for decades. I wouldn't be able to catch up in a week. The probability that I'd be missing fundamental and relevant things that they knew would dominate my prior.
I'll think about reading the QM sequence.
I don't think that I'm more rational than Bohr, Heisenberg, Dirac, Feynman, Penrose, Schrödinger and Wigner. These people thought about quantum mechanics for decades. I wouldn't be able to catch up in a week.
We should separate "rationality" from "domain knowledge of quantum physics."
Certainly, each of these figures had greater domain knowledge of quantum physics than I plan to acquire in my life. But that doesn't mean I'm powerless to (in some cases) tell when they're just wrong. The same goes for those with immense domain knowledge of – to take an unusually clear-cut example – Muslim theology.
Consider a case where you talk to a top Muslim philosopher and extract his top 10 reasons for believing in Allah. His arguments have obvious crippling failures, so you try to steel man them, and you check whether you're just misinterpreting the arguments, but in the end they're just... terrible arguments. And then you check with lots of other leading Muslim philosophers and say "Give me your best reasons" and you just get nothing good. At that point, I think you're in a pretty good position to reject the Muslim belief in Allah, rather than saying "Well, ...
I feel the main disanalogy with the Muslim theology case is that elite common sense does not regard top Muslim philosophers as having much comparative expertise on the question of whether Allah exists, but they would regard top physicists as having very strong comparative expertise on the interpretation of quantum mechanics. By this I mean that elite common sense would generally substantially defer to the opinions of the physicists but not the Muslim philosophers. This disanalogy is sufficiently important to me that I find the overall argument by analogy highly non-compelling.
I note that there are some meaningful respects in which elite common sense would regard the Muslim philosophers as epistemic authorities. They would recognize their authority as people who know about what the famous arguments for Allah's existence and nature are, what the famous objections and replies are, and what has been said about intricacies of related metaphysical questions, for example.
The intrinsic interest of the question of interpretation of quantum mechanics
The question of what quantum mechanics means has been considered one of the universe’s great mysteries. As such, people interested in physics have been highly motivated to understand it. So I think that the question is privileged relative to other questions that physicists would have opinions on — it’s not an arbitrary question outside of the domain of their research accomplishments.
My understanding is that the interpretation of QM (1) is not regarded as a very central question in physics, being seen more as a "philosophy" question and being worked on to a reasonable extent by philosophers of physics and physicists who see it as a hobby horse, (2) is not something for which physics expertise--having good physical intuition, strong math skills, detailed knowledge of how to apply QM to concrete problems--is as relevant as it is for many other questions physicists work on, and (3) is not something about which there is an extremely enormous amount to say. These are some of the main reasons I feel I can update at all from the expert distribution of physicists on this question. I would hardly update at all fro...
A minor quibble.
quantum gravity vs. string theory
I believe you are using bad terminology. 'Quantum gravity' refers to any attempt to reconcile quantum mechanics and general relativity, and string theory is one such theory (as well as a theory of everything). Perhaps you are referring to loop quantum gravity, or more broadly, to any theory of quantum gravity other than string theory?
Let me first say that I find this to be an extremely interesting discussion.
In almost all domains, I think that the highest intellectual caliber people have no more than 5x my intellectual caliber. Physics is different. From what I’ve heard, the distribution of talent in physics is similar to that of math. The best mathematicians are 100x+ my intellectual caliber.
I think there is a social norm in mathematics and physics that requires people to say this, but I have serious doubts about whether it is true. Anyone 100x+ your intellectual caliber should be having much, much more impact on the world (to say nothing of mathematics itself) than any of the best mathematicians seem to be having. At the very least, if there really are people of that cognitive level running around, then the rest of the world is doing an absolutely terrible job of extracting information and value from them, and they themselves must not care too much about this fact.
More plausible to me is the hypothesis that the best mathematicians are within the same 5x limit as everyone else, and that you overestimate the difficulty of performing at their level due to cultural factors which discourage systematic study of...
Try this thought experiment: suppose you were a graduate student in mathematics, and went to your advisor and said: "I'd like to solve [Famous Problem X], and to start, I'm going to spend two years closely examining the work of Newton, Gauss, and Wiles, and their contemporaries, to try to discern at a higher level of generality what the cognitive stumbling blocks to solving previous problems were, and how they overcame them, and distill these meta-level insights into a meta-level technique of my own which I'll then apply to [Famous Problem X]."
This is a terrible idea unless they're spending half their time pushing their limits on object-level math problems. I just don't think it works to try to do a meta phase before an object phase unless the process is very, very well-understood and tested already.
(I'll also note that it's somewhat odd to hear this response from someone whose entire mission in life is essentially to go meta on all of humanity's problems...)
That's not the kind of meta I mean. The dangerous form of meta is when you spend several years preparing to do X, supposedly becoming better at doing X, but not actually doing X, and then try to do X. E.g. college. Trying to improve at doing X while doing X is much, much wiser. I would similarly advise Effective Altruists who are not literally broke to be donating $10 every three months to something while they are trying to increase their incomes and invest in human capital; furthermore, they should not donate to the same thing two seasons in a row, so that they are also practicing the skill of repeatedly assessing which charity is most important.
"Meta" for these purposes is any daily activity which is unlike the daily activity you intend to do 'later'.
Tight feedback loops are good, but not always available. This is a separate consideration from doing meta while doing object.
The activity of understanding someone else's proofs may be unlike the activity of producing your own new math from scratch; this would be the problem.
The extraordinary intellectual caliber of the best physicists
That is of course exactly why I picked QM and MWI to make my case for nihil supernum. It wouldn't serve to break a smart person's trust in a sane world if I demonstrated the insanity of Muslim theologians or politicians; they would just say, "But surely we should still trust in elite physicists." It is by demonstrating that trust in a sane world fails even at the strongest point which 'elite common sense' would expect to find, that I would hope to actually break someone's emotional trust, and cause them to just give up.
I haven't fully put together my thoughts on this, but it seems like a bad test to "break someone's trust in a sane world" for a number of reasons:
this is a case where all the views are pretty much empirically indistinguishable, so it isn't an area where physicists really care all that much
since the views are empirically indistinguishable, it is probably a low-stakes question, so the argument doesn't transfer well to breaking our trust in a sane world in high-stakes cases; it makes sense to assume people would apply more rationality in cases where more rationality pays off
as I said in another comment, MWI seems like a case where physics expertise is not really what matters, so this doesn't really show that the scientific method as applied by physicists is broken; it seems that at most it shows that physicists aren't good at questions that are essentially philosophical; it would be much more persuasive if you showed that, e.g., quantum gravity was obviously better than string theory and only 18% of physicists working in the relevant area thought so
[Edited to add a missing "not"]
From my perspective, the main point is that if you'd expect AI elites to handle FAI competently, you would expect physics elites to handle MWI competently - the risk factors in the former case are even greater. Requires some philosophical reasoning? Check. Reality does not immediately call you out on being wrong? Check. The AI problem is harder than MWI and it has additional risk factors on top of that, like losing your chance at tenure if you decide that your research actually needs to slow down. Any elite incompetence beyond the demonstrated level in MWI doesn't really matter much to me, since we're already way under the 'pass' threshold for FAI.
This of course is exactly what Muslim theologians would say about Muslim theology. And I'm perfectly happy to say, "Well, the physicists are right and Muslim theologians are wrong", but that's because I'm relying on my own judgment thereon.
The whole point of it (well, one of the whole points) is to show people who wouldn't previously believe such a thing was plausible, that they ought to disagree with elite physicists with high confidence - to break their trust in a sane world, before which nothing can begin.
Does it worry you that people with good domain knowledge of physics (Shminux, Mitchell Porter, myself) seem to feel that your QM sequence is actually presenting a misleading picture of why some elite physicists don't hold to many worlds with high probability?
Also, is it desirable to train rationalists to believe that they SHOULD update their belief about interpretations of quantum mechanics above a weighted sampling of domain experts based on ~50 pages of high-school-level physics exposition? I would hope anyone whose sole knowledge of quantum mechanics is the sequence puts HUGE uncertainty bands around any estimate of the proper interpretation of quantum mechanics, because there is so much they don't know (and even more they don't know that they don't know).
The whole point of it (well, one of the whole points) is to show people who wouldn't previously believe such a thing was plausible, that they ought to disagree with elite physicists with high confidence
-1 for unjustified arrogance. The QM sequence has a number of excellent teaching points, like the discussion of how we can be sure that all electrons are identical, but the idea that one can disagree with experts in a subject matter without first studying the subject matter in depth is probably the most irrational, contagious and damaging idea in all of the sequences.
Given that there's some serious fundamental physics we don't understand yet, I find it hard to persuade myself there's less than a 1% chance that even the framing of single-world versus many-world interpretations is incoherent.
I haven't seen any of these interpretation polls with a good random sample, as opposed to niche meetings.
One of the commenters below the Carroll blog post you linked suggests that poll was from a meeting organized by a Copenhagen proponent:
I think that one of the main things I learned from this poll is that if you conduct a poll at a conference organized by Zeilinger then Copenhagen will come out top, whereas if you conduct a poll at a conference organized by Tegmark then many worlds will come out top. Is this a surprise to anyone?
The results from Tegmark's "Everett@50" conference (even more obvious bias there, but this one allowed a "none of the above/undecided" option, which was very popular) are discussed in this paper:
Which interpretation of quantum mechanics is closest to your own?
2 Copenhagen or consistent histories (including postulate of explicit collapse)
5 Modified dynamics (Schrödinger equation modified to give explicit collapse)
19 Many worlds/consistent histories (no collapse)
2 Bohm
1.5 Modal
22.5 None of the above/undecided
- Do you feel comfortable saying that Everettian parallel universes are as real as our universe? (14 Yes/26 No/8 Undecided)
A 1997...
Quick remarks (I may or may not be able to say more later).
If your system allows you to update to 85% in favor of Many-Worlds based on moderate familiarity with the arguments, then I think I'm essentially okay with what you're actually doing. I'm not sure I'm okay with what the OP advocates doing, but I'm okay with what you just did there.
Data point: I have emailed the top ~10 researchers in each of 3 different fields in which I was completely naive at the time (social psychology, computational social simulation, neuroscience of morality) - around 30 researchers in total - and they all tended to engage with my questions, with subsequent e-mail conversations of 5 to 30 emails. I had no idea how easy it was to get a top researcher to engage in a conversation with a naive person. Knowing this made me much more inclined to apply the prescription of this post - one I am in agreement with. (I understand that what this post prescribes is far more complex than actually engaging in a conversation with the top researchers of an area.)
Formatting issue: for long posts to the main page, it's nice to have a cut after the introduction.
It's nice to have this down in linkable format.
...As a more general point, the framework seems less helpful in the case of religion and politics because people are generally unwilling to carefully consider arguments with the goal of having accurate beliefs. By and large, when people are unwilling to carefully consider arguments with the goal of having accurate beliefs, this is evidence that it is not useful to try to think carefully about this area. This follows from the idea mentioned above that people tend to try to have accurate views when it is in their
Great post, Nick! I agree with most of what you say, although there are times when I don't always demonstrate this in practice. Your post is what I would consider a good "motivational speech" -- an eloquent defense of something you agree with but could use reminding of on occasion.
It's good to get outside one's intellectual bubble, even a bubble as fascinating and sophisticated as LessWrong. Even on the seemingly most obvious of questions, we could be making logical mistakes.
I think the focus on only intellectual elites has unclear grounding. Is ...
Great article! A couple of questions:
(1) Can you justify picking 'the top 10% of people who got Ivy-League-equivalent educations' as an appropriate elite a little more? How will the elite vary (in size and in nature) for particular subjects?
(2) Can you (or others) give more examples of the application of this method to particular questions? Personally, I'm especially interested in cases where it'd depart from decision-relevant views held by a substantial fraction of effective altruists.
I feel my view is weakest in cases where there is a strong upside to disregarding elite common sense, there is little downside, and you’ll find out whether your bet against conventional wisdom was right within a tolerable time limit. Perhaps many crazy-sounding entrepreneurial ideas and scientific hypotheses fit this description.
It doesn't look too hard to fold these scenarios into the framework. Elite common sense tends to update in light of valid novel arguments, but this updating is slow. So if you're in possession of a novel argument you often don't...
I think the problem with elite common sense is that it can be hard to determine whose views count. Take for instance the observed behavior of the investing public determined by where they put their assets in contrast with the views of most economists. If you define the elite as the economists, you get a different answer from the elite as people with money. (The biggest mutual funds are not the lowest cost ones, though this gap has narrowed over time, and economists generally prefer lower cost funds very strongly)
Upvoted for clarity, but fantastically wrong, IMHO. In particular, "I suspect that taking straight averages gives too much weight to the opinions of cranks and crackpots, so that you may want to remove some outliers or give less weight to them. " seems to me to be unmotivated by epistemology and visibly motivated by conformity.
The overall framework is sensible, but I have trouble applying it to the most vexing cases: where the respected elites mostly just giggle at a claim and seem to refuse to even think about reasons for or against it, but instead just confidently reject it. It might seem to me that their usual intellectual standards would require that they engage in such reasoning, but the fact that they do not in fact think that appropriate in this case is evidence of something. But what?
Can you expand a little on how you would "try to find out what elite common sense would make of [your] information and analysis"? Is the following a good example of how to do it?
By and large, when people are unwilling to carefully consider arguments with the goal of having accurate beliefs, this is evidence that it is not useful to try to think carefully about this area. This follows from the idea mentioned above that people tend to try to have accurate views when it is in their present interests to have accurate views. So if this is the main way the framework breaks down, then the framework is mostly breaking down in cases where good epistemology is relatively unimportant.
That's one clever trick right there!
Some empirical discussion of this issue can be found in Hochschild (2012) and the book it discusses, Zaller (1992).
How do you think this would apply to social issues? It seems like this would be a poor way to be at the forefront of social change. If this strategy were widely applied, would we ever have seen the 15th and 19th amendments to the Constitution here in the US?
On a more personal basis, I'm polyamorous, but if I followed your framework, I would have to reject polyamory as a viable relationship model. Yes, the elite don't have a lot of data on polyamory, and although I have researched the good and the bad, and how it can work compared to monogamy, I don't think that I would be able to convince the elite of my opinions.
Introduction
[I have edited the introduction of this post for increased clarity.]
This post is my attempt to answer the question, "How should we take account of the distribution of opinion and epistemic standards in the world?" By “epistemic standards,” I roughly mean a person’s way of processing evidence to arrive at conclusions. If people were good Bayesians, their epistemic standards would correspond to their fundamental prior probability distributions. At a first pass, my answer to this question is:
Main Recommendation: Believe what you think a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to your evidence.
The rest of the post can be seen as an attempt to spell this out more precisely and to explain, in practical terms, how to follow the recommendation. Note that there are therefore two broad ways to disagree with the post: you might disagree with the main recommendation, or with the guidelines for following the main recommendation.
I am aware of two relatively close intellectual relatives to my framework: what philosophers call “equal weight” or “conciliatory” views about disagreement and what people on LessWrong may know as “philosophical majoritarianism.” Equal weight views roughly hold that when two people who are expected to be roughly equally competent at answering a certain question have different subjective probability distributions over answers to that question, those people should adopt some impartial combination of their subjective probability distributions. Unlike equal weight views in philosophy, my position is meant as a set of rough practical guidelines rather than a set of exceptionless and fundamental rules. I accordingly focus on practical issues for applying the framework effectively and am open to limiting the framework’s scope of application. Philosophical majoritarianism is the idea that on most issues, the average opinion of humanity as a whole will be a better guide to the truth than one’s own personal judgment. My perspective differs from both equal weight views and philosophical majoritarianism in that it emphasizes an elite subset of the population rather than humanity as a whole and that it emphasizes epistemic standards more than individual opinions. My perspective differs from what you might call "elite majoritarianism" in that, according to me, you can disagree with what very trustworthy people think on average if you think that those people would accept your views if they had access to your evidence and were trying to have accurate opinions.
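To make the contrast concrete, here is a minimal sketch of the arithmetic involved, with made-up numbers and a simple linear opinion pool that I am supplying purely for illustration (nothing in the post specifies this rule): an equal weight view splits the difference with an epistemic peer, while the elite-common-sense flavor of the idea amounts to putting far more weight on a broad coalition of trustworthy people than on your own personal impression.

```python
# Hypothetical illustration of combining subjective probabilities.
# The linear-pool rule and all numbers are assumptions for the example,
# not anything specified in the post.

def linear_pool(probabilities, weights):
    """Combine subjective probabilities as a weighted average (a linear opinion pool)."""
    total_weight = sum(weights)
    return sum(p * w for p, w in zip(probabilities, weights)) / total_weight

# Equal weight view: you (0.9) and an epistemic peer (0.6) split the difference.
print(linear_pool([0.9, 0.6], [1, 1]))   # 0.75

# Elite-common-sense flavor: weight the broad coalition of trustworthy people
# (here 0.6) much more heavily than your own impression (0.9).
print(linear_pool([0.9, 0.6], [1, 10]))  # roughly 0.63
```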
I am very grateful to Holden Karnofsky and Jonah Sinick for thought-provoking conversations on this topic which led to this post. Many of the ideas ultimately derive from Holden’s thinking, but I've developed them, made them somewhat more precise and systematic, discussed additional considerations for and against adopting them, and put everything in my own words. I am also grateful to Luke Muehlhauser and Pablo Stafforini for feedback on this post.
In the rest of this post I will:
An outline of the framework and some guidelines for applying it effectively
My suggestion is to use elite common sense as a prior rather than the standards of reasoning that come most naturally to you personally. The three main steps for doing this are:
On the first step, people often have an instinctive sense of what others think, though you should beware the false consensus effect. If you don’t know what other opinions are out there, you can ask some friends or search the internet. In my experience, regular people often have similar opinions to very smart people on many issues, but are much worse at articulating considerations for and against their views. This may be because many people copy the opinions of the most trustworthy people.
I favor giving more weight to the opinions of people who can be shown to be trustworthy by clear indicators that many people would accept, rather than people that seem trustworthy to you personally. This guideline is intended to help avoid parochialism and increase self-skepticism. Individual people have a variety of biases and blind spots that are hard for them to recognize. Some of these biases and blind spots—like the ones studied in cognitive science—may affect almost everyone, but others are idiosyncratic—like biases and blind spots we inherit from our families, friends, business networks, schools, political groups, and religious communities. It is plausible that combining independent perspectives can help idiosyncratic errors wash out.
In order for the errors to wash out, it is important to rely on the standards of people who are trustworthy by clear indicators that many people would accept rather than the standards of people that seem trustworthy to you personally. Why? The people who seem most impressive to us personally are often people who have similar strengths and weaknesses to ourselves, and similar biases and blind spots. For example, I suspect that academics and people who specialize in using a lot of explicit reasoning have a different set of strengths and weaknesses from people who rely more on implicit reasoning, and people who rely primarily on many weak arguments have a different set of strengths and weaknesses from people who rely more on one relatively strong line of argument.
Some good indicators of general trustworthiness might include: IQ, business success, academic success, generally respected scientific or other intellectual achievements, wide acceptance as an intellectual authority by certain groups of people, or success in any area where there is intense competition and success is a function of ability to make accurate predictions and good decisions. I am less committed to any particular list of indicators than the general idea.
Of course, trustworthiness can also be domain-specific. Very often, elite common sense would recommend deferring to the opinions of experts (e.g., listening to what physicists say about physics, what biologists say about biology, and what doctors say about medicine). In other cases, elite common sense may give partial weight to what putative experts say without accepting it all (e.g. economics and psychology). In other cases, they may give less weight to what putative experts say (e.g. sociology and philosophy). Or there may be no putative experts on a question. In cases where elite common sense gives less weight to the opinions of putative experts or there are no plausible candidates for expertise, it becomes more relevant to think about what elite common sense would say about a question.
How should we assign weight to different groups of people? Other things being equal, a larger number of people is better, more trustworthy people are better, people who are trustworthy by clearer indicators that more people would accept are better, and a set of criteria which allows you to have some grip on what the people in question think is better, but you have to make trade-offs. If I only included, say, the 20 smartest people I had ever met as judged by me personally, that would probably be too small a number of people, the people would probably have biases and blind spots very similar to mine, and I would miss out on some of the most trustworthy people, but it would be a pretty trustworthy collection of people and I’d have some reasonable sense of what they would say about various issues. If I went with, say, the 10 most-cited people in 10 of the most intellectually credible academic disciplines, 100 of the most generally respected people in business, and the 100 heads of different states, I would have a pretty large number of people and a broad set of people who were very trustworthy by clear standards that many people would accept, but I would have a hard time knowing what they would think about various issues because I haven’t interacted with them enough. How these factors can be traded off against each other in a way that is practically most helpful probably varies substantially from person to person.
I can’t give any very precise answer to the question about whose opinions should be given significant weight, even in my own case. Luckily, I think the output of this framework is usually not very sensitive to how we answer this question, partly because most people would typically defer to other, more trustworthy people. If you want a rough guideline that I think many people who read this post could apply, I would recommend focusing on, say, the opinions of the top 10% of people who got Ivy-League-equivalent educations (note that I didn’t get such an education, at least as an undergrad, though I think you should give weight to my opinion; I’m just giving a rough guideline that I think works reasonably well in practice). You might give some additional weight to more accomplished people in cases where you have a grip on how they think.
I don’t have a settled opinion about how to aggregate the opinions of elite common sense. I suspect that taking straight averages gives too much weight to the opinions of cranks and crackpots, so that you may want to remove some outliers or give less weight to them. For the purpose of making decisions, I think that sophisticated voting methods (such as the Condorcet method) and analogues of the parliamentary approaches outlined by Nick Bostrom and Toby Ord seem fairly promising as rough guidelines in the short run. I don’t do calculations with this framework—as I said, it’s mostly conceptual—so uncertainty about an aggregation procedure hasn’t been a major issue for me.
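As a rough illustration of the point about outliers (my own sketch with invented numbers, not a procedure the post commits to), a trimmed mean keeps a couple of crank estimates from dragging the aggregate around the way a straight average would:

```python
# Hypothetical sketch: aggregate probability estimates while limiting the
# influence of cranks and crackpots by trimming the most extreme values.
# The trimming fraction and the estimates are made up for illustration.

def trimmed_mean(estimates, trim_fraction=0.1):
    """Average the estimates after dropping the top and bottom trim_fraction."""
    ordered = sorted(estimates)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k > 0 else ordered
    return sum(kept) / len(kept)

estimates = [0.7, 0.65, 0.72, 0.68, 0.71, 0.66, 0.69, 0.73, 0.99, 0.01]
print(sum(estimates) / len(estimates))  # straight average: 0.654, pulled by the extremes
print(trimmed_mean(estimates))          # trimmed mean: about 0.69, closer to the bulk
```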
On the margin, I favor paying more attention to people’s opinions than their explicitly stated reasons for their opinions. Why? One reason is that I believe people can have highly adaptive opinions and patterns of reasoning without being able to articulate good defenses of those opinions and/or patterns of reasoning. (Luke Muehlhauser has discussed some related points here.) One reason this can happen is that people can adopt practices that are successful without knowing why they are successful, others who interact with them can adopt those practices, still others can pick the practices up from them, and so forth. I heard an extreme example of this from Spencer Greenberg, who had read it in Scientists Greater than Einstein. The story involved a folk remedy for visual impairment:
There were folk remedies worthy of study as well. One widely used in Java on children with either night blindness or Bitot’s spots consisted of dropping the juices of lightly roasted lamb’s liver into the eyes of affected children. Sommer relates, “We were bemused at the appropriateness of this technique and wondered how it could possibly be effective. We, therefore, attended several treatment sessions, which were conducted exactly as the villagers had described, except for one small addition—rather than discarding the remaining organ, they fed it to the affected child. For some unknown reason this was never considered part of the therapy itself.” Sommer and his associates were bemused, but now understood why the folk remedy had persisted through the centuries. Liver, being the organ where vitamin A is stored in a lamb or any other animal, is the best food to eat to obtain vitamin A. (p. 14)
Another striking example is bedtime prayer. In many Christian traditions I am aware of, it is common to pray before going to sleep. And in the tradition I was raised in, the main components of prayer were listing things you were grateful for, asking for forgiveness for all the mistakes you made that day and thinking about what you would do to avoid similar mistakes in the future, and asking God for things. Christians might say the point of this is that it is a duty to God, that repentance is a requirement for entry to heaven, or that asking God for things makes God more likely to intervene and create miracles. However, I think these activities are reasonable for different reasons: gratitude journals are great, reflecting on mistakes is a great way to learn and overcome weaknesses, and it is a good idea to get clear about what you really want out of life in the short-term and the long-term.
Another reason I have this view is that if someone has an effective but different intellectual style from you, it’s possible that your biases and blind spots will prevent you from appreciating their points that have significant merit. If you partly give weight to opinions independently of how good the arguments seem to you personally, this can be less of an issue for you. Jonah Sinick described a striking reason this might happen in Many Weak Arguments and the Typical Mind:
We should pay more attention to people’s bottom line than to their stated reasons — If most high functioning people aren’t relying heavily on any one of the arguments that they give, if a typical high functioning person responds to a query of the type “Why do you think X?” by saying “I believe X because of argument Y” we shouldn’t conclude that the person believes argument Y with high probability. Rather, we should assume that argument Y is one of many arguments that they believe with low confidence, most of which they’re not expressing, and we should focus on their belief in X instead of argument Y. [emphasis his]
This idea interacts in a complementary way with Luke Muehlhauser’s claim that some people who are not skilled at explicit rationality may be skilled in tacit rationality, allowing them to be successful at making many types of important decisions. If we are interacting with such people, we should give significant weight to their opinions independently of their stated reasons.
A counterpoint to my claim that, on the margin, we should give more weight to others’ conclusions and less to their reasoning is that some very impressive people disagree. For example, Ray Dalio is the founder of Bridgewater, which, at least as of 2011, was the world’s largest hedge fund. He explicitly disagrees with my claim:
“I stress-tested my opinions by having the smartest people I could find challenge them so I could find out where I was wrong. I never cared much about others’ conclusions—only for the reasoning that led to these conclusions. That reasoning had to make sense to me. Through this process, I improved my chances of being right, and I learned a lot from a lot of great people.” (p. 7 of Principles by Ray Dalio)
I suspect that getting the reasoning to make sense to him was important because it helped him to get better in touch with elite common sense, and also because reasoning is more important when dealing with very formidable people, as I suspect Dalio did and does. I also think that for some of the highest functioning people who are most in touch with elite common sense, it may make more sense to give more weight to reasoning than conclusions.
The elite common sense framework favors testing unconventional views by seeing if you can convince a broad coalition of impressive people that your views are true. If you can do this, it is often good evidence that your views are supported by elite common sense standards. If you can’t, it’s often good evidence that your views can’t be so supported. Obviously, these are rules of thumb and we should restrict our attention to cases where you are persuading people by rational means, in contrast with using rhetorical techniques that exploit human biases. There are also some interesting cases where, for one reason or another, people are unwilling to hear your case or think about your case rationally, and applying this guideline to these cases is tricky.
Importantly, I don’t think cases where elite common sense is biased are typically an exception to this rule. In my experience, I have very little difficulty convincing people that some genuine bias, such as scope insensitivity, really is biasing their judgment. And if the bias really is critical to the disagreement, I think it will be a case where you can convince elite common sense of your position. Other cases, such as deeply entrenched religious and political views, may be more of an exception, and I will discuss the case of religious views more in a later section.
The distinction between convincing and “beating in an argument” is important for applying this principle. It is much easier to tell whether you convinced someone than it is to tell whether you beat them in an argument. Often, both parties think they won. In addition, sometimes it is rational not to update much in favor of a view if an advocate for that view beats you in an argument.
In support of this claim, consider what would happen if the world’s smartest creationist debated some fairly ordinary evolution-believing high school student. The student would be destroyed in argument, but the student should not reject evolution, and I suspect he should hardly update at all. Why not? The student should know that there are people out there in the world who could destroy him on either side of this argument, and his personal ability to respond to arguments is not very relevant. What should be most relevant to this student is the distribution of opinion among people who are most trustworthy, not his personal response to a small sample of the available evidence. Even if you genuinely are beating people in arguments, there is a risk that you will be like this creationist debater.
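A back-of-the-envelope Bayes calculation makes the point vivid; the numbers below are entirely made up for illustration. If a skilled debater would very likely crush the student whether or not evolution is true, then losing the debate has nearly the same likelihood under both hypotheses, and the student's posterior barely moves:

```python
# Hypothetical Bayes-factor sketch for the creationist-debate example.
# All probabilities are invented for illustration.

prior_evolution = 0.99           # student's prior that evolution is true

p_lose_if_evolution_true = 0.95  # skilled debaters beat novices even when wrong
p_lose_if_evolution_false = 0.99

# Posterior via Bayes' rule on the evidence "the student lost the debate".
numerator = p_lose_if_evolution_true * prior_evolution
denominator = numerator + p_lose_if_evolution_false * (1 - prior_evolution)
posterior_evolution = numerator / denominator

print(round(posterior_evolution, 4))  # about 0.9896: barely moved from 0.99
```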
An additional consideration is that certain beliefs and practices may be reasonable and adopted for reasons that are not accessible to people who have adopted those beliefs and practices, as illustrated with the examples of the liver ritual and bedtime prayer. You might be able to “beat” some Christian in an argument about the merits of bedtime prayer, but praying may still be better than not praying. (I think it would be better still to introduce a different routine that serves similar functions—this is something I have done in my own life—but the Christian may be doing better than you on this issue if you don’t have a replacement routine yourself.)
Under the elite common sense framework, the question is not “how reliable is elite common sense?” but “how reliable is elite common sense compared to me?” Suppose I learn that, actually, people are much worse at pricing derivatives than I previously believed. For the sake of argument suppose this was a lesson of the 2008 financial crisis (for the purposes of this argument, it doesn't matter whether this is actually a correct lesson of the crisis). This information does not favor relying more on my own judgment unless I have reason to think that the bias applies less to me than the rest of the derivatives market. By analogy, it is not acceptable to say, “People are really bad at thinking about philosophy. So I am going to give less weight to their judgments about philosophy (psst…and more weight to my personal hunches and the hunches of people I personally find impressive).” This is only OK if you have evidence that your personal hunches and the hunches of the people you personally find impressive are better than elite common sense, with respect to philosophy. In contrast, it might be acceptable to say, “People are very bad at thinking about the consequences of agricultural subsidies in comparison with economists, and most trustworthy people would agree with this if they had my evidence. And I have an unusual amount of information about what economists think. So my opinion gets more weight than elite common sense in this case.” Whether this ultimately is acceptable to say would depend on how good elites are at thinking about the consequences of agricultural subsidies—I suspect they are actually pretty good at it—but this isn’t relevant to the general point that I’m making. The general point is that this is one potentially correct form of an argument that your opinion is better than the current stance of elite common sense.
This is partly a semantic issue, but I count the above example as a case where “you are more reliable than elite common sense,” even though, in some sense, you are relying on expert opinion rather than your own. But you have different beliefs about who is a relevant expert or what experts say than common sense does, and in this sense you are relying on your own opinion.
I favor giving more weight to common sense judgments in cases where people are trying to have accurate views. For example, I think people don’t try very hard to have correct political, religious, and philosophical views, but they do try to have correct views about how to do their job properly, how to keep their families happy, and how to impress their friends. In general, I expect people to try to have more accurate views in cases where it is in their present interests to have more accurate views. (A quick reference for this point is here.) This means that I expect them to strive more for accuracy in decision-relevant cases, cases where the cost of being wrong is high, and cases where striving for more accuracy can be expected to yield more accuracy, though not necessarily in cases where the risks and rewards won’t come for a very long time. I suspect this is part of what explains why people can be skilled in tacit rationality but not explicit rationality.
As I said above, what’s critical is not how reliable elite common sense is but how reliable you are in comparison with elite common sense. So it only makes sense to give more weight to your views when learning that others aren’t trying to be correct if you have compelling evidence that you are trying to be correct. Ideally, this evidence would be compelling to a broad class of trustworthy people and not just compelling to you personally.
Some further reasons to think that the framework is likely to be helpful
In explaining the framework and outlining guidelines for applying it, I have given some reasons to expect this framework to be helpful. Here are some more weak arguments in favor of my view:
Cases where people often don’t follow the framework but I think they should
I have seen a variety of cases where I believe people don’t follow the principles I advocate. There are certain types of errors that I think many ordinary people make and others that are more common for sophisticated people to make. Most of these boil down to giving too much weight to personal judgments, giving too much weight to people who are impressive to you personally but not impressive by clear and uncontroversial standards, or not putting enough weight on what elite common sense has to say.
Giving too much weight to the opinions of people like you: People tend to hold religious views and political views that are similar to the views of their parents. Many of these people probably aren’t trying to have accurate views. And the situation would be much better if people gave more weight to the aggregated opinion of a broader coalition of perspectives.
I think a different problem arises in the LessWrong and effective altruism communities. In this case, people are much more reflectively choosing which sets of people to get their beliefs from, and I believe they are getting beliefs from some pretty good people. However, taking an outside perspective, it seems overwhelmingly likely that these communities are subject to their own biases and blind spots, and the people who are most attracted to these communities are most likely to suffer from the same biases and blind spots. I suspect elite common sense would take these communities more seriously than it currently does if it had access to more information about the communities, but I don’t think it would take us sufficiently seriously to justify having high confidence in many of our more unusual views.
Being overconfident on open questions where we don’t have a lot of evidence to work with: In my experience, it is common to give little weight to common sense takes on questions about which there is no generally accepted answer, even when it is impossible to use commonsense reasoning to arrive at conclusions that get broad support. Some less sophisticated people seem to see this as a license to think whatever they want, as Paul Graham has commented in the case of politics and religion. I meet many more sophisticated people with unusual views about big picture philosophical, political, and economic questions in areas where they have very limited inside information and very limited information about the distribution of expert opinion. For example, I have now met a reasonably large number of non-experts who have very confident, detailed, unusual opinions about meta-ethics, libertarianism, and optimal methods of taxation. When I challenge people about this, I usually get some version of “people are not good at thinking about this question” but rarely a detailed explanation of why this person in particular is an exception to this generalization (more on this problem below).
There’s an inverse version of this problem where people try to “suspend judgment” on questions where they don’t have high-quality evidence, but actually end up taking very unusual stances without adequate justification. For example, I sometimes talk with people who say that improving the very long-term future would be overwhelmingly important if we could do it, but are skeptical about whether we can. In response, I sometimes run arguments of the form:
I’ve presented some preliminary thoughts on related issues here. Some people try to resist this argument on grounds of general skepticism about attempts at improving the world that haven’t been documented with high-quality evidence. Peter Hurford’s post on “speculative causes” is the closest example that I can point to online, though I’m not sure whether he still disagrees with me on this point. I believe that there can be some adjustment in the direction of skepticism in light of arguments that GiveWell has articulated here under “we are relatively skeptical,” but I consider rejecting the second premise on these grounds a significant departure from elite common sense. I would have a similar view about anyone who rejected any of the other premises—at least if they rejected them for all values of X—for such reasons. It’s not that I think the presumption in favor of elite common sense can’t be overcome—I strongly favor thinking about such questions more carefully and am open to changing my mind—it’s just that I don’t think it can be overcome by these types of skeptical considerations. Why not? These types of considerations seem like they could make the probability distribution over impact on the very long-term narrower, but I don’t see how they could put it tightly around zero. And in any case, GiveWell articulates other considerations in that post and other posts which point in favor of less skepticism about the second premise.
Part of the issue may be confusion about “rejecting” a premise and “suspending judgment.” In my view, the question is “What are the expected long-term effects of improving factor X?” You can try not to think about this question or say “I don’t know,” but when you make decisions you are implicitly committed to certain ranges of expected values on these questions. To justifiably ignore very long-term considerations, I think you probably need your implicit range to be close to zero. I often see people who say they are “suspending judgment” about these issues or who say they “don’t know” acting as if this range were very close to zero. I see this as a very strong, precise claim which is contrary to elite common sense, rather than an open-minded, “we’ll wait until the evidence comes in” type of view to have. Another way to put it is that my claim that improving some broad factor X has good long-run consequences is much more of an anti-prediction than the claim that its expected effects are close to zero. (Independent point: I think that a more compelling argument than the argument that we can’t affect the far future is the argument that lots of ordinary actions have flow-through effects with astronomical expected impacts if anything does, so that people aiming explicitly at reducing astronomical waste are less privileged than one might think at first glance. I hope to write more about this issue in the future.)
Putting too much weight on your own opinions because you have better arguments on topics that interest you than other people, or the people you typically talk to: As mentioned above, I believe that some smart people, especially smart people who rely a lot on explicit reasoning, can become very good at developing strong arguments for their opinions without being very good at finding true beliefs. I think that in such instances, these people will generally not be very successful at getting a broad coalition of impressive people to accept their views (except perhaps by relying on non-rational methods of persuasion). Stress-testing your views by trying to actually convince others of your opinions, rather than just out-arguing them, can help you avoid this trap.
Putting too much weight on the opinions of single individuals who seem trustworthy to you personally but not to people in general, and have very unusual views: I have seen some people update significantly in favor of very unusual philosophical, scientific, and sociological claims when they encounter very intelligent advocates of these views. These people are often familiar with Aumann’s agreement theorem and arguments for splitting the difference with epistemic peers, and they are rightly troubled by the fact that someone fairly similar to them disagrees with them on an issue, so they try to correct for their own potential failures of rationality by giving additional weight to the advocates of these very unusual views.
However, I believe that taking disagreement seriously favors giving these very unusual views less weight, not more. The problem partly arises because philosophical discussion of disagreement often focuses on the simple case of two people sharing their evidence and opinions with each other. But what’s more relevant is the distribution of quality-weighted opinion around the world in general, not the distribution of quality-weighted opinion of the people that you have had discussions with, and not the distribution of quality-weighted opinion of the people that seem trustworthy to you personally. The epistemically modest move here is to try to stay closer to elite common sense, not to split the difference.
Objections to this approach
Objection: elite common sense is often wrong
One objection I often hear is that elite common sense is often wrong. I believe this is true, but not a problem for my framework. I make the comparative claim that elite common sense is more trustworthy than the idiosyncratic standards of the vast majority of individual people, not the claim that elite common sense is almost always right. A further consideration is that analogous objections to analogous views fail. For instance, “markets are often wrong in their valuation of assets” is not a good objection to the efficient markets hypothesis. As explained above, the argument that “markets are often wrong” needs to point to a specific way in which one can do better than the market in order for it to make sense to place less weight on what the market says than on one’s own judgments.
Objection: the best people are highly unconventional
Another objection I sometimes hear is that the most successful people often pay the least attention to conventional wisdom. I think this is true, but not a problem for my framework. One reason I believe this is that, according to my framework, when you go against elite common sense, what matters is whether elite common sense reasoning standards would justify your opinion if someone following those standards knew about your background, information, and analysis. Though I can’t prove it, I suspect that the most successful people often depart from elite common sense in ways that elite common sense would endorse if it had access to more information. I also believe that the most successful people tend to pay attention to elite common sense in many areas, and specifically bet against elite common sense in areas where they are most likely to be right.
A second consideration is that going against elite common sense may be a high-risk strategy, so that it is unsurprising if we see the most successful people pursuing it. People who give less weight to elite common sense are more likely to spend their time on pointless activities, join cults, and become crackpots, though they are also more likely to have revolutionary positive impacts. Consider an analogy: it may be that the gamblers who earned the most used the riskiest strategies, but this is not good evidence that you should use a risky strategy when gambling because the people who lost the most also played risky strategies.
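A small simulation can illustrate the selection effect in the gambling analogy; the parameters are entirely hypothetical and only meant to show the shape of the argument. When a risky strategy has the same expected value as a safe one but far higher variance, it dominates both the biggest winners and the biggest losers:

```python
# Hypothetical simulation of the selection effect: high-variance strategies fill
# both the top and the bottom of the outcome distribution, even with no edge in
# expected value. All parameters are made up.
import random

random.seed(0)

def payoff(risky):
    # Safe strategy: small, reliable variation. Risky strategy: same mean, huge variance.
    return random.gauss(1.0, 10.0) if risky else random.gauss(1.0, 0.1)

results = [(payoff(risky), risky) for risky in (True, False) for _ in range(10000)]
results.sort(reverse=True)

top_100 = results[:100]
bottom_100 = results[-100:]
print(sum(risky for _, risky in top_100))     # nearly all of the biggest winners are risky players
print(sum(risky for _, risky in bottom_100))  # ...and so are nearly all of the biggest losers
```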
A third consideration is that while it may be unreasonable to be too much of an independent thinker in a particular case, being an independent thinker helps you develop good epistemic habits. I think this point has a lot of merit, and could help explain why independent thinking is more common among the most successful people. This might seem like a good reason not to pay much attention to elite common sense. However, it seems to me that you can get the best of both worlds by being an independent thinker and keeping separate track of your own impressions and what elite common sense would make of your evidence. Where conflicts come up, you can try to use elite common sense to guide your decisions.
I feel my view is weakest in cases where there is a strong upside to disregarding elite common sense, there is little downside, and you’ll find out whether your bet against conventional wisdom was right within a tolerable time limit. Perhaps many crazy-sounding entrepreneurial ideas and scientific hypotheses fit this description. I believe it may make sense to pick a relatively small number of these to bet on, even in cases where you can’t convince elite common sense that you are on the right track. But I also believe that in cases where you really do have a great but unconventional idea, it will be possible to convince a reasonable chunk of elite common sense that your idea is worth trying out.
Objection: elite common sense is wrong about X, and can’t be talked out of it, so your framework should be rejected in general
Another common objection takes the form: view X is true, but X is not a view which elite common sense would give much weight to. Eliezer makes a related argument here, though he is addressing a different kind of deference to common sense. He points to religious beliefs, beliefs about diet, and the rejection of cryonics as evidence that you shouldn’t just follow what the majority believes. My position is closer to “follow the majority’s epistemic standards” than “believe what the majority believes,” and closer still to “follow the best people’s epistemic standards without cherry picking ‘best’ to suit your biases,” but objections of this form could have some force against the framework I have defended.
A first response is that unless one thinks there are many values of X in different areas where my framework fails, providing a few counterexamples is not very strong evidence that the framework isn’t helpful in many cases. This is a general issue in philosophy which I think is underappreciated, and I’ve made related arguments in chapter 2 of my dissertation. I think the most likely outcome of a careful version of this attack on my framework is that we identify some areas where the framework doesn’t apply or has to be qualified.
But let’s delve into the question about religion in greater detail. Yes, having some religious beliefs is generally more popular than being an atheist, and it would be hard to convince intelligent religious people to become atheists. However, my impression is that my framework does not recommend believing in God. Here are a number of weak arguments for this claim:
These points rely a lot on my personal experience, could stand to be researched more carefully, and feel uncomfortably close to lousy contrarian excuses, but I think they are nevertheless suggestive. In light of these points, I think my framework recommends that the vast majority of people with religious beliefs should be substantially less confident in their views, recommends modesty for atheists who haven’t tried very hard to be right, and I suspect it allows reasonably high confidence that God doesn’t exist for people who have strong indicators that they have thought carefully about the issue. I think it would be better if I saw a clear and principled way for the framework to push more strongly in the direction of atheism, but the case has enough unusual features that I don’t see this as a major argument against the general helpfulness of the framework.
As a more general point, the framework seems less helpful in the case of religion and politics because people are generally unwilling to carefully consider arguments with the goal of having accurate beliefs. By and large, when people are unwilling to carefully consider arguments with the goal of having accurate beliefs, this is evidence that it is not useful to try to think carefully about this area. This follows from the idea mentioned above that people tend to try to have accurate views when it is in their present interests to have accurate views. So if this is the main way the framework breaks down, then the framework is mostly breaking down in cases where good epistemology is relatively unimportant.
Conclusion
I’ve outlined a framework for taking account of the distribution of opinions and epistemic standards in the world and discussed some of its strengths and weaknesses. I think the largest strengths of the framework are that it can help you avoid falling prey to idiosyncratic personal biases, and that using it derives benefits from the “wisdom of crowds” effects. The framework is less helpful in:
Some questions for people who want to further develop the framework include: