
Common sense as a prior

33 Nick_Beckstead 11 August 2013 06:18PM

Introduction

[I have edited the introduction of this post for increased clarity.]

This post is my attempt to answer the question, "How should we take account of the distribution of opinion and epistemic standards in the world?" By “epistemic standards,” I roughly mean a person’s way of processing evidence to arrive at conclusions. If people were good Bayesians, their epistemic standards would correspond to their fundamental prior probability distributions. As a first pass, my answer to this question is:

Main Recommendation: Believe what you think a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to your evidence.

The rest of the post can be seen as an attempt to spell this out more precisely and to explain, in practical terms, how to follow the recommendation. Note that there are therefore two broad ways to disagree with the post: you might disagree with the main recommendation, or with the guidelines for following the main recommendation.

The rough idea is to try to find a group of people who are trustworthy by clear and generally accepted indicators, and then use an impartial combination of the reasoning standards that they use when they are trying to have accurate views. I call this impartial combination elite common sense. I recommend using elite common sense as a prior in two senses. First, if you have no unusual information about a question, you should start with the same opinions as the broad coalition of trustworthy people would have. But their opinions are not the last word, and as you get more evidence, it can be reasonable to disagree. Second, a complete prior probability distribution specifies, for any possible set of evidence, what posterior probabilities you should have. In this deeper sense, I am not just recommending that you start with the same opinions as elite common sense, but also that you update in ways that elite common sense would agree are the right ways to update. In practice, we can’t specify the prior probability distribution of elite common sense or calculate the updates, so the framework is most useful from a conceptual perspective. It might also be useful to consider the output of this framework as one model in a larger model combination.
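To make the two senses of “prior” concrete, here is a minimal sketch in Python. The numbers and function names are purely illustrative and are not from the post: the elite-common-sense view supplies the starting probability for a binary claim, your private evidence updates it through likelihood ratios, and the result can then be averaged with another model's output as one member of a larger model combination.

```python
def update_with_evidence(prior, likelihood_ratios):
    """Update a prior probability for a binary claim, given likelihood
    ratios P(evidence | claim true) / P(evidence | claim false)."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical numbers: elite common sense puts the claim at 10%, and two
# pieces of your private evidence favor the claim 3:1 and 2:1.
elite_prior = 0.10
posterior = update_with_evidence(elite_prior, [3.0, 2.0])
print(posterior)  # ~0.40: enough evidence can license disagreeing with the starting opinion

# Treating the framework's output as one model in a larger combination,
# with an arbitrary 50/50 weighting against some other model's estimate:
other_model_estimate = 0.70
print(0.5 * posterior + 0.5 * other_model_estimate)  # 0.55
```

The point of the toy example is only that the elite-common-sense opinion sets where you start, while the standards it embodies (here, ordinary Bayesian updating) govern how far your evidence can move you.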

I am aware of two relatively close intellectual relatives to my framework: what philosophers call “equal weight” or “conciliatory” views about disagreement and what people on LessWrong may know as “philosophical majoritarianism.” Equal weight views roughly hold that when two people who are expected to be roughly equally competent at answering a certain question have different subjective probability distributions over answers to that question, those people should adopt some impartial combination of their subjective probability distributions. Unlike equal weight views in philosophy, my position is meant as a set of rough practical guidelines rather than a set of exceptionless and fundamental rules. I accordingly focus on practical issues for applying the framework effectively and am open to limiting the framework’s scope of application. Philosophical majoritarianism is the idea that on most issues, the average opinion of humanity as a whole will be a better guide to the truth than one’s own personal judgment. My perspective differs from both equal weight views and philosophical majoritarianism in that it emphasizes an elite subset of the population rather than humanity as a whole, and in that it emphasizes epistemic standards more than individual opinions. My perspective differs from what you might call "elite majoritarianism" in that, according to me, you can disagree with what very trustworthy people think on average if you think that those people would accept your views if they had access to your evidence and were trying to have accurate opinions.
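For concreteness, one simple version of an “impartial combination” of subjective probability distributions is an equal-weight linear pool. The sketch below, with invented numbers, is only meant to illustrate that idea; neither the equal weight literature nor this post commits to this particular pooling rule.

```python
def linear_pool(dists, weights=None):
    """Weighted linear pool of probability distributions, each given as a
    dict over the same set of answers; defaults to equal weights."""
    if weights is None:
        weights = [1.0 / len(dists)] * len(dists)
    return {answer: sum(w * d[answer] for w, d in zip(weights, dists))
            for answer in dists[0]}

# Invented numbers: two roughly equally competent people disagree.
alice = {"A": 0.7, "B": 0.2, "C": 0.1}
bob = {"A": 0.3, "B": 0.5, "C": 0.2}
print(linear_pool([alice, bob]))  # approximately {'A': 0.5, 'B': 0.35, 'C': 0.15}
```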

I am very grateful to Holden Karnofsky and Jonah Sinick for thought-provoking conversations on this topic which led to this post. Many of the ideas ultimately derive from Holden’s thinking, but I've developed them, made them somewhat more precise and systematic, discussed additional considerations for and against adopting them, and put everything in my own words. I am also grateful to Luke Muehlhauser and Pablo Stafforini for feedback on this post.

In the rest of this post I will:

  1. Outline the framework and offer guidelines for applying it effectively. I explain why I favor relying on the epistemic standards of people who are trustworthy by clear indicators that many people would accept, why I favor paying more attention to what people think than why they say they think it (on the margin), and why I favor stress-testing critical assumptions by attempting to convince a broad coalition of trustworthy people to accept them.
  2. Offer some considerations in favor of using the framework.
  3. Respond to the objection that common sense is often wrong, the objection that the most successful people are very unconventional, and objections of the form “elite common sense is wrong about X and can’t be talked out of it.”
  4. Discuss some limitations of the framework and some areas where it might be further developed. I suspect it is weakest in cases where there is a large upside to disregarding elite common sense, there is little downside, and you’ll find out within a tolerable time limit whether your bet against conventional wisdom was right, and in cases where people are unwilling to carefully consider arguments with the goal of having accurate beliefs.


How to use "philosophical majoritarianism"

8 jimmy 05 May 2009 06:49AM

The majority of people would hold more accurate beliefs if they simply believed the majority. To state this in a way that doesn't risk information cascades, we're talking about averaging impressions and coming up with the same belief.

To the degree that you come up with different averages of the impressions, you acknowledge that your belief was just your impression of the average, and you average those meta-impressions and get closer to belief convergence. You can repeat this until you get bored, but if you're doing it right, your beliefs should get closer and closer to agreement, and you shouldn't be able to predict who is going to fall on which side.
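As a toy sketch of the iterated averaging described above (Python, with invented starting impressions): each round, everyone replaces their belief with a step toward the group average they currently perceive, and the beliefs converge toward a common value.

```python
def iterate_toward_agreement(impressions, rounds=10):
    """Each round, everyone moves their belief halfway toward the current
    group average; the average is preserved and the spread shrinks."""
    beliefs = list(impressions)
    for _ in range(rounds):
        group_average = sum(beliefs) / len(beliefs)
        beliefs = [(b + group_average) / 2.0 for b in beliefs]
    return beliefs

# Invented impressions; after ten rounds everyone sits near 0.533.
print(iterate_toward_agreement([0.2, 0.5, 0.9]))
```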

Of course, most of us are atypical cases, and as good rationalists, we need to update on this information. Even if our impressions were (on average) no better than the average, there are certain cases where we know that the majority is wrong. If we're going to selectively apply majoritarianism, we need to figure out the rules for when to apply it, to whom, and how the weighting works.

This much I think has been said again and again. I'm gonna attempt to describe how.


The Error of Crowds

15 Eliezer_Yudkowsky 01 April 2007 09:50PM

I've always been annoyed at the notion that the bias-variance decomposition tells us something about modesty or Philosophical Majoritarianism.  For example, Scott Page rearranges the equation to get what he calls the Diversity Prediction Theorem:

Collective Error = Average Individual Error - Prediction Diversity
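For readers who want to see the identity in action, here is a small numeric check in Python with made-up predictions; both quantities come out to 1.0, as the decomposition requires.

```python
# Made-up predictions of a quantity whose true value is 13.
predictions = [10.0, 12.0, 14.0, 20.0]
truth = 13.0

mean_pred = sum(predictions) / len(predictions)  # 14.0
collective_error = (mean_pred - truth) ** 2      # squared error of the average prediction
avg_individual_error = sum((p - truth) ** 2 for p in predictions) / len(predictions)
prediction_diversity = sum((p - mean_pred) ** 2 for p in predictions) / len(predictions)

print(collective_error)                             # 1.0
print(avg_individual_error - prediction_diversity)  # 15.0 - 14.0 = 1.0
```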

I think I've finally come up with a nice, mathematical way to drive a stake through the heart of that concept and bury it beneath a crossroads at midnight, though I fully expect that it shall someday rise again and shamble forth to eat the brains of the living.
