
Common sense as a prior

33 Nick_Beckstead 11 August 2013 06:18PM

Introduction

[I have edited the introduction of this post for increased clarity.]

This post is my attempt to answer the question, "How should we take account of the distribution of opinion and epistemic standards in the world?" By “epistemic standards,” I roughly mean a person’s way of processing evidence to arrive at conclusions. If people were good Bayesians, their epistemic standards would correspond to their fundamental prior probability distributions. At a first pass, my answer to this question is:

Main Recommendation: Believe what you think a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to your evidence.

The rest of the post can be seen as an attempt to spell this out more precisely and to explain, in practical terms, how to follow the recommendation. Note that there are therefore two broad ways to disagree with the post: you might disagree with the main recommendation, or with the guidelines for following the main recommendation.

The rough idea is to try to find a group of people who are trustworthy by clear and generally accepted indicators, and then use an impartial combination of the reasoning standards that they use when they are trying to have accurate views. I call this impartial combination elite common sense. I recommend using elite common sense as a prior in two senses. First, if you have no unusual information about a question, you should start with the same opinions as the broad coalition of trustworthy people would have. But their opinions are not the last word, and as you get more evidence, it can be reasonable to disagree. Second, a complete prior probability distribution specifies, for any possible set of evidence, what posterior probabilities you should have. In this deeper sense, I am not just recommending that you start with the same opinions as elite common sense, but also that you update in ways that elite common sense would agree are the right ways to update. In practice, we can’t specify the prior probability distribution of elite common sense or calculate the updates, so the framework is most useful from a conceptual perspective. It might also be useful to consider the output of this framework as one model in a larger model combination.
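
To make the two senses of "prior" concrete, here is a minimal sketch of a single Bayesian update. The numbers, and the idea of summarizing "elite common sense" as a single credence, are purely illustrative assumptions of mine, not anything specified in the post.

```python
# Illustrative sketch only: a toy Bayesian update showing the sense in which a
# prior is not just a starting opinion but also a rule for revising opinions.
# The specific numbers are hypothetical.

def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after evidence with the given likelihood ratio
    P(evidence | claim true) / P(evidence | claim false)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Start where a broad coalition of trustworthy people would start...
prior = 0.10  # hypothetical elite-common-sense credence in some claim
# ...then update the way they would agree is correct, given your unusual evidence.
posterior = update(prior, likelihood_ratio=8.0)
print(f"{posterior:.2f}")  # ~0.47: substantial movement, but far from certainty
```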

I am aware of two relatively close intellectual relatives to my framework: what philosophers call “equal weight” or “conciliatory” views about disagreement and what people on LessWrong may know as “philosophical majoritarianism.” Equal weight views roughly hold that when two people who are expected to be roughly equally competent at answering a certain question have different subjective probability distributions over answers to that question, those people should adopt some impartial combination of their subjective probability distributions. Unlike equal weight views in philosophy, my position is meant as a set of rough practical guidelines rather than a set of exceptionless and fundamental rules. I accordingly focus on practical issues for applying the framework effectively and am open to limiting the framework’s scope of application. Philosophical majoritarianism is the idea that on most issues, the average opinion of humanity as a whole will be a better guide to the truth than one’s own personal judgment. My perspective differs from both equal weight views and philosophical majoritarianism in that it emphasizes an elite subset of the population rather than humanity as a whole, and in that it emphasizes epistemic standards more than individual opinions. My perspective differs from what you might call "elite majoritarianism" in that, according to me, you can disagree with what very trustworthy people think on average if you think that those people would accept your views if they had access to your evidence and were trying to have accurate opinions.
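
As a small illustration of what an "impartial combination" of opinions could look like, here is a linear opinion pool (a weighted average of subjective probabilities). The equal weights and example credences are assumptions for the sake of illustration; neither the post nor equal weight views commit to this particular pooling rule.

```python
# A minimal sketch of one possible "impartial combination": a linear opinion
# pool over several people's subjective probabilities for the same claim.
# Equal weights are an assumption, not a recommendation from the post.

def pool(credences, weights=None):
    """Combine subjective probabilities with a weighted average."""
    if weights is None:
        weights = [1.0 / len(credences)] * len(credences)
    return sum(w * p for w, p in zip(weights, credences))

print(round(pool([0.9, 0.6, 0.7]), 3))  # 0.733: the pooled credence
```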

I am very grateful to Holden Karnofsky and Jonah Sinick for thought-provoking conversations on this topic which led to this post. Many of the ideas ultimately derive from Holden’s thinking, but I've developed them, made them somewhat more precise and systematic, discussed additional considerations for and against adopting them, and put everything in my own words. I am also grateful to Luke Muehlhauser and Pablo Stafforini for feedback on this post.

In the rest of this post I will:

  1. Outline the framework and offer guidelines for applying it effectively. I explain why I favor relying on the epistemic standards of people who are trustworthy by clear indicators that many people would accept, why I favor paying more attention to what people think than why they say they think it (on the margin), and why I favor stress-testing critical assumptions by attempting to convince a broad coalition of trustworthy people to accept them.
  2. Offer some considerations in favor of using the framework.
  3. Respond to the objection that common sense is often wrong, the objection that the most successful people are very unconventional, and objections of the form “elite common sense is wrong about X and can’t be talked out of it.”
  4. Discuss some limitations of the framework and some areas where it might be further developed. I suspect it is weakest in cases where there is a large upside to disregarding elite common sense, there is little downside, and you’ll find out whether your bet against conventional wisdom was right within a tolerable time limit; and in cases where people are unwilling to carefully consider arguments with the goal of having accurate beliefs.

continue reading »

Better Disagreement

70 lukeprog 24 October 2011 09:13PM

Honest disagreement is often a good sign of progress.

- Gandhi

 

Now that most communication is remote rather than face-to-face, people are comfortable disagreeing more often. How, then, can we disagree well? If the goal is intellectual progress, those who disagree should aim not for name-calling but for honest counterargument.

To be more specific, we might use a disagreement hierarchy. Below is the hierarchy proposed by Paul Graham (with DH7 added by Black Belt Bayesian).1

 

DH0: Name-Calling. The lowest form of disagreement, this ranges from "u r fag!!!" to "He’s just a troll" to "The author is a self-important dilettante."

DH1: Ad Hominem. An ad hominem ('against the man') argument won’t refute the original claim, but it might at least be relevant. If a senator says we should raise the salary of senators, you might reply: "Of course he’d say that; he’s a senator." That might be relevant, but it doesn’t refute the original claim: "If there’s something wrong with the senator’s argument, you should say what it is; and if there isn’t, what difference does it make that he’s a senator?"

DH2: Responding to Tone. At this level we actually respond to the writing rather than the writer, but we're responding to tone rather than substance. For example: "It’s terrible how flippantly the author dismisses theology."

DH3: Contradiction. Graham writes: "In this stage we finally get responses to what was said, rather than how or by whom. The lowest form of response to an argument is simply to state the opposing case, with little or no supporting evidence." For example: "It’s terrible how flippantly the author dismisses theology. Theology is a legitimate inquiry into truth."

DH4: Counterargument. Finally, a form of disagreement that might persuade! Counterargument is "contradiction plus reasoning and/or evidence." Still, counterargument is often directed at a minor point, or turns out to be an example of two people talking past each other, as in the parable about a tree falling in the forest.

DH5: Refutation. In refutation, you quote (or paraphrase) a precise claim or argument by the author and explain why the claim is false, or why the argument doesn’t work. With refutation, you're sure to engage exactly what the author said, and offer a direct counterargument with evidence and reason.

DH6: Refuting the Central Point. Graham writes: "The force of a refutation depends on what you refute. The most powerful form of disagreement is to refute someone’s central point." A refutation of the central point may look like this: "The author’s central point appears to be X. For example, he writes 'blah blah blah.' He also writes 'blah blah.' But this is wrong, because (1) argument one, (2) argument two, and (3) argument three."

DH7: Improve the Argument, then Refute Its Central Point. Black Belt Bayesian writes: "If you’re interested in being on the right side of disputes, you will refute your opponents' arguments. But if you're interested in producing truth, you will fix your opponents' arguments for them. To win, you must fight not only the creature you encounter; you [also] must fight the most horrible thing that can be constructed from its corpse."2 Also see: The Least Convenient Possible World.

 

Having names for biases and fallacies can help us notice and correct them, and having labels for different kinds of disagreement can help us zoom in on the parts of a disagreement that matter.

continue reading »

Metacontrarian Metaethics

2 Will_Newsome 20 May 2011 05:36AM

Designed to gauge responses to some parts of the planned “Noticing confusion about meta-ethics” sequence, which should intertwine with or be absorbed by Lukeprog’s meta-ethics sequence at some point.

Disclaimer: I am going to leave out many relevant details. If you want, you can bring them up in the comments, but in general meta-ethics is still very confusing and thus we could list relevant details all day and still be confused. There are a lot of subtle themes and distinctions that have thus far been completely ignored by everyone, as far as I can tell.

Problem 1: Torture versus specks

Imagine you’re at a Less Wrong meetup when out of nowhere Eliezer Yudkowsky proposes his torture versus dust specks problem. Years of bullet-biting make this a trivial dilemma for any good philosopher, but suddenly you have a seizure during which you vividly recall all of those history lessons where you learned about the horrible things people do when they feel justified in being blatantly evil because of some abstract moral theory that is at best an approximation of sane morality and at worst an obviously anti-epistemic spiral of moral rationalization. Temporarily humbled, you decide to think about the problem a little longer:

"Considering I am deciding the fate of 3^^^3+1 people, I should perhaps not immediately assert my speculative and controversial meta-ethics. Instead, perhaps I should use the averaged meta-ethics of the 3^^^3+1 people I am deciding for, since it is probable that they have preferences that implicitly cover edge cases such as this, and disregarding the meta-ethical preferences of 3^^^3+1 people is certainly one of the most blatantly immoral things one can do. After all, even if they never learn anything about this decision taking place, people are allowed to have preferences about it. But... that the majority of people believe something doesn’t make it right, and that the majority of people prefer something doesn’t make it right either. If I expect that these 3^^^3+1 people are mostly wrong about morality and would not reflectively endorse their implicit preferences being used in this decision instead of my explicitly reasoned and reflected upon preferences, then I should just go with mine, even if I am knowingly arrogantly blatantly disregarding the current preferences of 3^^^3 currently-alive-and-and-not-just-hypothetical people in doing so and thus causing negative utility many, many, many times more severe than the 3^^^3 units of negative utility I was trying to avert. I may be willing to accept this sacrifice, but I should at least admit that what I am doing largely ignores their current preferences, and there is some chance it is wrong upon reflection regardless, for though I am wiser than those 3^^^3+1 people, I notice that I too am confused."

You hesitantly give your answer and continue to ponder the analogies to Eliezer’s document “CEV”, and this whole business about “extrapolation”...

(Thinking of people as having coherent non-contradictory preferences is very misleadingly wrong, not taking into account preferences at gradient levels of organization is probably wrong, not thinking of typical human preferences as implicitly preferring to update in various ways is maybe wrong (i.e. failing to see preferences as processes embedded in time is probably wrong), et cetera, but I have to start somewhere and this is already glossing over way too much.)

Bonus problem 1: Taking trolleys seriously

"...Wait, considering how unlikely this scenario is, if I ever actually did end up in it then that would probably mean I was in some perverse simulation set up by empirical meta-ethicists with powerful computers, in which case they might use my decision as part of a propaganda campaign meant to somehow discredit consequentialist reasoning or maybe deontological reasoning, or maybe they'd use it for some other reason entirely, but at any rate that sure complicates the problem...” (HT: Steve Rayhawk)

The Irrationality Game

38 Will_Newsome 03 October 2010 02:43AM

Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
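
The post deliberately leaves "basically agree" to intuition rather than a precise scoring rule, but if it helps, here is one rough way the heuristic could be formalized. The cutoffs are arbitrary assumptions of mine, not part of the game.

```python
# One possible rough formalization of the voting heuristic; the cutoffs are
# arbitrary, and the post explicitly prefers intuition over a precise rule.

def vote(their_credence: float, your_credence: float) -> str:
    """Upvote clear disagreement, downvote clear agreement, else it could go either way."""
    gap = abs(their_credence - your_credence)
    if gap > 0.05:      # assumed cutoff for "basically disagree"
        return "upvote"
    if gap < 0.002:     # assumed cutoff for "basically agree"
        return "downvote"
    return "either way"

print(vote(0.999, 0.90))   # 'upvote': a pretty big difference of opinion
print(vote(0.999, 0.995))  # 'either way', matching the borderline example above
print(vote(0.70, 0.70))    # 'downvote': you basically agree
```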

That's the spirit of the game, but some more qualifications and rules follow.

continue reading »

Abnormal Cryonics

56 Will_Newsome 26 May 2010 07:43AM

Written with much help from Nick Tarleton and Kaj Sotala, in response to various themes here, here, and throughout Less Wrong; but a casual mention here1 inspired me to finally write this post. (Note: The first, second, and third footnotes of this post are abnormally important.)

It seems to have become a trend on Less Wrong for people to include belief in the rationality of signing up for cryonics as an obviously correct position2 to take, much the same as thinking the theories of continental drift or anthropogenic global warming are almost certainly correct. I find this mildly disturbing on two counts. First, it really isn't all that obvious that signing up for cryonics is the best use of one's time and money. And second, regardless of whether cryonics turns out to have been the best choice all along, ostracizing those who do not find signing up for cryonics obvious is not at all helpful for people struggling to become more rational. Below I try to provide some decent arguments against signing up for cryonics — not with the aim of showing that signing up for cryonics is wrong, but simply to show that it is not obviously correct, and why it shouldn't be treated as such. (Please note that I am not arguing against the feasibility of cryopreservation!)

continue reading »

But Somebody Would Have Noticed

36 Alicorn 04 May 2010 06:56PM

When you hear a hypothesis that is completely new to you, and seems important enough that you want to dismiss it with "but somebody would have noticed!", beware this temptation.  If you're hearing it, somebody noticed.

Disclaimer: I do not believe in anything I would expect anyone here to call a "conspiracy theory" or similar.  I am not trying to "soften you up" for a future surprise with this post.

1. Wednesday

Suppose: Wednesday gets to be about eighteen, and goes on a trip to visit her Auntie Alicorn, who has hitherto refrained from bringing up religion around her out of respect for her parents1.  During the visit, Sunday rolls around, and Wednesday observes that Alicorn is (a) wearing pants, not a skirt or a dress - unsuitable church attire! and (b) does not appear to be making any move to go to church at all, while (c) not being sick or otherwise having a very good excuse to skip church.  Wednesday inquires as to why this is so, fearing she'll find that beloved Auntie has been excommunicated or something (gasp!  horror!).

Auntie Alicorn says, "Well, I never told you this because your parents asked me not to when you were a child, but I suppose now it's time you knew.  I'm an atheist, and I don't believe God exists, so I don't generally go to church."

And Wednesday says, "Don't be silly.  If God didn't exist, don't you think somebody would have noticed?"

continue reading »

Navigating disagreement: How to keep your eye on the evidence

37 AnnaSalamon 24 April 2010 10:47PM

Heeding others' impressions often increases accuracy.  But "agreement"  and "majoritarianism" are not magic;  in a given circumstance, agreement is or isn't useful for *intelligible* reasons. 

You and four other contestants are randomly selected for a game show.  The five of you walk into a room.  Each of you is handed a thermometer drawn at random from a box; each of you, also, is tasked with guessing the temperature of a bucket of water.  You’ll each write your guess at the temperature on a card; each person who is holding a card that is within 1° of the correct temperature will win $1000.

The four others walk to the bucket, place their thermometers in the water, and wait while their thermometers equilibrate.  You follow suit.  You can all see all of the thermometers’ read-outs: they’re fairly similar, but a couple are a degree or two off from the rest.  You can also watch, as each of your fellow-contestants stares fixedly at his or her own thermometer and copies its reading (only) onto his or her card.

Should you:

  1. Write down the reading on your own thermometer, because it’s yours;
  2. Write down an average* thermometer reading, because probably the more accurate thermometer-readings will cluster;
  3. Write down an average of the answers on others’ cards, because rationalists should try not to disagree;
  4. Follow the procedure everyone else is following (and so stare only at your own thermometer) because rationalists should try not to disagree about procedures?
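
One way to see why option 2 is attractive: if each thermometer has independent error (an assumed noise model, not something stated in the post), averaging the visible read-outs cancels much of that error. A toy simulation sketch:

```python
# Toy simulation under an assumed noise model: each of five thermometers reads
# the true temperature plus independent Gaussian error with sd = 1 degree.
# Compares "copy your own thermometer" with "average all visible read-outs".
import random

def trial(true_temp=20.0, n=5, sd=1.0, tolerance=1.0):
    readings = [true_temp + random.gauss(0, sd) for _ in range(n)]
    own_ok = abs(readings[0] - true_temp) <= tolerance
    avg_ok = abs(sum(readings) / n - true_temp) <= tolerance
    return own_ok, avg_ok

random.seed(0)
results = [trial() for _ in range(10_000)]
print("own reading within 1 degree:    ", sum(o for o, _ in results) / len(results))
print("average reading within 1 degree:", sum(a for _, a in results) / len(results))
# The average wins far more often (~0.97 vs ~0.68 here) because independent
# errors partly cancel -- an intelligible reason, not agreement magic.
```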
continue reading »

Understanding your understanding

69 SilasBarta 22 March 2010 10:33PM

Related to: Truly Part of You, A Technical Explanation of Technical Explanation

Partly because of LessWrong discussions about what really counts as understanding (some typical examples), I came up with a scheme to classify different levels of understanding so that posters can be more precise about what they mean when they claim to understand -- or fail to understand -- a particular phenomenon or domain.

 

Each level has a description so that you know if you meet it, and tells you what to watch out for when you're at or close to that level.  I have taken the liberty of naming them after the LW articles that describe what such a level is like.

 

Level 0: The "Guessing the Teacher's Password" Stage

 

Summary: You have no understanding, because you don't see how any outcome is more or less likely than any other.

continue reading »

"Life Experience" as a Conversation-Halter

11 Seth_Goldin 18 March 2010 07:39PM

Sometimes in an argument, an older opponent might claim that perhaps as I grow older, my opinions will change, or that I'll come around on the topic.  Implicit in this claim is the assumption that age or quantity of experience is a proxy for legitimate authority.  Such "life experience" is necessary for an informed rational worldview, but in and of itself it is not sufficient.

The claim that more "life experience" will completely reverse an opinion indicates that the person making such a claim believes that opinions from others are based primarily on accumulating anecdotes, perhaps derived from extensive availability bias.  It actually is a pretty decent assumption that other people aren't Bayesian, because for the most part, they aren't.  Many can confirm this, including Haidt, Kahneman, and Tversky.

When an opponent appeals to more "life experience," it's a last resort, and it's a conversation halter.  This tactic is used when an opponent is cornered.  The claim is nearly an outright acknowledgment of moving to exit the realm of rational debate.  Why stick to rational discourse when you can shift to trading anecdotes?  It levels the playing field, because anecdotes, while Bayesian evidence, are easily abused, especially for complex moral, social, and political claims.  As rhetoric, this is frustratingly effective, but it's logically rude.

Although it might be rude and rhetorically weak, it would be authoritatively appropriate for a Bayesian to be condescending to a non-Bayesian in an argument.  Conversely, it can be downright maddening for a non-Bayesian to be condescending to a Bayesian, because the non-Bayesian lacks the epistemological authority to warrant such condescension.  E.T. Jaynes wrote in Probability Theory about the arrogance of the uninformed, "The semiliterate on the next bar stool will tell you with absolute, arrogant assurance just how to solve the world's problems; while the scholar who has spent a lifetime studying their causes is not at all sure how to do this."

Individual vs. Group Epistemic Rationality

23 Wei_Dai 02 March 2010 09:46PM

It's common practice in this community to differentiate forms of rationality along the axes of epistemic vs. instrumental, and individual vs. group, giving rise to four possible combinations. I think our shared goal, as indicated by the motto "rationalists win", is ultimately to improve group instrumental rationality. Generally, improving each of these forms of rationality also tends to improve the others, but sometimes conflicts arise between them. In this post I point out one such conflict between individual epistemic rationality and group epistemic rationality.

We place a lot of emphasis here on calibrating individual levels of confidence (i.e., subjective probabilities), and on the idea that rational individuals will tend to converge toward agreement about the proper level of confidence in any particular idea as they update upon available evidence. But I argue that from a group perspective, it's sometimes better to have a spread of individual levels of confidence about the individually rational level. Perhaps paradoxically, disagreements among individuals can be good for the group.

continue reading »
