
If a majority of experts agree on an issue, a rationalist should be prepared to defer to their judgment. It is reasonable to expect that the experts have superior knowledge and have considered many more arguments than a layperson could. However, if experts are split into camps that reject each other's arguments, then it is rational to take their expert rejections into account. This is the case even among experts who support the same conclusion.

If 2/3 of experts support proposition G (1/3 because of reason A while rejecting B, and 1/3 because of reason B while rejecting A) and the remaining 1/3 reject both A and B, then a majority rejects A and a majority rejects B. G should not be treated as a reasonable majority view.

This should be clear if A is the Koran and B is the Bible.
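
To make the arithmetic concrete, here is a minimal tally of the camps in the example above (a sketch only; the camp sizes and propositions are the ones stipulated):

```python
from fractions import Fraction

# Each camp: (share of experts, believes A, believes B, believes G)
camps = [
    (Fraction(1, 3), True,  False, True),   # G via A, rejects B
    (Fraction(1, 3), False, True,  True),   # G via B, rejects A
    (Fraction(1, 3), False, False, False),  # rejects A, B, and G
]

def share(pred):
    """Total share of experts whose (A, B, G) beliefs satisfy pred."""
    return sum(w for w, a, b, g in camps if pred(a, b, g))

print("support G:", share(lambda a, b, g: g))      # 2/3
print("reject A: ", share(lambda a, b, g: not a))  # 2/3
print("reject B: ", share(lambda a, b, g: not b))  # 2/3
# A majority backs G, yet every premise offered for G is rejected by a majority.
```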

Positions that fundamentally disagree don't combine on the dependent aspects on which they agree. On the contrary, if people offer lots of different contradictory reasons for a conclusion (even if each individual has consistent beliefs), it is a sign that they are rationalizing their position.

An exception to this is if experts agree on something for the same proximal reasons. If pharmacists were split into camps that disagreed on what atoms fundamentally were, but agreed on how chemistry and biology worked, then we could add those camps together as authorities on what the effect of a drug would be.

If we're going to add up expert views, we need to add up what experts consider important about a question and agree on, not individual features of their conclusions.

Some differing reasons can be additive: Evolution has support from many fields. We can add the analysis of all these experts together because the paleontologists do not generally dispute the arguments of geneticists.

Different people might justify vegetarianism by citing the suffering of animals, health benefits, environmental impacts, or purely spiritual concerns. As long as there isn't a camp of vegetarians that claim it does not have e.g. redeeming health benefits, we can more or less add all those opinions together.

We shouldn't add up two experts if they would consider each other's arguments irrational. That's ignoring their expertise.

Original Thread


I remember (somewhat; details may be a bit foggy) that in Richard Feynman's biography of sorts, he tells a story about a time when he served on a committee to recommend new science books for (I think) several grade levels, like 6-12. He first wryly notes that he was the only member of the committee to read all of the candidate texts from several publishers in each grade level in their entirety. He ended up recommending science books by a publisher that was not in favor with the rest of the committee, and their principal reason for liking another publisher's books was that two hundred engineers had participated in a review of some of the same books and their votes pointed to this other recommendation.

So the committee asks him to justify his different recommendation with an appeal to authority: "surely you are not smarter than 200 engineers!" He says something along the lines of: no, I don't claim to be smarter than the sum of 200 engineers, but that's not what you have here. I am smarter than the average of 200 engineers.

Now clearly, we can't all be experts in everything and have to yield to expert consensus as a matter of practicality, but it should never be assumed to settle anything, apart of course from cases where it's a majority of people testing hypotheses and models and finding them in agreement with observations. That has real lasting value: even if the model is later found to be flawed, it's usually still "good enough" for most observations (e.g. Newtonian physics).

I suggest linking to the previous discussion. In particular, Toby Ord had a counterargument, that I don't think you adequately dealt with. You wrote:

If the red supporter contends that all green and blue objects were lost in the color wars, while the blue supporter contends that all objects are fundamentally blue and besides the color wars never happened, then their opinions roughly cancel each other out. (Barring other reasons for me to view one as more rational than the other.)

I don't see why they cancel each other out. Why shouldn't you assign 1/3 probability to "all green and blue objects were lost in the color wars" and 1/3 probability to "all objects are fundamentally blue and besides the color wars never happened", in which case there's 2/3 probability that the object is not green?

So edited.

Cancel was too strong a word.

It depends on how Green justifies its position, and how that is taken by the other experts.

Suppose also that the Green expert disbelieves both the color wars and fundamental Blueness, and supports green for scientific reasons whose facts are not strongly disputed by the other two sides. The Blue supporter acknowledges it would likely be green if not all things were blue, and the Red supporter the same if not for the color wars.

The green expert has support from 1 or 2 other experts in every reason they hold. The red and blue experts have support from 0 or 1 other experts, and 2 in the case of the evidence for green.

The green expert is more authoritative, because more experts think he's not crazy on more positions within the field. (That no one disputes Green's basic logic is a bonus: even if the Fundamentalist Blues disavowed the "so-called science", Green would still have one corroborating opinion for all of their positions.)

It may be the case in this example that not-green would still be the weighted majority position, but at less than 2/3. I'm not sure how to do the math on this.

Here is how I see the math. Let:

  • R = object is red
  • B = object is blue
  • G = object is green
  • W = color war
  • F = fundamentally blue
  • S = scientifically green

And let P_G, P_R, P_B be the probability functions of the three experts:

  • P_G(G) = P_G(S) = 1
  • P_R(R) = P_R(W) = 1
  • P_B(B) = P_B(F) = 1

If we take P to be the average of the three probability functions, then P(G)=P(R)=P(B)=P(S)=P(W)=P(F)=1/3.

The Blue supporter acknowledges it would likely be green if not all things were blue, and the Red supporter the same if not for the color wars.

In that case, it would be something like this:

  • P_R(R) = P_R(W) = .99, P_R(G) = P_R(S) = .01
  • P_B(B) = P_B(F) = .99, P_B(G) = P_B(S) = .01

But if you take the average, P(G) still comes out pretty close to 1/3. In order to conclude that P(G)>1/2, I think we need to argue that taking the average of the 3 probability functions isn't the right thing to do. I'm still trying to figure that one out...
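
For what it's worth, the average is quick to check numerically (a minimal sketch using the illustrative numbers above):

```python
# Each expert's probability for the color propositions (G, R, B) and the
# supporting claims (S = scientifically green, W = color war, F = fundamentally blue).
experts = {
    "Green": {"G": 1.00, "R": 0.00, "B": 0.00, "S": 1.00, "W": 0.00, "F": 0.00},
    "Red":   {"G": 0.01, "R": 0.99, "B": 0.00, "S": 0.01, "W": 0.99, "F": 0.00},
    "Blue":  {"G": 0.01, "R": 0.00, "B": 0.99, "S": 0.01, "W": 0.00, "F": 0.99},
}

# The pooled estimate: a plain average of the three probability functions.
average = {p: sum(e[p] for e in experts.values()) / len(experts) for p in "GRBSWF"}
print(average)  # P(G) comes out to 0.34, still close to 1/3
```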

It may be the case in this example that not-green would still be the weighted majority position, but at less than 2/3. I'm not sure how to do the math on this.

I would just like to say: maths is hard work. Not because it is particularly difficult in this case, but because I have filled a page's worth of typing with underscores and parentheses and am not even finished. So much boring detail!

We shouldn't add up two experts if they would consider each other's arguments irrational. That's ignoring their expertise.

I think the last sentence should be, "That's ignoring a significant portion of their expertise."

If we're going to add up expert views, we need to add up what experts consider important about a question and agree on, not individual features of their conclusions.

I consider this to be the key point and feel it could be summarized by saying, "Show your work." The inherent problem with this is that the reason I defer to experts is that I don't want to check their work.

This also doesn't cover the problem that some experts may carry more weight than others.

One way to solve this is to have a third party track expert agreement to see what conclusions are best supported.

You might do this iteratively to see which experts track the best conclusions.

So yeah, the more I think about this, the more it seems like PageRank.
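
For instance, a PageRank-style scheme might look like this (a hypothetical sketch, not a worked-out proposal; the endorsement numbers are made up): each expert's weight is the weight other experts place on their reasoning, iterated to a fixed point.

```python
import numpy as np

# endorse[i][j]: how much expert i endorses expert j's reasoning (rows sum to 1).
# Made-up numbers: experts 0 and 1 dismiss each other entirely, while both
# grant some credibility to expert 2.
endorse = np.array([
    [0.7, 0.0, 0.3],
    [0.0, 0.7, 0.3],
    [0.2, 0.2, 0.6],
])

weights = np.full(3, 1 / 3)   # start from equal authority
for _ in range(100):          # authority flows along endorsements, PageRank-style
    weights = weights @ endorse

print(weights.round(3))       # ~[0.286, 0.286, 0.429]: expert 2 ends up weighted highest
```

Conclusions could then be scored by the total weight of the experts endorsing them, and the whole thing iterated as suggested above.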

Yes, if people are giving opposing arguments for the same position, it is a sign that someone is rationalizing a position.

However, the reason someone rationalizes a position is because his intuition favors that position. If two thirds of the experts favor a position based on their intuitions, but give opposing arguments for it, there is still no reason for me to think that my intuition in favor of the minority position is better than their intuition.

And since people's intuitions are affected, presumably on average in a good way, by the study of a topic, it is reasonable to give some weight to such a majority position, certainly more weight than to my own intuitions on the matter.

You think that we can trust people to form judgments which are good, that they nonetheless can't explain properly? I agree, a little, but I wouldn't want to build anything on evidence of such low quality. Maybe I can accept it for things I don't care about, where understanding the primary evidence is too much effort, but I feel like I'm almost better off ignoring it entirely.

Wasn't the book Blink all about this phenomenon?

Apparently - http://en.wikipedia.org/wiki/Blink_(book)* - although I hadn't heard of it until now. I'm not sure it's an idea that justifies an entire book!

  • anyone know how to quote this url properly using the [ ] ( ) markup?

anyone know how to quote this url properly using the [ ] ( ) markup

\ before )

So: http://en.wikipedia.org/wiki/Blink_(book)

You can escape special characters with a backslash, use \) for a literal )
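
For example (assuming the standard markdown link syntax this thread is using): `[Blink](http://en.wikipedia.org/wiki/Blink_(book\))` renders as a link whose URL ends in the literal `)`.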

I'm not saying that it would be impossible for us to form a better judgment in principle. I am saying that if almost all the experts can't explain the matter properly, there is no reason to think I can do so myself; I am better off trusting their low-quality judgments.

If it's a widely studied matter (just not by me, yet) and truly ALL experts don't seem to be able to transmit their reasons for believing things to me, then I'm going to be very skeptical that there's anything in the field of study to learn at all, unless there's a good argument that I should expect the field to be so enigmatic. I think I'm better off believing nothing about the topic if it's so immune to communication.

It's only when experts agree, and when it seems like I can in principle follow their work and understand the primary evidence, e.g. any advanced science or engineering discipline, that I can trust experts even though they've failed to present an easily digestible course for me to follow.

In the situation under discussion, most experts agree on a conclusion, and disagree about the argument.

It is possible (but not necessary) that this means that nothing can be known about the subject. However, if this is the case, it is even less plausible that I personally can know the opposite, namely by agreeing with the minority.

I agree. It seems to me the world is full of charlatans and fools who pretend to be experts. And many of them will go to great lengths to signal their ersatz-experthood.

I think that there are only two checks on this problem: First, if the expert can justify his beliefs to intelligent but skeptical laypeople. Second, if the expert can consistently make accurate and interesting predictions. Ideally, the expert should be able to do both.

If not, there is a serious risk that the expert and his comrades will drift into charlatanhood.

Where you say "pretend to be experts", do you include those credentialed as experts by accredited institutions? If so, this is the "theologian problem", and your analysis, I think, needs to cut a little deeper.

Absolutely. For example, as an attorney, I have met attorneys who are pretty much incompetent but who put on airs, throw around fancy lingo, and succeed in convincing a lot of people they know what they are doing.

I met one guy who is listed in "Best Lawyers in America" for his so-called area of expertise; teaches a class in that area at a top law school; and pulls down a big fat salary. And yet he is an idiot who just acts confident.

Perhaps this is less of a problem in the law since there are judges and juries to provide some kind of reality check now and then.

I met one guy who is listed in "Best Lawyers in America" for his so-called area of expertise; teaches a class in that area at a top law school; and pulls down a big fat salary. And yet he is an idiot who just acts confident.

As a current law student, I'm curious about this. Who is he and what does he teach? How do you tell whether someone is actually qualified? How did he get to this position without having any expertise, given that universities, firms, and clients are usually pretty picky and careful about who they hire? (Feel free to private message me if you need to.)

Is this just political disagreement? People often call intelligent, controversial politicians and judges (Bush, Obama, Scalia, Thomas) "idiots" when they usually mean they disagree with the person in question.

Obviously I'm not going to name names. But actually, that's part of the problem: Once somebody has a reputation or credential of being an expert, people are hesitant to publicly question that person's qualifications for fear of damaging themselves. So it can lead to a kind of groupthink.

Anyway, I worked with this guy on a project or two and it quickly became clear he was pretty much clueless. Or at least wildly less qualified than one would think given his credentials. I had a good time laughing about it (in private) with other junior attorneys.

And no, it's not a political disagreement. I have no idea what this guy's politics are like (except of course for guesses based on his social class and milieu). But I do agree with you that a lot of people are biased in this way. I myself am regularly accused of being stupid or of being a paid shill during internet debates on politically charged issues, and I agree 100% with whoever said that politics is the mind-killer.

Anyway, since you are a law student there is a decent chance you will meet a professor who doesn't live up to the hype, so to speak. Also, law school provides another example of the expert problem.

Law professors are supposed to be experts in the law. And yet if 90% of law professors said "the law should be X," should one accept it? I myself am skeptical. Among other things, law professors need to be socially accepted by other law professors. Further, law professors got where they are by being the sort of person who is socially accepted by other law professors. It seems to me these factors probably inform their thinking, especially on politically charged issues.

So does that mean that the argument that a majority of people believe in a deity is a good one (if inconclusive)? And that the argument that they all believe in different, contradictory deities is a bad argument?

My comment, and most of your post, was about the majority of experts. I would say that if the majority of "experts on the cause of the world" believed in a deity, that would be a good argument. But in fact it is not very clear who the experts are in this case. So the argument is merely a general majoritarian argument and not an argument from the experts. Still, as I've said in the past, I think such a general majoritarian argument is a good argument, just not a very strong one.

The argument that they believe in contradictory deities is a good argument in the sense that it greatly weakens the majoritarian argument: if the majority all believed in the same deity, and for the same reasons, their position would be much stronger. The argument about contradictory deities, however, is not good if it is intended as a positive argument for atheism (except in the general sense that weakening the arguments for a deity automatically increases the probability of atheism).


In this case, intuition should be recognized as a valid source of evidence, and experts should be able to agree on that, studying these intuitions directly instead of poisoning the signal and diverting attention with rationalization.


This is assuming intuition actually tells you what you want to know, in which case you probably need subjects and not experts.

Different people might justify vegetarianism by citing the suffering of animals, health benefits, environmental impacts, or purely spiritual concerns. As long as there isn't a camp of vegetarians that claim it does not have e.g. redeeming health benefits, we can more or less add all those opinions together.

I think that this is actually very close to the bible/koran example. If people reach similar conclusions from different reasons, they're probably just rationalizing. It would be very surprising if truly independent aspects of vegetarianism all happen to point the same way.

I guess this means that you and I reach the same conclusion about the bible/koran example, but for different reasons ;-)

ETA: I am more negative about vegetarian evidence than James, but I am also more positive about the theists (cf Unknowns, Michael Vassar). In both cases, I say that they are mistaken about why they hold the beliefs they do, but that doesn't necessarily mean the reason is bad. So maybe my position does not apply to my agreement with James.

But believers in the Bible really do reject the Koran, and believers in the Koran reject (the extant versions of) the Bible (which they claim are corrupted, as can be "proved" by noticing that they disagree with the Koran). Whereas in the vegetarianism examples, there is no mutual rejection, just people who emphasise a particular point while also accepting others. Many of the people who go veggie to prevent animal suffering would also agree that it causes environmental damage. It's just that their own emotional hierarchy places animal suffering above environmental damage, not a real disagreement about the state of the world (same map of the territory, different preferred locations).

It would be much more credible if vegetarians said, for instance, that the suffering of animals, health benefits, environmental impacts, and purely spiritual concerns all involved considerations that pointed both towards and away from vegetarianism but that the balance of the arguments points towards it.

In practice, as far as I can tell, environmental concerns pretty much all point towards vegetarianism with some shellfish and other abundant sea life.

If 2/3 of experts support proposition G (1/3 because of reason A while rejecting B, and 1/3 because of reason B while rejecting A) and the remaining 1/3 reject both A and B, then a majority rejects A and a majority rejects B. G should not be treated as a reasonable majority view.

Huh? Why not?

I have heard there are some Bayesians here. I think this is rightly treated by saying: A => G, B => G, P(A) = 1/3, P(B) = 1/3, P(A & B) = 0; therefore P(G) is at least 2/3.

(I don't actually mean that P(A&B) = 0; I mean they aren't independent, and expert belief in A is anti-correlated with expert belief in B. The semantics are a bit fiddly, but I think at the end of the day you should use the above calculations.)

This is how Bayes networks work. You sum over the different causal chains and see the numbers that pop out at the end. You don't try to establish what's actually true for the nodes inside the network.
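
Concretely, the sum-over-cases calculation described here looks something like this (a minimal sketch; the conditional probabilities are illustrative assumptions, and the replies below dispute the choice of P(A) and P(B)):

```python
# Priors over the three mutually exclusive cases (P(A & B) is taken as 0).
p_A, p_B = 1/3, 1/3
p_neither = 1 - p_A - p_B

p_G_given_A = 1.0        # A => G
p_G_given_B = 1.0        # B => G
p_G_given_neither = 0.0  # worst case: no other route to G

# Sum over the cases, as in a Bayes network: marginalize out A and B.
p_G = p_A * p_G_given_A + p_B * p_G_given_B + p_neither * p_G_given_neither
print(round(p_G, 3))  # 0.667, hence "P(G) is at least 2/3" under these assumptions
```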

If your reasoning were correct, then it would still be correct even if the choices were G, H, I, J, K, L, M, and N, with equal priors. Would you still say the expert opinions on A and B cancel each other out, making G no more likely than any of the other 7 choices?

Correct me if I'm wrong here, but you don't seem to have any good reason for assuming P(A)=1/3.

It only works if you assume that the probability of a view being correct is equal to the proportion of experts that support it (perhaps you believe that one expert is omniscient and the others are just making uneducated guesses). If you're going to assume that, you might as well shorten the argument by just pointing out that P(G)=2/3 since 2/3 of the experts agree with G.

If we instead start from a prior more like that of the OP, one which says:

  • P(argument X is correct | the majority of experts agree with X) = 0.9
  • P(argument X is incorrect | the majority of experts disagree with X) = 0.9

This makes our final estimate of P(G) roughly equal to our prior estimate of P(G | ~A & ~B), which is the OP's point.

Or, to put it another way, one which should work with most reasonable priors:

Define C to be the background information that 2/3 of experts support proposition G, 1/3 because of reason A while rejecting B and 1/3 because of reason B while rejecting A, and that the remaining 1/3 reject both A and B.

Since belief in A and belief in B anti-correlate strongly among experts, it is reasonable to assume that P(A & B) = 0 (approximately). I will assume this without mentioning it again from now on.

P(G) = P(A)*P(G | A & ~B) + P(B)*P(G | ~A & B) + P(~A & ~B)*P(G | ~A & ~B), since our estimate now must equal our expectation of what our future estimate would be if we discovered for certain whether A and B were correct.

If A and B are arguments for G, that must mean that P(G | A) > P(G | ~A) and P(G | B) > P(G | ~B). Using fairly simple maths, we can prove from this, and from the fact that P(A & B) = 0, that P(G | ~A & ~B) < P(G | A & ~B) and P(G | ~A & ~B) < P(G | ~A & B). This means that as P(~A & ~B) increases, P(G) must decrease.

Assuming we place some trust in experts, we must accept that if the majority of experts disagree with an argument then this is evidence against that argument.

If we find that the majority of experts disagree with A this must reduce P(A), and it must increase the weighted average of P(B) and P(~A & ~B). The evidence doesn't distinguish between these other two possibilities, the majority of experts would probably disagree with A whichever of them was true, so both of them should increase.

If we find that the majority of experts disagree with B then by the same argument this must reduce P(B) and increase P(A) and P(~A & ~B).

If C is true then both of the above things happen, and P(~A & ~B) increases twice, so P(~A & ~B | C) > P(~A & ~B).

This means, for reasons established above, that P(G | C) < P(G). The OP is right: this disposition of expert opinion is evidence against G.
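
A rough numerical check of this argument, under one toy model (the priors, conditionals, and 0.9/0.1 likelihoods below are all illustrative assumptions, and the model only scores the rejection evidence in C, not the expert support for G itself):

```python
# Mutually exclusive hypotheses: h1 = A & ~B, h2 = ~A & B, h3 = ~A & ~B.
priors    = {"h1": 1/3, "h2": 1/3, "h3": 1/3}
p_G_given = {"h1": 0.9, "h2": 0.9, "h3": 0.2}

prior_G = sum(priors[h] * p_G_given[h] for h in priors)

# C says a majority of experts reject A, and a majority reject B. Model each
# rejection as evidence that is 9 times likelier when the claim is false.
def likelihood(h):
    maj_rejects_A = 0.1 if h == "h1" else 0.9
    maj_rejects_B = 0.1 if h == "h2" else 0.9
    return maj_rejects_A * maj_rejects_B

z = sum(priors[h] * likelihood(h) for h in priors)
posteriors = {h: priors[h] * likelihood(h) / z for h in priors}

post_G = sum(posteriors[h] * p_G_given[h] for h in priors)
print(round(prior_G, 3), "->", round(post_G, 3))  # 0.667 -> 0.327: P(G | C) < P(G)
```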


I'm not sure if this changes the math, but A-->G and B-->G aren't given. P(A) conjoined with P(A-->G) should = 1/3. Same for P(B) and P(B-->G). No?

Let's say you are an intelligence officer collecting information about a land that has three tribes, A, B and C. A is the dominant tribe.

Your agents in B are telling you: give us more support, we are more oppressed than C, and A is massing for a final solution against us. Your agents in C are telling you: give us more support, we are more oppressed than B, and A is massing for a final solution against us. Your agents in A are telling you: we are oppressing nobody, but troops are being built up because of terrorist activities committed by B and C.

So B, C and A have contradictory accounts, but their conclusions add up to the position that A is massing troops. I don't see a reason to reject a common conclusion here coming from different premises, premises which would be contradicted if these agents ever faced each other.

Or did I misinterpret your argument? Kindly enlighten.

All three sources agree about the fact that troops are being built up in A, and their disagreements are about the interpretation of that fact rather than the evidence for that fact. So it's not a counterexample.

OK. So your example holds even when agents in B had said they are massing 5000 extra troops at our border, agents in C had said they are massing 5000 troops at our border (assume these borders are separate), and agents in A (who work at an armament factory) had said production has been ramped up for around 4000 troops.

What can we agree on? There is a production ramp-up going on for at least 4000 troops. Their position remains uncertain.

Now, take this example to the much more complex example of theism.

95% of humanity believes in a supreme force, and their disagreements are about the interpretation of that force. Written manuscripts are handed down, and most people treat these as evidence. For the better part of history, religious practices have been carried on. These are the B's and C's of our example: directly contradictory pieces of evidence.

But then come a bunch of people (the theosophists, New Agers, "law of attraction" followers) who say: look, there is no need to totally and completely believe every word of the manuscript. Follow some practices sincerely until you feel the presence yourself, something a whole lot of dedicated believers do every day. Unfortunately there is no way to mechanise this yet, since it is something like human intelligence: only one working demo, with no other examples, not yet replicated outside the cranium.

The point they are making is that there is something real about spirituality. This is the evidence/interpretation that A brings to our table. It is not quantitatively the same as B or C, but it supports the argument that there is something real going on there.

Maybe we can say that there is something real to spiritual practices/ritual that would be true even if there were no 'G'.

So, if there is something real, what spiritual practices are you adopting as your own?

who say: look, there is no need to totally and completely believe every word of the manuscript. Follow some practices sincerely until you feel the presence yourself

And are promptly condemned as heretics by every other faction. If there were a group that made claims that were supported by most other groups, then we might take them as real experts. But if their rationality is in dispute even among people who share their partial conclusion, then they are just another faction.

Is "a supreme force" the kind of thing you can add up like troop movements? A main point of the original argument is that the supreme forces claimed are mutually exclusive, whereas troop counts are not.

If the counter-claim is to be as vague as "there is something real about spirituality," we can all agree on some level. Some people will go with the level of common problems in human psychology that lead to the delusion of spirituality. Others will go with the existence of a supreme being. Taking these points together and adding them up to "something real" is not solid conceptualization. (There are similar problems with adding together belief in a supreme being and explicit belief in a non-personal supreme force.)

Alternate approach: taking the Simulation Hypothesis seriously means having a significant prior for the existence of some kind of creator. I doubt that theists or people accepting the Simulation Hypothesis would say that their beliefs mostly overlap on the important points.

To what degree, though, should we consider the fact that a lot of non-experts rationalize a view to be evidence for that view? It seems to me that lots of rationalization is strong evidence of confusion among the people who are rationalizing, and possibly weak evidence that, within the dichotomy they perceive, the rationalized answer is closer to being compelling. For instance, if people rationalize theism, this may indicate that they are taking a "no" view on the question "is morality meaningless?" and are confused about the extension of that premise.

The heuristic of averaging beliefs is clearly poor. We all accept that in principle we should agree on the same truth (subject only to differences in irreducible priors) after we share our evidence and update on the union; if we just average the beliefs of some people (expert or otherwise), we under- or over-count some evidence. But good luck finding a willing expert partner for such an exercise - one both capable of really doing it, and generous enough to go to the expense.

You seem to suggest a slightly more subtle heuristic for combining conflicting experts' views, but it's still imperfect.

For example, suppose there are contested facts X and Y. Suppose all 3 experts agree that X->Q and Y->Q, but 1 holds (X and not Y), 1 (Y and not X), and the other (not X and not Y). I claim that this is effectively a 2/3 vote for Q, even though there's a 2/3 vote against both X and Y, although of course as a practical matter such wild disagreement amongst "experts" makes me suspicious of their credentials :) I think this is acceptable even if X is christianity, Y is islam, and Q is "some sort of afterlife". I just wouldn't make the mistake of doing no evaluation of the evidence for X and Y myself.

Here's where your heuristic would work: 1/3 expert holds "A and A->G", 1/3 holds "B and B->G", and 1/3 denies all 4 statements. This should be interpreted at best as a 1/3 vote for G (maybe you think it's no evidence at all?)
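
The contrast between the two cases can be made explicit (a minimal sketch with 0/1 beliefs; the encoding is mine):

```python
# Case 1: facts disputed, implications shared.
#          X  Y  X->Q  Y->Q
case1 = [(1, 0, 1, 1),
         (0, 1, 1, 1),
         (0, 0, 1, 1)]
q_votes = sum((x and xq) or (y and yq) for x, y, xq, yq in case1)
print("case 1: Q gets", q_votes, "/ 3; each implication gets 3 / 3")

# Case 2: facts *and* implications disputed.
#          A  B  A->G  B->G
case2 = [(1, 0, 1, 0),
         (0, 1, 0, 1),
         (0, 0, 0, 0)]
g_votes = sum((a and ag) or (b and bg) for a, b, ag, bg in case2)
print("case 2: G gets", g_votes, "/ 3; each implication gets only 1 / 3")
# The raw headcount for the conclusion is 2/3 in both cases; the difference is
# that in case 1 every inferential step is unanimous, while in case 2 each
# camp's whole route to G is rejected by a 2/3 majority.
```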

I do like your heuristic.

I would say it requires "A and A->G and not B" and "B and B->G and not A".

such wild disagreement amongst "experts" makes me suspicious of their credentials

I think that's part of what I'm trying to quantify here. When there's little direct evidence (or we don't understand it ourselves) and a lot of thinking, experts are pretty much defined by the opinions of other experts. If we want to guess at the reliability of their conclusions, the only track record we have is how often other experts agree with them.