If a majority of experts agree on an issue, a rationalist should be prepared to defer to their judgment. It is reasonable to expect that the experts have superior knowledge and have considered many more arguments than a layperson could. However, if experts are split into camps that reject each other's arguments, then it is rational to take their expert rejections into account. This is the case even among experts that support the same conclusion.
If 2/3 of experts support proposition G (1/3 because of reason A while rejecting B, and 1/3 because of reason B while rejecting A), and the remaining 1/3 reject both A and B, then a 2/3 majority rejects A and a 2/3 majority rejects B. G should not be treated as a reasonable majority view.
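As a sanity check on those tallies, here is a minimal sketch (the camp sizes are the fractions above; nothing else is assumed):

```python
from fractions import Fraction

third = Fraction(1, 3)

# Three equal camps of experts:
#   camp 1 supports G because of A (and rejects B)
#   camp 2 supports G because of B (and rejects A)
#   camp 3 rejects both A and B (and rejects G)
support_G = third + third  # camps 1 and 2
reject_A = third + third   # camps 2 and 3
reject_B = third + third   # camps 1 and 3

print(support_G, reject_A, reject_B)  # 2/3 2/3 2/3
```

So the same 2/3 majority that supports G also rejects each of the only two reasons offered for it.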
This should be clear if A is the Koran and B is the Bible.
Positions that fundamentally disagree do not combine on the dependent conclusions they happen to share. On the contrary, if people offer many different contradictory reasons for a conclusion (even if each individual's beliefs are consistent), it is a sign that they are rationalizing their position.
An exception to this is if experts agree on something for the same proximal reasons. If pharmacists were split into camps that disagreed on what atoms fundamentally were, but agreed on how chemistry and biology worked, then we could add those camps together as authorities on what the effect of a drug would be.
If we're going to add up expert views, we need to add up what experts consider important about a question and agree on, not individual features of their conclusions.
Some differing reasons can be additive: Evolution has support from many fields. We can add the analysis of all these experts together because the paleontologists do not generally dispute the arguments of geneticists.
Different people might justify vegetarianism by citing the suffering of animals, health benefits, environmental impacts, or purely spiritual concerns. As long as there isn't a camp of vegetarians that claims it lacks, e.g., redeeming health benefits, we can more or less add all those opinions together.
We shouldn't add up two experts if they would consider each other's arguments irrational. That's ignoring their expertise.
Huh? Why not?
I have heard there are some Bayesians here. I think this is rightly treated by saying: A => G, B => G, P(A) = 1/3, P(B) = 1/3, P(A & B) = 0; therefore P(G) is at least 2/3.
(I don't actually mean that P(A&B) = 0; I mean they aren't independent, and expert belief in A is anti-correlated with expert belief in B. The semantics are a bit fiddly, but I think at the end of the day you should use the above calculations.)
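Spelled out, that calculation is just inclusion-exclusion on the two priors asserted above (a sketch that takes those numbers at face value):

```python
from fractions import Fraction

p_A = Fraction(1, 3)
p_B = Fraction(1, 3)
p_A_and_B = Fraction(0)  # the camps are treated as mutually exclusive

# A => G and B => G, so G holds whenever A or B does:
#   P(G) >= P(A or B) = P(A) + P(B) - P(A & B)
p_G_lower_bound = p_A + p_B - p_A_and_B
print(p_G_lower_bound)  # 2/3
```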
This is how Bayes networks work. You sum over the different causal chains and see the numbers that pop out at the end. You don't try to establish what's actually true for the nodes inside the network.
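That summing-over-chains step can be sketched as marginalizing a toy joint table (a hypothetical two-node example, not anything from the original comment; G is assumed false when both A and B are, which makes this a lower bound):

```python
from fractions import Fraction

# Toy network: A and B feed into G. Joint distribution over (A, B),
# using the expert split P(A) = 1/3, P(B) = 1/3, P(A & B) = 0.
p_joint = {
    (True, False): Fraction(1, 3),
    (False, True): Fraction(1, 3),
    (False, False): Fraction(1, 3),
    (True, True): Fraction(0),
}

# Sum over the internal states in which G is implied (A => G, B => G),
# without ever settling which of A or B is actually true.
p_G = sum(p for (a, b), p in p_joint.items() if a or b)
print(p_G)  # 2/3
```

The point is that the number at the output node comes from summing all the chains, not from first deciding the truth of A or B.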
If your reasoning were correct, then it would still be correct even if the choices were G, H, I, J, K, L, M, and N, with equal priors. Would you still say the expert opinions on A and B cancel each other out, making G no more likely than any of the other 7 choices?
Correct me if I'm wrong here, but you don't seem to have any good reason for assuming P(A)=1/3.
It only works if you assume that the probability of a view being correct is equal to the proportion of experts that support it (perhaps you believe that one expert is omniscient and the others are just making uneducated guesses). If you're going to assume that, you might as well shorten the argument by just pointing out that P(G)=2/3 since 2/3 of the experts agree with G.
If we instead start from a prior more like that of the OP, one which says: P(argument X is co...