If a majority of experts agree on an issue, a rationalist should be prepared to defer to their judgment. It is reasonable to expect that the experts have superior knowledge and have considered many more arguments than a lay person would be able to. However, if experts are split into camps that reject each other's arguments, then it is rational to take their expert rejections into account. This is the case even among experts that support the same conclusion.
If two-thirds of experts support proposition G (one third because of reason A while rejecting B, and one third because of reason B while rejecting A) and the remaining third reject both A and B, then a majority reject A, and a majority reject B. G should not be treated as a reasonable majority view.
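A quick sanity check of the camp arithmetic (the camp sizes are the hypothetical ones from the example, not real data):

```python
from fractions import Fraction

# Hypothetical camp sizes from the example above.
camp_A = Fraction(1, 3)        # support G because of A, reject B
camp_B = Fraction(1, 3)        # support G because of B, reject A
camp_neither = Fraction(1, 3)  # reject both A and B

support_G = camp_A + camp_B       # 2/3 support G...
reject_A = camp_B + camp_neither  # ...yet 2/3 reject A
reject_B = camp_A + camp_neither  # ...and 2/3 reject B

print(support_G, reject_A, reject_B)  # 2/3 2/3 2/3
```

So the same tally that makes G a "majority view" also makes every individual reason for G a minority view.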
This should be clear if A is the Koran and B is the Bible.
Positions that fundamentally disagree cannot be combined, even on conclusions they share, when those conclusions depend on the disputed reasons. On the contrary, if people offer lots of different contradictory reasons for a conclusion (even if each individual has consistent beliefs), it is a sign that they are rationalizing their position.
An exception to this is if experts agree on something for the same proximal reasons. If pharmacists were split into camps that disagreed on what atoms fundamentally were, but agreed on how chemistry and biology worked, then we could add those camps together as authorities on what the effect of a drug would be.
If we're going to add up expert views, we need to add up what experts consider important about a question and agree on, not individual features of their conclusions.
Some differing reasons can be additive: Evolution has support from many fields. We can add the analysis of all these experts together because the paleontologists do not generally dispute the arguments of geneticists.
Different people might justify vegetarianism by citing the suffering of animals, health benefits, environmental impacts, or purely spiritual concerns. As long as there isn't a camp of vegetarians that claims it has, e.g., no redeeming health benefits, we can more or less add all those opinions together.
We shouldn't add up two experts if they would consider each other's arguments irrational. That's ignoring their expertise.
Correct me if I'm wrong here, but you don't seem to have any good reason for assuming P(A)=1/3.
It only works if you assume that the probability of a view being correct is equal to the proportion of experts that support it (perhaps you believe that one expert is omniscient and the others are just making uneducated guesses). If you're going to assume that, you might as well shorten the argument by just pointing out that P(G)=2/3 since 2/3 of the experts agree with G.
If we instead start from a prior more like that of the OP, one which says:

P(argument X is correct | the majority of experts agree with X) = 0.9
P(argument X is incorrect | the majority of experts disagree with X) = 0.9
This makes our final estimate of P(G) roughly equal to our prior estimate of P(G | ~A & ~B), which is the OP's point.
Or, to put it another way, one which should work with most reasonable priors:
Define C to be the background information that two-thirds of experts support proposition G, one third because of reason A while rejecting B and one third because of reason B while rejecting A, and the remaining third reject both A and B.
Since belief in A and belief in B anti-correlate strongly among experts, it is reasonable to assume that P(A & B) = 0 (approximately). I will assume this without mentioning it again from now on.
P(G) = P(A)P(G | A & ~B) + P(B)P(G | ~A & B) + P(~A & ~B)P(G | ~A & ~B), since our estimate now must equal our expectation of what our future estimate would be if we discovered for certain whether A and B were correct.
If A and B are arguments for G that must mean that P(G | A) > P(G | ~A) and P(G | B) > P(G | ~B). Using fairly simple maths we can prove from this and the fact that P(A & B) = 0 that P(G | ~A & ~B) < P(G | A & ~B) and P(G | ~A & ~B) < P(G | ~A & B). This means that as P(~A & ~B) increases P(G) must decrease.
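A numeric sketch of that last claim, using invented conditional probabilities that satisfy the premises (P(A & B) = 0, and both P(G | A & ~B) and P(G | ~A & B) exceed P(G | ~A & ~B)):

```python
# Made-up conditionals consistent with A and B each being arguments for G.
g_given = {'A': 0.9, 'B': 0.9, 'neither': 0.2}

def p_G(p_neither):
    """P(G) via the total-probability decomposition above, splitting the
    remaining probability mass evenly between the A-camp and the B-camp."""
    p_each = (1 - p_neither) / 2
    return (p_each * g_given['A']
            + p_each * g_given['B']
            + p_neither * g_given['neither'])

# As P(~A & ~B) grows, P(G) shrinks:
print(p_G(0.2), p_G(0.4), p_G(0.6))
```

The exact numbers don't matter; any assignment satisfying the two inequalities shows the same monotone decrease.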
Assuming we place some trust in experts, we must accept that if the majority of experts disagree with an argument then this is evidence against that argument.
If we find that the majority of experts disagree with A, this must reduce P(A), and it must increase the weighted average of P(B) and P(~A & ~B). The evidence doesn't distinguish between these other two possibilities (the majority of experts would probably disagree with A whichever of them was true), so both of them should increase.
If we find that the majority of experts disagree with B then by the same argument this must reduce P(B) and increase P(A) and P(~A & ~B).
If C is true then both of the above things happen, and P(~A & ~B) increases twice, so P(~A & ~B | C) > P(~A & ~B).
This means, for reasons established above, that P(G | C) < P(G). The OP is right; this distribution of expert opinion is evidence against G.
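The whole argument can be checked end to end with a toy Bayesian model. Every number below is invented for illustration: the priors, the conditionals on G, and the 0.9/0.1 reliability of an expert majority (echoing the figures above); the two majority verdicts are assumed independent given the truth values.

```python
# Mutually exclusive hypotheses, since P(A & B) ~ 0.
priors = {'A': 0.3, 'B': 0.3, 'neither': 0.4}
p_G_given = {'A': 0.9, 'B': 0.9, 'neither': 0.2}

# Assumed reliability of an expert majority: a true argument is rejected
# with probability 0.1, a false one with probability 0.9.
reject_if_true, reject_if_false = 0.1, 0.9

# C = "the majority rejects A AND the majority rejects B".
likelihood = {
    'A':       reject_if_true * reject_if_false,   # A true, B false
    'B':       reject_if_false * reject_if_true,   # B true, A false
    'neither': reject_if_false * reject_if_false,  # both false
}

evidence = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / evidence for h in priors}

p_G_prior = sum(priors[h] * p_G_given[h] for h in priors)
p_G_post = sum(posterior[h] * p_G_given[h] for h in priors)
print(p_G_prior, p_G_post)  # observing C pushes P(G) down
```

Under these numbers P(~A & ~B) rises sharply after conditioning on C, dragging P(G | C) below P(G), exactly as argued.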