The majority of people would hold more accurate beliefs if they simply believed the majority. To state this in a way that doesn't risk information cascades: we're talking about everyone averaging their impressions and arriving at the same belief.

To the degree that you and others come up with different averages of the impressions, you acknowledge that your belief was just your impression of the average, average those meta-impressions, and get closer to convergence. You can repeat this until you get bored, but if you're doing it right, your beliefs should get closer and closer to agreement, and you shouldn't be able to predict who is going to fall on which side.
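
Here is a minimal sketch of that iterated averaging, assuming a crude update rule in which each agent just mixes their belief with the group mean every round (my own toy model, not anything specified in the post):

```python
# Toy model of the iterated averaging described above: each round, every agent
# mixes their current belief with the group mean. Beliefs converge, and nothing
# in the process tells you in advance who lands on which side of the final answer.

def iterate_toward_consensus(impressions, weight_on_others=0.5, rounds=20):
    """Repeatedly move each belief partway toward the group average."""
    beliefs = list(impressions)
    for _ in range(rounds):
        mean = sum(beliefs) / len(beliefs)
        beliefs = [(1 - weight_on_others) * b + weight_on_others * mean
                   for b in beliefs]
    return beliefs

print(iterate_toward_consensus([0.2, 0.9, 0.6]))  # all values approach ~0.567
```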

Of course, most of us are atypical cases, and as good rationalists, we need to update on this information. Even if our impressions were (on average) no better than the average, there are certain cases where we know that the majority is wrong. If we're going to selectively apply majoritarianism, we need to figure out the rules for when to apply it, to whom, and how the weighting works.

This much, I think, has been said again and again. I'm gonna attempt to describe how to do it.

Imagine for a moment that you are a perfectly rational Bayesian, and you just need data.

First, realize that "duplicate people" don't count double. If you make a maximum-precision copy of someone, that doesn't make him any more likely to be right, so clearly we can do better than averaging over all people with equal weighting. By the same token, finding out that a certain train of thought leading to a certain belief is common shouldn't make you proportionally more confident in that belief. The only reason it might make you any more confident is the possibility that its truth leads to its proliferation, so that its popularity is (weak) evidence.
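
A hypothetical sketch of that deduplication idea (the labels and numbers below are mine, purely illustrative): group people by their (ritual of cognition, data set) signature, so a million copies of the same reasoner collapse into a single entry before anything is counted.

```python
# Collapse everyone who shares the same ritual of cognition and data set
# into one entry; duplicates contribute nothing extra.

def unique_positions(people):
    """people: list of (roc_id, data_id, belief). Duplicates collapse to one."""
    return list({(roc, data): belief for roc, data, belief in people}.values())

crowd = ([("revelation", "scripture", 0.99)] * 1_000_000
         + [("empiricism", "lab-data", 0.02)])
print(unique_positions(crowd))  # only two distinct positions survive: [0.99, 0.02]
```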

This explains why we can dismiss the beliefs of the billions of theists. First, their beliefs are highly correlated, so all the useful information can be learned from only a handful of theists. Second, we understand their arguments and how they formed their beliefs, and we have already taken them into account. The reason they continue to disagree is that the situation isn't symmetric: they don't understand the opposing arguments or the causal path that leads one to become a reductionist atheist.

No wonder "majoritarianism" doesn't seem to work here.

Since we're still pretending to be perfect Bayesians, we only care about people who are fairly predictable (given access to their information) and have information that we don't have. If they don't have any new information, then we can just follow the causal path and say "and here, sir, is where you went wrong." Even if we don't understand their minds perfectly, we don't take them seriously, since it is clear that whatever they were doing, they were doing it wrong. On the other hand, if the other person has a lot of data but we have no idea how data affects their beliefs, then we can't extract any useful information.

We only change our beliefs to more closely match theirs when they are not only predictable, but predictably rational. If you know someone is always wrong, then reversing his stupidity can help you get more accurate beliefs, but it won't bring you closer to agreement; just the opposite!
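
As a toy illustration of that reversal (the numbers are made up): if someone asserts H more often when H is false than when it is true, a Bayes update on their testimony moves you away from H, and therefore further from agreeing with them.

```python
# Bayes update on testimony from a reliably wrong source (illustrative numbers).
prior = 0.5
p_says_h_given_h = 0.2       # asserts H only 20% of the time when H is true
p_says_h_given_not_h = 0.8   # asserts H 80% of the time when H is false

posterior = (p_says_h_given_h * prior) / (
    p_says_h_given_h * prior + p_says_h_given_not_h * (1 - prior))
print(posterior)  # 0.2 -- their saying "H" pushes you away from H, and away from them
```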

If we stop kidding ourselves and admit that we aren't perfect Bayesians, then we have to start giving credit to how other people think. If you and an epistemic peer come upon the same data set and come to different conclusions, then you have no reason to think that your way of thinking is any more accurate than his (since we assumed he's an epistemic peer). While you may have different initial impressions, you had better be able to converge to the same belief. And again, on each iteration, it shouldn't be predictable who is going to fall on which side.

If we revisit cases like religion, you still understand how the believers came to their beliefs and exactly why those beliefs fail. So to the extent that you believe you can recognize stupidity when you see it, you still stick to your own belief. Even though you aren't a perfect Bayesian, for this case you're good enough.

One-sentence summary: You want to shift your belief to the average over answers given by predictably rational "Rituals of Cognition"/data set pairs [1], not people [2].

You weight the different "Rituals of Cognition"/data pairs by how much you trust the ROC and by how large the data set is. You must, however, keep in mind that to trust yourself more than average, you have to have a better-than-average reason to think that you're better than average.
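
A rough sketch of how that weighting might look; the data structure and the trust-times-data-size weight formula are my own stand-ins, not something the post specifies:

```python
from dataclasses import dataclass

@dataclass
class RocDataPair:
    """One "Ritual of Cognition"/data-set pair and the answer it produces."""
    answer: float     # the belief (e.g. a probability) this pair outputs
    roc_trust: float  # how much you trust this way of thinking, in [0, 1]
    data_size: float  # rough size/informativeness of the data set it saw

def aggregate(pairs):
    """Weighted average of answers; the trust * data_size weight is a stand-in."""
    weights = [p.roc_trust * p.data_size for p in pairs]
    return sum(w * p.answer for w, p in zip(weights, pairs)) / sum(weights)

# Example: your own take, a peer who saw twice the data, and a poorly trusted
# cluster of near-identical thinkers that collectively gets a single entry.
pairs = [
    RocDataPair(answer=0.7, roc_trust=0.8, data_size=1.0),
    RocDataPair(answer=0.4, roc_trust=0.8, data_size=2.0),
    RocDataPair(answer=0.9, roc_trust=0.3, data_size=1.0),
]
print(aggregate(pairs))  # roughly 0.54
```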

To the extent that everyone has a unique take on the subject, counting people and counting cognitive rituals are equivalent. But when it comes to a group where all the people think in pretty much the same way, they collectively get only one "vote".

You can get "bonus points" if you can predict how irrational people will respond to data, and update based on that. For practical purposes, though, I don't think much of this happens, as not many people are intelligently stupid.

 

ETA: This takes the anthropomorphism out of the loop. We're looking at valid ROCs, and polling human beliefs is just a cheap way to find them. If we can come up with other ways of finding them, I expect that to be very valuable. The smart people who impress me most aren't the ones that learn slightly quicker, since everyone else gets there too. The smart people who impress me the most come in where everyone else is stumped and chop the Gordian knot in half with their unique way of thinking about the problem. Can we train this skill?

Footnotes:
1. I'm fully aware of how hokey this sounds without any real math there, but it seems like it should be formalizable. If you're just trying to improve human rationality (as opposed to programming an AI), the real math would have to be interpreted again anyway, and I'm not gonna spend the time right now.

2. Just as thinking identically to your twin doesn't help you get the right answer (and therefore is weighted less), if you can come up with more than one valid way of looking at things, you can justifiably expect to be weighted as strongly as a small group of people.

Comments:

I can't be in favor of any philosophical method that involves people thinking less than they do already.

All this talk of "Bayesian rationality" seems to me to be a smokescreen for justifying our favorite beliefs by redefining evidence and then finding evidence that lets us conclude what we wish.

I strongly suspect that there are general principles that would allow us to predict, with high accuracy, which topics we would be more accurate on by going with the majority, and on which topics going with the majority will lead to error and inaccuracy instead. I'm always somewhat shocked that people aren't more concerned with elucidating those principles.

"philisophical" --> "philosophical"

Doh! fixed.

This assumes that the error terms don't correlate significantly, and that this is a case where Aumann's Agreement applies.

Which, considering that one of these error terms is the estimation of someone's rationality based on little more than a few publicly stated beliefs, is perhaps a dangerous assumption to make.

> This assumes that the error terms don't correlate significantly, and that this is a case where Aumann's Agreement applies.

Which error terms are you referring to, and how would you do better?

> Which, considering that one of these error terms is the estimation of someone's rationality based on little more than a few publicly stated beliefs, is perhaps a dangerous assumption to make.

Dangerous? It's just that in those cases you have to have wide error bars. You can't expect information to hurt.

As for the error terms: the reason majority methods are often reliable is that they exploit the typical feature that the correct answer will correlate with itself (which is why we need Aumann's Agreement to apply) and that the errors will not correlate significantly with each other (which could be false if there is a strong attractor in the solution space, like a narrow pass past a cliff).

If these conditions apply, then your majority will be correct with a high degree of confidence. If not, then your confidence is much lower. The problem is that it is not clear how to determine whether these conditions apply without so much analysis of the problem as to make the majority method largely unnecessary.
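
A toy simulation of that point (my own, with made-up numbers): when voters err independently, a simple majority of mediocre voters is very reliable, but even occasional group-wide correlated errors drag the reliability down sharply.

```python
import random

def majority_accuracy(n_voters=101, p_correct=0.6, shared_error=0.0,
                      trials=2000, seed=0):
    """How often a simple majority is right when each voter is independently
    correct with probability p_correct, except that with probability
    shared_error the whole group suffers a common error and votes together
    at random. Purely illustrative numbers."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        if rng.random() < shared_error:
            # correlated failure: everyone follows the same coin flip
            votes_correct = n_voters if rng.random() < 0.5 else 0
        else:
            votes_correct = sum(rng.random() < p_correct for _ in range(n_voters))
        wins += votes_correct > n_voters / 2
    return wins / trials

print(majority_accuracy(shared_error=0.0))  # independent errors: roughly 0.97
print(majority_accuracy(shared_error=0.5))  # strongly correlated: roughly 0.74
```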

Perhaps someone has a quick way, but things like in-depth understanding of solution spaces and careful Aumann Agreement analysis seem like costly prerequisites for using majority methods. Personally, my approach would be to treat majority methods as potentially useful but unreliable for these reasons, and to base my weighting of the majority on prior evidence of correct or useful estimates rather than on estimations of rationality.

Of course the most evident danger comes from treating the methods with more confidence than they warrant. But another danger is that estimating rationality as the basis of your method can easily degrade into taking the majority of the positions you already favor. Cognitive short-circuiting like this is very, very easy, and in this case the method is especially vulnerable to it unless an extremely solid method of rationality estimation is packaged with it.

Shorter and simpler: if people base their beliefs on other people's beliefs, without independently examining the evidence and reaching their own conclusions, you can easily generate massive consensus based on nothing at all.

Why do you count (ROC, data) pairs?

Clearly, if all people rely on the same data, but all use different (but quite sound) cognitive rituals, there is still only one data set.

I'd think that you should first compute the meaning of the data by averaging over all (apparently sound) ROC results, and then update based on that outcome. I.e., if only lunatics saw something and they all say it means A, then that counts for nothing. If a bunch of trustworthy Bayesians see something and they all conclude B, then that counts like one trustworthy Bayesian for B. If some trustworthy Bayesians, plus a similar number of (apparently "winning", hence presumably sound-ROC) aliens who refuse to update a la Bayes, say it's B, there is still only one vote.

If the aliens and the Bayesians saw different data, though, that'll make two votes.

It's a question of how much the variance in the data messes up your conclusions compared to the variance in the ROC.

If all the variance is in the data, then sure, several valid interpretations of the same data barely outweigh an individual with a unique data set.

However, if the data is largely shared but it's a tough problem, so people hack at it in wildly different ways (e.g. outside view vs. inside view), then you care more about a different valid ROC than about another slightly different data set.

I intended (though probably failed) to convey the idea of non-integer numbers of votes depending on the degree of correlation between data sets/ROCs. If the data sets are 90% overlapping, then you don't get a full vote for adding another. If your ROCs are largely overlapping (e.g. two attempts at the outside view), then you only get a small increase in voting power, but if the difference is large (e.g. inside vs. outside view) you can get almost another full vote.
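
One way the non-integer-vote idea could be cashed out, assuming a simple linear discount for overlap (the discount rule is a placeholder of mine, not something from the thread):

```python
def effective_votes(overlaps):
    """overlaps: for each contributor, their fractional overlap with what has
    already been counted (0 = fully independent, 1 = a pure duplicate).
    Each contributor earns (1 - overlap) of a vote; the linear discount stands
    in for any monotone penalty on correlation."""
    return sum(1.0 - o for o in overlaps)

# A second attempt at the outside view on ~90% overlapping material adds little,
# while a genuinely different angle on mostly shared data adds nearly a full vote.
print(effective_votes([0.0, 0.9]))  # ~1.1 votes
print(effective_votes([0.0, 0.2]))  # ~1.8 votes
```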