(Cross-posted from my blog.)
Since some belief-forming processes are more reliable than others, learning by what processes different beliefs were formed is very useful, for several reasons. Firstly, if we learn that someone's belief that p (where p is a proposition such as "the cat is on the mat") was formed by a reliable process, such as visual observation under ideal circumstances, we have reason to believe that p is probably true. Conversely, if we learn that the belief that p was formed by an unreliable process, such as motivated reasoning, we have no particular reason to believe that p is true (though it might be - by luck, as it were). Thus we can use knowledge about the process that gave rise to the belief that p to evaluate the chance that p is true.
Secondly, we can use knowledge about belief-forming processes in our search for knowledge. If we learn that some alleged expert's beliefs are more often than not caused by unreliable processes, we are better off looking for other sources of knowledge. Or, if we learn that the beliefs we acquire under certain circumstances - say under emotional stress - tend to be caused by unreliable processes such as wishful thinking, we should cease to acquire beliefs under those circumstances.
Thirdly, we can use knowledge about others' belief-forming processes to try to improve them. For instance, if it turns out that a famous scientist has used outdated methods to arrive at their experimental results, we can announce this publicly. Such "shaming" can be a very effective means of scaring people into using more reliable methods, and will typically have an effect not only on the shamed person, but also on others who learn about the case. (Obviously, shaming also has its disadvantages, but my impression is that it has played a very important historical role in the spreading of reliable scientific methods.)
A useful way of inferring by what process a set of beliefs was formed is to look at its structure. This is a very general method, but in this post I will focus on how we can infer that a certain set of beliefs was most probably formed by (politically) motivated cognition. Another use is covered here, and more will follow in future posts.
Let me give two examples. Firstly, suppose that we give American voters the following four questions:
- Do expert scientists mostly agree that genetically modified foods are safe?
- Do expert scientists mostly agree that radioactive wastes from nuclear power can be safely disposed of in deep underground storage facilities?
- Do expert scientists mostly agree that global temperatures are rising due to human activities?
- Do expert scientists mostly agree that the "intelligent design" theory is false?
The answer to all of these questions is "yes".* Now suppose that a disproportionate number of Republicans answer "yes" to the first two questions and "no" to the third and fourth, and that a disproportionate number of Democrats answer "no" to the first two questions and "yes" to the third and fourth. In the light of what we know about motivated cognition, these are very suspicious patterns or structures of beliefs, since they are precisely the patterns we would expect given the hypothesis that people acquire whatever beliefs on empirical questions suit their political preferences. Since no other plausible hypothesis seems able to explain these patterns as well, the patterns confirm this hypothesis. (Obviously, if we were to give the voters more questions and their answers retained their one-sided structure, that would confirm the hypothesis even more strongly.)
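To make the confirmation step more concrete, here is a minimal sketch of the underlying likelihood comparison in Python. The specific numbers (the prior, and the probability that a biased respondent gives the politically convenient answer) are purely illustrative assumptions, not estimates from the literature.

```python
# Toy Bayesian comparison of two hypotheses about a respondent who answers
# all four questions in the politically convenient direction.
# All numbers below are illustrative assumptions, not empirical estimates.

prior_bias = 0.5                    # prior probability of motivated cognition

# Probability of answering one question in the politically convenient direction,
# assumed independent across questions under each hypothesis.
p_convenient_given_bias = 0.9       # biased respondents mostly follow their preferences
p_convenient_given_impartial = 0.5  # impartial respondents' errors don't track politics

n_questions = 4

# Likelihood of the observed "all answers politically convenient" pattern.
likelihood_bias = p_convenient_given_bias ** n_questions
likelihood_impartial = p_convenient_given_impartial ** n_questions

# Posterior probability of the bias hypothesis, by Bayes' theorem.
posterior_bias = likelihood_bias * prior_bias / (
    likelihood_bias * prior_bias + likelihood_impartial * (1 - prior_bias)
)

print(f"Likelihood ratio (bias vs. impartial): {likelihood_bias / likelihood_impartial:.1f}")
print(f"Posterior probability of bias: {posterior_bias:.2f}")
```

With these illustrative numbers the one-sided answer pattern is roughly ten times as likely under the bias hypothesis as under the impartiality hypothesis, and each additional question that fits the pattern multiplies the likelihood ratio further - which is why longer one-sided sequences of answers confirm the hypothesis even more strongly.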
Secondly, consider a policy question - say minimum wages - on which a number of empirical claims have bearing. For instance, these empirical claims might be that minimum wages significantly decrease employers' demand for new workers, that they cause inflation and that they significantly reduce workers' tendency to use public services (since they now earn more). Suppose that there are five such claims which tell in favour of minimum wages and five that tell against them, and that you think that each of them has a roughly 50 % chance of being true. Also, suppose that they are probabilistically independent of each other, so that learning that one of them is true does not affect the probabilities of the other claims.
Now suppose that in a debate, all proponents of minimum wages defend all of the claims that tell in favour of minimum wages and reject all of the claims that tell against them, and vice versa for the opponents of minimum wages. This is a very surprising pattern. It might of course be that one side is right across the board, but given your prior probability distribution (the claims are independent and each has a 50 % probability of being true), a more reasonable interpretation of the striking degree of coherence within both sides is, by your lights, that both sides are biased; that both are using motivated cognition. (See also this post for more on this line of reasoning.)
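To see just how surprising that coherence is under your prior, here is a quick back-of-the-envelope calculation (a sketch assuming exactly the independence and 50 % figures stated above):

```python
# Probability, under the stated prior (ten independent empirical claims,
# each with a 50 % chance of being true), that the evidence genuinely
# lines up entirely with one side of the minimum-wage debate.

n_claims = 10

p_one_side_fully_right = 0.5 ** n_claims        # one specific side right on every claim
p_either_side_fully_right = 2 * p_one_side_fully_right

print(f"P(a given side is right on all ten claims): {p_one_side_fully_right:.4f}")    # ~0.001
print(f"P(either side is right on all ten claims):  {p_either_side_fully_right:.4f}") # ~0.002
```

So by your own prior, a perfectly one-sided package of claims should almost never be what an impartial review of the evidence delivers; observing such packages on both sides is therefore much better explained by the hypothesis that both camps are engaged in motivated cognition.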
The difference between the first and the second case is that in the former, your hypothesis that the test-takers are biased is based on the fact that they are provably wrong on certain questions, whereas in the second case, you cannot point to any issue on which either side is provably wrong. However, the patterns of their claims are so improbable given the hypothesis that they have reviewed the evidence impartially, and so likely given the hypothesis of bias, that they nevertheless strongly confirm the latter. What they are saying is simply "too good to be true".
These kinds of arguments, in which you infer a belief-forming process from a structure of beliefs (i.e. you reverse-engineer the beliefs), have of course always been used. (A salient example is Marxist interpretations of "bourgeois" belief structures, which, Marx argued, supported the bourgeoisie's material interests to a suspiciously high degree.) Recent years have, however, seen a number of developments that should make them less speculative and more reliable and useful.
Firstly, psychological research such as Tversky and Kahneman's has given us a much better picture of the mechanisms by which we acquire beliefs. Experiments have shown that we fall prey to an astonishing list of biases, and have identified the circumstances that are most likely to trigger them.
Secondly, a much greater portion of our behaviour is now being recorded, especially on the Internet (where we spend an increasing share of our time). This obviously makes it much easier to spot suspicious patterns of beliefs.
Thirdly, our algorithms for analyzing behaviour are quickly improving. FiveLabs recently launched a tool that analyzes your big five personality traits on the basis of your Facebook posts. Granted, this tool does not seem completely accurate, and inferring bias promises to be a harder task (since the correlations are more complicated than that between usage of exclamation marks and extraversion, or that between using words such as "nightmare" and "sick of" and neuroticism). Nevertheless, better algorithms and more computing power will take us in the right direction.
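As a rough illustration of what such tools work with at the shallow end, here is a toy Python sketch that extracts the kinds of surface features mentioned above (exclamation marks, words like "nightmare" and "sick of"). The feature-to-trait pairings merely restate the correlations from the paragraph above; the scoring itself is a made-up illustration, not FiveLabs' actual method.

```python
# Toy extraction of shallow text features of the kind personality-from-text
# tools rely on. The trait pairings restate the correlations mentioned above;
# everything else here is an illustrative simplification.

NEUROTICISM_MARKERS = ["nightmare", "sick of"]

def crude_trait_signals(posts):
    """Return per-word rates of two crude signals from a list of posts."""
    text = " ".join(posts).lower()
    word_count = max(len(text.split()), 1)
    exclamation_rate = text.count("!") / word_count          # correlates with extraversion
    neurotic_rate = sum(text.count(m) for m in NEUROTICISM_MARKERS) / word_count
    return {"extraversion_signal": exclamation_rate, "neuroticism_signal": neurotic_rate}

posts = ["What a nightmare of a commute, sick of these delays...", "Great show last night!!!"]
print(crude_trait_signals(posts))
```

Detecting motivated cognition would require going beyond such word counts to the structure of the positions a person endorses - which claims they accept and reject, and how suspiciously well those claims line up with their political preferences - which is why it promises to be the harder task.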
In my view, there is thus a large untapped potential to infer bias from the structure of people's beliefs, which in turn would be inferred from their online behaviour. In coming posts, I intend to flesh out my ideas on this in some more detail. Any comments are welcome and might be incorporated in future posts.
* The second and the third questions are taken from a paper by Dan Kahan et al., which refers to the US National Academy of Sciences (NAS) assessment of expert scientists' views on these questions. Their study shows that many conservatives don't believe that experts agree on climate change, whereas a fair number of liberals think experts don't agree that nuclear storage is safe, confirming the hypothesis that people let their political preferences influence their empirical beliefs. The assessments of expert consensus on the first and fourth questions are taken from Wikipedia.
Asking people what they think about the expert consensus on these issues, rather than about the issues themselves, is a good idea, since it's much easier to come to an agreement on what the true answer is for the former sort of question. (Of course, you can deny that professors from prestigious universities count as expert scientists, but that would be a quite extreme position that few people hold.)
It would seem to me that these claims aren't consistent. I agree with the first claim, not with the second. It's true that experts' claims are objectively and directly verifiable, but lots of the time checking that direct evidence is not an optimal use of our time. Instead we're better off deferring to experts (which we actually also do, as you say, on a massive scale).
I wrote a very long post on a related theme - "genetic arguments" - some time ago, by the way.
Well, according to the betting interpretation of degrees of belief, this just means that you would, if rational, be willing to accept bets that are based on the claim in question having a 50 % chance of being true (but not bets based on the claim that it has, say, a 51 % chance of being true). But sure, sometimes it can seem a bit contrived to assign a definite probability to claims you know little about.
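For concreteness, here is a minimal sketch of what that commitment amounts to; the stakes and payouts are arbitrary illustrations.

```python
# Betting interpretation, toy version: a 50 % credence in a claim means a bet
# on the claim is acceptable exactly when its odds are at least even.
# Stake and payout figures below are arbitrary illustrations.

def expected_gain(credence, payout_if_true, stake):
    """Expected gain from risking `stake` to win `payout_if_true` if the claim is true."""
    return credence * payout_if_true - (1 - credence) * stake

credence = 0.5
print(expected_gain(credence, payout_if_true=10, stake=10))  # 0.0  -> fair (even-odds) bet
print(expected_gain(credence, payout_if_true=10, stake=11))  # -0.5 -> reject: worse than even odds
print(expected_gain(credence, payout_if_true=10, stake=9))   # +0.5 -> accept: better than even odds
```

At a 51 % credence the break-even stake shifts slightly upward, which is all the difference between the two kinds of bets mentioned above amounts to.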
I don't agree with that. We use others' statements as a source of evidence on a massive scale (i.e. we defer to them). Indeed, experiments show that we do this automatically. But if these statements express beliefs that were produced by unreliable processes - e.g. bias - then that's clearly not a good strategy. Hence we should care very much about whether someone is biased when evaluating the veracity of many claims.
Also, as I said, if we find out that someone is biased, then we have little reason to use that person as a source of knowledge.
What I want to stress is the need for cognitive economy. We don't have time to check the direct evidence for different claims a lot of the time (as you yourself admit above) and therefore have to use assessments of others' reliability. Knowledge about bias is a vital (but not the only) ingredient in our assessments of reliability, and is hence extremely useful.
I'm making a separate reply for the betting thing, only to try to keep the two conversations clean/simple.
Let's muddle through it: If I have a box containing an unknown (to you) number of gumballs and I claim that there are an odd number of gumballs, you would actually be quite reasonable in assigning a 50% chance to my claim being true.
If I claim that the gumballs in the box are blue, would you say there is a 50% chance of my claim being true?
What if I claimed that I ate pizza last night?
You might have a certain level of confidence in my accuracy and m...