
Reverse engineering of belief structures

Post author: Stefan_Schubert 26 August 2014 06:00PM

(Cross-posted from my blog.)

Since some belief-forming processes are more reliable than others, learning by what processes different beliefs were formed is useful for several reasons. Firstly, if we learn that someone's belief that p (where p is a proposition such as "the cat is on the mat") was formed by a reliable process, such as visual observation under ideal circumstances, we have reason to believe that p is probably true. Conversely, if we learn that the belief that p was formed by an unreliable process, such as motivated reasoning, we have no particular reason to believe that p is true (though it might be - by luck, as it were). Thus we can use knowledge about the process that gave rise to the belief that p to evaluate the chance that p is true.

Secondly, we can use knowledge about belief-forming processes in our search for knowledge. If we learn that some alleged expert's beliefs are more often than not caused by unreliable processes, we are better off looking for other sources of knowledge. Or, if we learn that the beliefs we acquire under certain circumstances - say under emotional stress - tend to be caused by unreliable processes such as wishful thinking, we should cease to acquire beliefs under those circumstances.

Thirdly, we can use knowledge about others' belief-forming processes to try to improve them. For instance, if it turns out that a famous scientist has used outdated methods to arrive at their experimental results, we can announce this publicly. Such "shaming" can be a very effective means of scaring people into using more reliable methods, and will typically have an effect not only on the shamed person, but also on others who learn about the case. (Obviously, shaming also has its disadvantages, but my impression is that it has played a very important historical role in the spreading of reliable scientific methods.)

 

A useful way of inferring by what process a set of beliefs was formed is by looking at its structure. This is a very general method, but in this post I will focus on how we can infer that a certain set of beliefs most probably was formed by (politically) motivated cognition. Another use is covered here and more will follow in future posts.

Let me give two examples. Firstly, suppose that we give American voters the following four questions:

  1. Do expert scientists mostly agree that genetically modified foods are safe?
  2. Do expert scientists mostly agree that radioactive wastes from nuclear power can be safely disposed of in deep underground storage facilities?
  3. Do expert scientists mostly agree that global temperatures are rising due to human activities?
  4. Do expert scientists mostly agree that the "intelligent design" theory is false?

The answer to all of these questions is "yes".* Now suppose that a disproportionate number of Republicans answer "yes" to the first two questions and "no" to the third and fourth, and that a disproportionate number of Democrats answer "no" to the first two questions and "yes" to the third and fourth. In the light of what we know about motivated cognition, these are very suspicious patterns or structures of beliefs, since they are precisely the patterns we would expect given the hypothesis that people acquire whatever beliefs on empirical questions suit their political preferences. Since no other plausible hypothesis seems able to explain these patterns as well, they confirm this hypothesis. (Obviously, if we were to give the voters more questions and their answers retained their one-sided structure, that would confirm the hypothesis even more strongly.)

Secondly, consider a policy question - say minimum wages - on which a number of empirical claims have bearing. For instance, these empirical claims might be that minimum wages significantly decrease employers' demand for new workers, that they cause inflation, that they significantly increase the supply of workers (since they provide stronger incentives to work) and that they significantly reduce workers' tendency to use public services (since they now earn more). Suppose that there are five such claims which tell in favour of minimum wages and five that tell against them, and that you think that each of them has a roughly 50 % chance of being true. Also, suppose that they are probabilistically independent of each other, so that learning that one of them is true does not affect the probabilities of the other claims.

Now suppose that in a debate, all proponents of minimum wages defend all of the claims that tell in favour of minimum wages, and reject all of the claims that tell against them, and vice versa for the opponents of minimum wages. Now this is a very surprising pattern. It might of course be that one side is right across the board, but given your prior probability distribution (that the claims are independent and have a 50 % probability of being true) a more reasonable interpretation of the striking degree of coherence within both sides is, according to your lights, that they are both biased; that they are both using motivated cognition. (See also this post for more on this line of reasoning.)
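The improbability of this pattern can be quantified under the assumptions just stated (ten independent claims, each with a 50% chance of being true). A minimal sketch, using only those stated assumptions, works out how unlikely it is that the evidence genuinely lines up with either side's entire package:

```python
# Ten empirical claims bearing on minimum wages: five pro, five con.
# Prior (as stated in the text): each claim is independently true
# with probability 0.5.
p_true = 0.5
n_claims = 10

# Probability that the world matches one side's package exactly
# (all five "pro" claims true AND all five "con" claims false):
p_one_side_right = p_true ** n_claims

# Either side could be the one that is right across the board:
p_either_side_right = 2 * p_one_side_right

print(p_one_side_right)     # 0.0009765625  (about 1 in 1024)
print(p_either_side_right)  # 0.001953125   (about 1 in 512)
```

So on your prior, the chance that either side's across-the-board position is simply correct is roughly 0.2%, which is why the perfect coherence of both camps looks suspicious.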

The difference between the first and the second case is that in the former, your hypothesis that the test-takers are biased is based on the fact that they are provably wrong on certain questions, whereas in the second case, you cannot point to any issue on which either side is provably wrong. However, the patterns of their claims are so improbable given the hypothesis that they have reviewed the evidence impartially, and so likely given the hypothesis of bias, that they nevertheless strongly confirm the latter. What they are saying is simply "too good to be true".
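The inference just described is a Bayesian update. The numbers below are illustrative assumptions rather than anything from the post: a perfectly one-sided pattern is taken to be near-certain under the bias hypothesis, very unlikely under impartial review, with an even prior between the two hypotheses:

```python
# Illustrative, assumed numbers: the two likelihoods are made up to show
# the shape of the inference, not measured.
prior_bias = 0.5              # prior probability of the bias hypothesis
p_pattern_given_bias = 0.95   # motivated cognition almost guarantees a one-sided pattern
p_pattern_given_fair = 0.002  # roughly the chance one side is right across the board

# Bayes' theorem: P(bias | observed pattern)
posterior_bias = (p_pattern_given_bias * prior_bias) / (
    p_pattern_given_bias * prior_bias
    + p_pattern_given_fair * (1 - prior_bias)
)
print(round(posterior_bias, 4))  # 0.9979
```

Even with an even-handed prior, the lopsided likelihood ratio pushes the posterior probability of bias close to certainty, which is the formal counterpart of "too good to be true".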


These kinds of arguments, in which you infer a belief-forming process from a structure of beliefs (i.e. you reverse-engineer the beliefs), have of course always been used. (A salient example is Marxist interpretations of "bourgeois" belief structures, which, Marx argued, supported the bourgeoisie's material interests to a suspiciously high degree.) Recent years have, however, seen a number of developments that should make them less speculative and more reliable and useful.

Firstly, psychological research such as Tversky and Kahneman's has given us a much better picture of the mechanisms by which we acquire beliefs. Experiments have shown that we fall prey to an astonishing list of biases, and have identified the circumstances that are most likely to trigger them.

Secondly, a much greater portion of our behaviour is now being recorded, especially on the Internet (where we spend an increasing share of our time). This obviously makes it much easier to spot suspicious patterns of beliefs.

Thirdly, our algorithms for analyzing behaviour are quickly improving. FiveLabs recently launched a tool that analyzes your big five personality traits on the basis of your Facebook posts. Granted, this tool does not seem completely accurate, and inferring bias promises to be a harder task (since the correlations are more complicated than that between usage of exclamation marks and extraversion, or that between using words such as "nightmare" and "sick of" and neuroticism). Nevertheless, better algorithms and more computing power will take us in the right direction.

 

In my view, there is thus a large untapped potential to infer bias from the structure of people's beliefs, which in turn can be inferred from their online behaviour. In coming posts, I intend to flesh out my ideas on this in more detail. Any comments are welcome and might be incorporated in future posts.

 

* The second and third questions are taken from a paper by Dan Kahan et al., which refers to the US National Academy of Sciences (NAS) assessment of expert scientists' views on these questions. Their study shows that many conservatives don't believe that experts agree on climate change, whereas a fair number of liberals think experts don't agree that nuclear storage is safe, confirming the hypothesis that people let their political preferences influence their empirical beliefs. The assessments of expert consensus on the first and fourth questions are taken from Wikipedia.

Asking people what they think about the expert consensus on these issues, rather than about the issues themselves, is a good idea, since it's much easier to come to an agreement on what the true answer is for the former sort of question. (Of course, you can deny that professors from prestigious universities count as expert scientists, but that would be quite an extreme position that few people hold.)

Comments (34)

Comment author: Agathodaimon 29 August 2014 03:51:56AM 3 points [-]
Comment author: Stefan_Schubert 29 August 2014 10:12:53AM 0 points [-]

Thanks! I do know of this literature but had not seen these articles.

Comment author: Keith_Coffman 31 August 2014 09:37:19PM *  1 point [-]

Interesting stuff. I am all for trying to improve people's reasoning skills, and understanding how particular people think initially is a good place to start, but I'm a bit concerned about the way you talked about knowledge here (and where it comes from).

If we learn that some alleged expert's beliefs are more often than not caused by unreliable processes, we are better off looking for other sources of knowledge.

Frankly, I wouldn't really look to any person as a source of knowledge in the way you seem to be implying here.

Here's how knowledge & experts work: There's a whole bunch of information out there - literally more than any one person could/cares to know - and we simply don't have the time (or often the background) to fully understand certain fields and more importantly to evaluate which claims are true and which aren't. Experts are people who do have the background in a given field, and they usually know what research has been done in their field and can answer questions/make statements with legitimate authority when speaking on the subject with which they are well versed. Once you have a consensus of opinion between many such experts, you have raised the authority of that opinion further because you've reduced the likelihood of one guy misspeaking, making stuff up, being dishonest, etc. Also note that experts talking on subjects outside of their field of study have no more authority than anyone else (though they often are well informed on other subjects) - this is where the argument from authority fallacy comes from, e.g. "Einstein said that the sky is green" ... so what?

I suspect you know all of this already (I don't mean to come off as lecturing too much, just reiterating some baseline stuff)

After all that rambling about experts, the important thing to take away is that the knowledge (and by knowledge in this context I mean being aware of information which corresponds with reality, i.e. the truth) doesn't come from the experts; experts are just the people who go about investigating the truth and report back to the rest of humanity what they've found. In other words, reality is objective and claims should be evaluated based on their evidence, not the person who proposes them.

All of the examples you've used deal with things which actually do have an objective answer, whether or not we have or feasibly can test them empirically. (also, as a side note, that bit about the 50% chance of being true is ridiculous even if you don't have any knowledge going into it - you would simply say "I don't know if these claims are true")

People definitely have biases, and we should be particularly cautious when dealing with any claims that are related to contentious issues. Further, I'd like to stress the point that just because a large majority of the experts in a field say something it doesn't make it true - but it does mean that we should believe that it is true until new information says otherwise, because frankly an expert consensus is one of the highest certainties we can come up with as a species.

I guess the main thing I am trying to say that directly ties into your post is that we shouldn't really care how someone formed their beliefs when evaluating the veracity of a claim; when we should care is:

  • When we suspect that a bias may have led to a false reporting of real information (in which case we would want independent, unbiased research/reporting)

  • When we want to change someone's mind about something

  • When we want to keep someone's faulty & infectious belief structure from propagating to other people (ex. Dark Side Epistemology) by teaching other people critical thinking/rationality and common mistakes like said structure.

Still, figuring out how people think has always been an interesting area of science that is worth pursuing, and the tools/sample size have gotten a lot bigger since the time of case studies. I hope you find more interesting stuff to share.

Comment author: Misovlogos 02 September 2014 09:17:36PM 1 point [-]

"I guess the main thing I am trying to say that directly ties into your post is that we shouldn't really care how someone formed their beliefs when evaluating the veracity of a claim".

This is an absurd proposition on several counts. Firstly, a great deal of utterance meaning can only be recovered relative to a particular context, for it has complex and variable uses shifting within and across contexts, i.e. the exchange of agreement formalising marriage is not a mono-semantical reference to an internal psychological state, but does something only understandable relative to a particular convention of marriage. The upshot being that a condition of intelligibility is contextual awareness. Secondly, it is important to at least be aware of the structures of understanding through which particular intellectual subcultures and traditions give rise to scholarly output (i.e. you can't satisfactorily understand and evaluate a Marxist-Leninist work independently of the sociological reality of post-Cold War vanguard parties, or of modern European intellectual history).

Comment author: Keith_Coffman 03 September 2014 03:03:10AM *  1 point [-]

The meaning of a claim can, in fact, change based on the context. Moreover, the truth of a claim may change with time (for instance, the claim "Elvis is alive" was at one point true and is now false). Also note that, in the context of me making up a simple example of a claim to demonstrate my point, the meaning is likely referring to the famous performer Elvis Presley rather than any person named Elvis.

Thus we can see how there are a few things that we need to keep in mind when we address a claim, much as you have said above. However, the truth of the claim, given that you understand the meaning and you are evaluating it at a particular time, does not depend on the belief structure.

The reason I said "we shouldn't really care how someone formed their beliefs" is because the words that followed are "when evaluating the veracity of a claim," i.e. whether or not it is accurate. This is entirely independent of the person's reasons for making the claim.

Comment author: Misovlogos 03 September 2014 03:32:06AM *  1 point [-]

This appears to in one stroke admit qualification:

"Thus we can see how there are a few things that we need to keep in mind when we address a claim, much as you have said above. However, the truth of the claim, given that you understand the meaning and you are evaluating it at a particular time, does not depend on the belief structure."

And in the next revoke it:

"The reason I said "we shouldn't really care how someone formed their beliefs" is because the words that followed are "when evaluating the veracity of a claim," i.e. whether or not it is accurate. This is entirely independent of the person's reasons for making the claim."

The truthful content of a claim is not independent of the utterances which comprise it, such that an understanding of those utterances is a condition of finding that claim intelligible, and thus of its candidature for truth/falsity.

Comment author: Keith_Coffman 03 September 2014 03:40:04AM 1 point [-]

Let me distill this and see if you follow:

We need to know what a claim is actually claiming - that can depend on context.

Given that you do know what a claim is claiming, its veracity does not depend on context, nor the belief structure of the person behind the claim.

Comment author: Misovlogos 03 September 2014 03:50:47AM *  1 point [-]

I understand exactly what you're saying, but the qualification is divergent from your initial statement, from which this discussion arose, and to which you returned in the second paragraph cited above:

"we shouldn't really care how someone formed their beliefs when evaluating the veracity of a claim"

A condition of evaluating the veracity of an utterance is to register the utterance as intelligible, for which the aforementioned considerations to context are necessary, i.e. 'how someone formed their beliefs'.

Comment author: Keith_Coffman 03 September 2014 03:59:34AM *  1 point [-]

If it is divergent, then this

Let me distill this and see if you follow: We need to know what a claim is actually claiming - that can depend on context. Given that you do know what a claim is claiming, its veracity does not depend on context, nor the belief structure of the person behind the claim.

is what I meant. To provide an example, (which can quite often help in these situations):

I claim that the earth is approximately round.

You don't need to know how I came to that conclusion in order to evaluate my claim.

Had I claimed something a bit more complex, maybe related to the society that I currently live in, then you would probably need to know something about my society in order to see if my claim was correct. But you actually wouldn't need to know how I came to the conclusion - you just need to know what I'm talking about.

Comment author: Misovlogos 03 September 2014 04:18:44AM *  1 point [-]

I feel like this is circular: you state your claim, I state my rebuttal, you concede in qualification, and then you return to your original claim.

I need to know how you came to that conclusion, which is slightly ambiguous here, in the sense that I can't understand the claim independently of the linguistic practice in terms of which your intended meaning is given.

In the case of basic and well-worn facts about the natural world, I think I understand their utterance - although I could be unaware of a particular convention or idiom - because I am already very aware of the linguistic practices which endow them with intersubjective force (if I were a peasant in the Holy Roman Empire, I would doubtless have no idea what you were attempting to convey or do).

Comment author: Keith_Coffman 03 September 2014 04:29:26AM 1 point [-]

Alright, since you could not verify the Earth being round without knowing my belief structure...

2+2 = 4

You don't know my belief structure. Is it true?

I'm not asking you if you know that off the top of your head, I'm asking if you could go out and check to see if it's actually true!

That's what I mean by evaluating a claim - can you verify it? I'm sorry, but it's asinine to say that you cannot verify it because you don't know how I came to the conclusion. You seem to be arguing something about sharing my language as maintaining your point. I'm past that. If you understand the claim, you can test it.

Comment author: Misovlogos 03 September 2014 04:39:12AM *  1 point [-]

I don't really understand what your problem is; to evaluate a claim, you have to find it intelligible, for which you have to know contingent things about the empirical practice of the relevant language-game - which, yes, is pretty much equivalent to the ordinary language statement 'if you understand the claim, you can test it'.

Comment author: Stefan_Schubert 01 September 2014 11:26:02AM 0 points [-]

There's a whole bunch of information out there - literally more than any one person could/cares to know - and we simply don't have the time (or often the background) to fully understand certain fields and more importantly to evaluate which claims are true and which aren't.

In other words, reality is objective and claims should be evaluated based on their evidence, not the person who proposes them.

It would seem to me that these claims aren't consistent. I agree with the first claim, not with the second. It's true that experts' claims are objectively and directly verifiable, but lots of the time checking that direct evidence is not an optimal use of our time. Instead we're better off deferring to experts (which we actually also do, as you say, on a massive scale).

I wrote a very long post on a related theme - "genetic arguments" - some time ago, by the way.

that bit about the 50% chance of being true is ridiculous even if you don't have any knowledge going into it - you would simply say "I don't know if these claims are true"

Well according to the betting interpretation of degrees of belief, this just means that you would, if rational, be willing to accept bets that are based on the claim in question having a 50 % chance of being true (but not bets based on the claim that it has, say, a 51 % chance of being true). But sure, sometimes it can seem a bit contrived to assign a definite probability to claims you know little about.
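The betting interpretation mentioned here can be made concrete with a toy expected-value calculation (the ticket prices and payouts below are hypothetical, chosen only to illustrate the idea):

```python
def expected_profit(credence, price, payout=1.0):
    """Expected profit of buying a ticket that pays `payout` if the claim
    is true, given your degree of belief `credence` in the claim."""
    return credence * payout - price

# With credence 0.5, a $0.50 ticket on the claim is exactly fair:
print(expected_profit(0.5, 0.50))            # 0.0

# A $0.51 price corresponds to odds of 51%; at credence 0.5 it is a losing bet:
print(round(expected_profit(0.5, 0.51), 2))  # -0.01
```

On this reading, assigning a 50% probability just means regarding $0.50 as the break-even price for such a ticket, and rejecting any dearer one.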

I guess the main thing I am trying to say that directly ties into your post is that we shouldn't really care how someone formed their beliefs when evaluating the veracity of a claim; when we should care is:

I don't agree with that. We use others' statements as a source of evidence on a massive scale (i.e. we defer to them). Indeed, experiments show that we do this automatically. But if these statements express beliefs that were produced by unreliable processes - e.g. bias - then that's clearly not a good strategy. Hence we should care very much about whether someone is biased when evaluating the veracity of many claims, for that reason.

Also, as I said, if we find out that someone is biased, then we have little reason to use that person as a source of knowledge.

What I want to stress is the need for cognitive economy. We don't have time to check the direct evidence for different claims lots of the time (as you yourself admit above) and therefore have to use assessments of others' reliability. Knowledge about bias is a vital (but not the only) ingredient in our assessments of reliability, and is hence extremely useful.

Comment author: Keith_Coffman 01 September 2014 04:40:13PM 2 points [-]

I'm making a separate reply for the betting thing, only to try to keep the two conversations clean/simple.

Let's muddle through it: If I have a box containing an unknown (to you) number of gumballs and I claim that there are an odd number of gumballs, you would actually be quite reasonable in assigning a 50% chance to my claim being true.

If I claim that the gumballs in the box are blue, would you say there is a 50% chance of my claim being true?

What if I claimed that I ate pizza last night?

You might have a certain level of confidence in my accuracy and my reliability as a person to not lie to you; and, if someone was taking bets, you would probably bet on how likely I am to tell the truth, rather than assuming there was a 50% chance that I ate pizza last night.

If you then notice that my friend, who was with me last night, claims that I in fact ate pasta, then you have to weigh their reliability against mine, and more importantly you now have to start looking for reasons that we came to different conclusions about the same dinner. And finally, you have to weigh the effort it takes to vet our claims against how much you really care what I ate last night.

So, assuming you are rational, would you bet 50/50 that I ate pizza? Or would you just say "I don't know" and refuse to bet in the first place?

Comment author: Stefan_Schubert 02 September 2014 07:06:31PM *  1 point [-]

This is a bit of a side-track. For the Bayesian interpretation of probability, it's important to be able to assign a prior probability to any event (since otherwise you can't calculate the posterior probability, given some piece of evidence that makes the event more or less probable). They do this using, e.g. the much contested principle of indifference. Some people object to this, and argue along your lines that it's just silly to ascribe probabilities to events we know nothing about. Indeed, the frequentists define an event's probability as the limit of its relative frequency in a large number of trials. Hence, to them, we can't ascribe a probability to a one-off event at all.

Hence there is a huge discussion on this already and I don't think that it's meaningful for us to address it here. Anyway, you do have a point that one should be a bit cautious ascribing definite probabilities to events we know very little about. An alternative can be to say that the probability is somewhere in the interval from x to y, where x and y are some real numbers between 0 and 1.

Comment author: Keith_Coffman 02 September 2014 08:43:38PM 1 point [-]

I agree that it is largely off-topic and don't feel like discussing it further here - I would like to point out that the principle of indifference specifies that your list of possibilities must be mutually exclusive and exhaustive. In practice, when dealing with multifaceted things such as claims about the effects of changing the minimum wage, an exhaustive list of possible outcomes would result in an assignment of an arbitrarily small probability according to the principle of indifference. The end effect is that it's a meaningless assignment and you may as well ignore it.

Comment author: Keith_Coffman 01 September 2014 04:26:19PM *  2 points [-]

There's a whole bunch of information out there - literally more than any one person could/cares to know - and we simply don't have the time (or often the background) to fully understand certain fields and more importantly to evaluate which claims are true and which aren't. In other words, reality is objective and claims should be evaluated based on their evidence, not the person who proposes them.

It would seem to me that these claims aren't consistent. I agree with the first claim, not with the second. It's true that experts' claims are objectively and directly verifiable, but lots of the time checking that direct evidence is not an optimal use of our time. Instead we're better off deferring to experts (which we actually also do, as you say, on a massive scale).

I think we are in agreement but my second statement didn't have the caveats it should have; I doubt you would disagree with the first half, that reality is objective. You disagreed with the second half, that claims should be evaluated based on evidence -- not because it's a false statement, but rather that, in practice, we cannot reasonably be expected to do this for every claim we encounter. I agree. The unstated caveat is that we should trust the experts until there is a reason to think that their claims are poorly founded, i.e. they have demonstrated bias in their work or there is a lack of consensus among experts in a similar field.

I guess the main thing I am trying to say that directly ties into your post is that we shouldn't really care how someone formed their beliefs when evaluating the veracity of a claim; when we should care is:

I don't agree with that. We use others' statements as a source of evidence on a massive scale (i.e. we defer to them). Indeed, experiments show that we do this automatically. But if these statements express beliefs that were produced by unreliable processes - e.g. bias - then that's clearly not a good strategy. Hence we should care very much about whether someone is biased when evaluating the veracity of many claims, for that reason.

Hold on now, you did read my bullets right? When we should care is:

  • When we suspect that a bias may have led to a false reporting of real information (in which case we would want independent, unbiased research/reporting)

Notice that I actually did say suspicion of bias is an exception to the "not caring" statement. In other words, unless we have a reason to suspect a bias, (and/or the second bullet) then we probably won't care. There can be other ways of bad conclusions being drawn; the reason I mention bias is because it is systematic. If we see a trend of a particular person systematically coming to poor conclusions, whatever their reason, then our confidence in their input would fall. On the other hand, experts are human and can make mistakes as well - we should not dismiss someone for being wrong once but for being systematically wrong and unwilling to fix the problem. If we really care about high confidence in something, for instance in the cases where the truth of the claim is important to a lot of people and we want to avoid being misled if there are a few biased opinions, we seek the consensus.

Now, can we get the consensus all of the time? Unfortunately not. Not even most of the time. So what's our next line of defense? Well, one of them is journalistic integrity; frankly I don't even want to go there, but if done properly there are people whose job it is to sort through these very things - but really let's not go there for now. The last line of defense is yourself and the actual work of checking on things yourself.

If a claim is important enough for you to really care whether or not it's accurate, then you have to be willing to do a little bit of digging yourself. Now I realize that the entire point of this post was to avoid just that thing and to have computers do it automagically; but really, if it is important enough for you to check on it yourself, rather than just trusting your regular sources of information, then would you be willing not to check just because a program said that this guy was unbiased?

That might be a bit of an unfair characterization of what you're discussing, but there is a distinction to be made between using online behavior to measure/understand the general population's belief structure and to check for bias in expert opinions.

I think the idea of understanding the population's belief structures would still be extremely useful in its own right though, per my second bullet in the exceptions to the "don't care" statement - particularly if someone wants to change a lot of people's minds about something. If you have a campaign (be it political or social), then understanding how people have structured their beliefs would give you a road map for how best to go about changing them in the way you want. To some extent, this is how it's already been done historically, but it was not done via raw data analysis.

Comment author: Stefan_Schubert 02 September 2014 06:58:14PM *  2 points [-]

I feel that this discussion is getting a bit too multifarious, which no doubt has to do with the very abstract nature of my post. I'm not very happy with it. I should probably have started with more comprehensive and clear examples rather than with an abstract and general discussion like this. Anyway, I do intend to give more examples of reverse engineering of belief structures in the future. Hopefully that'll make it clearer what I'm trying to do. Here's one example of reverse-engineering reasoning I've already given.

I agree that lots of the time we should "do a bit of digging ourselves"; i.e. look at the direct evidence for P rather than on whether those telling us P or not-P are reliable or not. But I also claim that in many cases deference is extremely cost-efficient and useful. You seem to agree with this - good.

...but there is a distinction to be made between using online behavior to measure/understand the general population's belief structure and to check for bias in expert opinions.

Sure. But reverse engineering reasoning can also be used to infer expert bias (as shown in this post).

To some extent, this is how it's already been done historically, but it was not done via raw data analysis.

Yes. People already perform this kind of reverse engineering reasoning, as I said (cf my reference to Marx). What I want to do is to do it more systematically and efficiently.

Comment author: Agathodaimon 30 August 2014 08:43:15AM *  0 points [-]

Gg