Jack comments on Open Thread: May 2010, Part 2 - Less Wrong
I have an idea I'd like to discuss that might be good enough for my first top-level post once it's developed a bit further, but first I'd like to ask whether anyone knows of previous posts in which something similar was discussed. So I'll post a rough outline here as a request for comments.
It's about a potential source of severe and hard-to-detect biases on all sorts of topics where the following conditions apply:
1. It's a matter of practical interest to most people, where it's basically impossible not to have an opinion, so nearly everyone holds strong views on it.
2. The available hard scientific evidence doesn't say much about the subject, so one must instead make do with sparse, incomplete, disorganized, and non-obvious pieces of rational evidence. This of course means that even small and subtle biases can wreak havoc.
3. Factual and normative issues are heavily entangled in the topic. By this I mean that people care deeply about the normative issues involved and view the related factual issues through the heavily biasing lens of whether they lead to consequentialist arguments for or against their favored normative beliefs. (Of course, many people won't have their logic straight, so it's enough that a factual belief is perceived to correlate with a popular or unpopular normative belief for it to be subject to widespread bias in one direction or the other.)
4. Finally, the prevailing opinions on the subject have changed heavily through history, both factually and normatively, and people view the normative beliefs prevailing today as enlightened progress over terrible evils of the past.
These conditions apply to lots of things related to politics, social issues, etc. Now, the exact bias mechanism I have in mind is as follows.
As per assumptions (3) and (4), people are aware, more or less, that opinions on the subject in question were very different in the past, both factually and normatively. Since they support the present norms, they'll naturally believe that the past norms were evil and good riddance to them. They'll chalk that up to "progress": in their minds, the same vaguely defined historical process that brought us science and prosperity in place of superstition and squalor (improvements that are impossible to deny) has also brought us good and enlightened normative beliefs on this issue in place of the former unfair, harmful, or just plain disturbing norms. However, since the area in question, as assumed under (2), is not amenable to a hard-scientific sorting of facts from bullshit, it's not at all clear that the presently prevailing factual beliefs aren't severely biased. In fact, regardless of one's normative beliefs about the topic, there is no rational reason at all to believe that the factual beliefs about it haven't become more remote from reality than they were at some point in the past.
And now we get to the troublesome part, where the biases acquire their ironclad armor: arguing that we've actually been increasingly deluding ourselves factually about some such topic since some point in the past, no matter how good the argument and evidence presented, will, as per (3) and (4), automatically be perceived as an attack on cherished contemporary normative beliefs by a reactionary moral monster. This perception will be true in the sense that correcting the modern false factual beliefs would undermine some widely accepted consequentialist arguments for the modern normative beliefs; but even someone who remains committed to those normative beliefs should want them defended with logic and truth, not bias and falsity. Moreover, since both the normative and the factual historical changes in prevailing beliefs have been chalked up to "progress," the argument will be seen as an attack on progress as such, including the parts that have brought indisputable enrichment and true insight, and thus as sacrilege against all the associated high-status ideas, institutions, and people.
To put it as briefly as possible: the bias is against valid arguments presenting evidence that certain historical changes in factual beliefs have been away from reality and towards greater delusion and bias. It rests on:
a biased moralistic reaction to what is perceived as an attack on the modern cherished normative beliefs, and
a bias in favor of ideas (and the associated institutions and individuals, both contemporary and historical) that enjoy the high status awarded to contributors to "progress."
What should be emphasized is that this results in factual beliefs being wrong and biased, and in the normative beliefs, whatever one's opinion of their ultimate validity, owing much of their support to factually flawed consequentialist arguments.
Does this make any sense? It's just a quick dump of some three-quarters-baked ideas, but I'd like to see if it can be refined and expanded into an article.
So if we think about the epistemological issue space in terms of a Venn diagram, we can imagine the following circles, all of which intersect:
1. Ubiquitous (Outside: non-ubiquitous). Subject areas where prejudgement is ubiquitous are problematic because finding a qualified neutral arbitrator is difficult: nearly everyone is invested in the outcome.
2. Contested (Outside: uncontested). Either there is no consensus among authorities, the legitimacy of the authorities is in question, or there are no relevant authorities. Obviously, not being able to appeal to authorities makes rational belief more difficult.
3. Invested (Outside: non-invested). People have incentives to believe some things rather than others, for reasons other than evidence. When people are invested in beliefs, motivated skepticism is a common result.
3a. Entangled (Outside: untangled). In some cases people can easily be separated from the incentives that lead them to be invested in a belief (for example, when the incentives are financial). But sometimes the incentives are so entangled with the agents and the proposition that there is no easy procedure for removing them.
3ai. Progressive (Outside: traditional). Cases of entangled invested beliefs can roughly and vaguely be divided into those aligned with progress and those aligned with tradition.
So we have a diagram of three concentric circles (invested, entangled, progressive) bisected by a two-circle diagram (ubiquitous, contested).
Now it seems clear that membership in each of these sets makes an issue harder to think about rationally, with one exception: how do beliefs aligned with progress differ structurally from beliefs aligned with tradition? What do we need to do differently for one versus the other? If there is no difference, we might as well address both at the same time.
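To make the structure concrete, here is a minimal sketch in Python; the names and the scoring rule are my own illustrative assumptions, not anything established above. It models the five properties as flags and counts how many difficulty-adding sets an issue belongs to. Note that, per the open question above, the progressive/traditional split is treated as partitioning the entangled cases rather than adding difficulty of its own.

    from dataclasses import dataclass

    @dataclass
    class Issue:
        """An issue tagged with the epistemic properties discussed above."""
        name: str
        ubiquitous: bool   # nearly everyone has prejudged it
        contested: bool    # no consensus among (legitimate) authorities
        invested: bool     # non-evidential incentives to hold some belief
        entangled: bool    # the incentives can't easily be removed
        progressive: bool  # entangled beliefs aligned with "progress" (vs. tradition)

        def difficulty(self) -> int:
            """Count memberships in the difficulty-adding sets.

            Entangled is nested inside invested, so it only counts when
            invested is also true. The progressive/traditional split
            partitions the entangled cases and, on this model, adds no
            difficulty of its own.
            """
            score = int(self.ubiquitous) + int(self.contested) + int(self.invested)
            if self.invested and self.entangled:
                score += 1
            return score

    # Example: a politically charged question with no trusted authorities.
    topic = Issue("some contested social question",
                  ubiquitous=True, contested=True,
                  invested=True, entangled=True, progressive=True)
    print(topic.difficulty())  # 4: a member of every difficulty-adding set

Modeled this way, the progressive flag changes which corrective strategy applies rather than how hard the issue is; whether that is the right way to model it is exactly the question raised above.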
That's an excellent way of putting it, and it brings a lot of clarity to my clumsy exposition! To answer your question: yes, the same essential mechanism I described is at work in both progressive and traditionalist biases. The desire that facts should provide convenient support for normative beliefs causes bias in factual beliefs, regardless of whether those normative beliefs are cherished as achievements of progress or revered as sacred tradition. However, I think there are important practical differences that merit separate consideration.
The problem is that traditionalist and progressive biases don't appear randomly; they are correlated with many other relevant human characteristics. In particular, my hypothesis is that people with formidable rational thinking skills, who have much less difficulty than others in overcoming their biases once those are pointed out and in critically dissecting all sorts of unpleasant questions, tend to have a very good detector for biases and false beliefs of the traditionalist sort, but find it harder to recognize and focus on those of the progressive sort.
What this means in practice is that when exceptionally rational people see some group feeling good about their beliefs because those beliefs are a revered tradition, they'll immediately smell likely biases and turn a critical eye on them. On the other hand, when they see people feeling good about their beliefs because those beliefs are the result of progress over past superstition and barbarism, they are in danger of assuming, without justification, that the necessary critical work has already been done and everything is fine as it is. In the latter situation they will also too easily assume that the only existing controversy is between the rational progressive view and the remnants of past superstition, although reality could be much more complex. This could conceivably translate into support for the mainstream progressive view even where it has strayed into all sorts of biases and falsities.
So, basically, when we consider what biases and false beliefs could be hiding in things that are presently a matter of consensus (things it doesn't even occur to anyone reputable to question), it seems to me that there is a greater chance of finding them in your (3ai) category than in the rest of (3a). I would therefore propose a heuristic that, I believe, has the potential to detect many biases we are unaware of: just as you get suspicious as soon as you see people happy and content with their traditional beliefs, you should also get suspicious whenever you see a consensus that progress has been achieved on some issue, both normatively and factually, where, however, the factual part is not supported by strict hard-scientific evidence and there is a high degree of normative/factual entanglement.
This sounds like an interesting idea to me, and I hope it winds up in whatever fuller exposition of your ideas you end up posting.