Followup to: Contrarian Status Catch-22
Suppose you know someone believes that the World Trade Center was rigged with explosives on 9/11. What else can you infer about them? Are they more or less likely than average to believe in homeopathy?
I couldn't cite an experiment to verify it, but it seems likely that:
- There are persistent character traits which contribute to someone being willing to state a contrarian point of view.
- All else being equal, if you know that someone advocates one contrarian view, you can infer that they are more likely than average to have other contrarian views.
All sorts of obvious disclaimers can be included here. Someone who expresses an extreme-left contrarian view is less likely to have an extreme-right contrarian view. Different character traits may contribute to expressing contrarian views that are counterintuitive vs. low-prestige vs. anti-establishment, etcetera. Nonetheless, it seems likely that you could usefully distinguish a c-factor, a general contrarian factor, in people and beliefs, even though it would break down further on closer examination; there would be a cluster of contrarian people and a cluster of contrarian beliefs, whatever the subclusters within them.
(If you perform a statistical analysis of contrarian ideas and you find that they form distinct subclusters of ideologies that don't correlate with each other, then I'm wrong and no c-factor exists.)
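That test can be sketched concretely. Below is a minimal pure-Python version using invented survey data (the belief labels and responses are hypothetical): if a c-factor exists, contrarian beliefs should correlate positively with one another across respondents, rather than splitting into uncorrelated blocks.

```python
# Hypothetical sketch: testing for a general contrarian factor ("c-factor").
# All data here is invented for illustration.

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)  # assumes neither column is constant

# Rows: respondents; columns: holds a given contrarian belief (1) or not (0).
responses = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
    [0, 1, 0],
]

cols = list(zip(*responses))  # one column per belief
pairs = [(i, j) for i in range(len(cols)) for j in range(i + 1, len(cols))]
avg_r = sum(pearson(cols[i], cols[j]) for i, j in pairs) / len(pairs)

# A clearly positive average inter-belief correlation is weak evidence for a
# c-factor; correlations near zero, or splitting into disjoint ideological
# blocks, would count against it.
print(round(avg_r, 2))
```

On a real dataset you would also inspect the full correlation matrix, not just its average, to see whether the beliefs form the distinct subclusters that would falsify the c-factor hypothesis.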
Now, suppose that someone advocates the many-worlds interpretation of quantum mechanics. What else can you infer about them?
Well, one possible reason for believing in the many-worlds interpretation is that, as a general rule of cognitive conduct, you investigated the issue and thought about it carefully; and you learned enough quantum mechanics and probability theory to understand why the no-worldeater advocates call their theory the strictly simpler one; and you're reflective enough to understand how a deeper theory can undermine your brain's intuition of an apparently single world; and you listen to the physicists who mock many-worlds and correctly assess that these physicists are not to be trusted. Then you believe in many-worlds out of general causes that would operate in other cases - you probably have a high correct contrarian factor - and we can infer that you're more likely to be an atheist.
It's also possible that you thought many-worlds means "all the worlds I can imagine exist" and that you decided it'd be cool if there existed a world where Jesus is Batman, therefore many-worlds is true no matter what the average physicist says. In this case you're just believing for general contrarian reasons, and you're probably more likely to believe in homeopathy as well.
A lot of what we do around here can be thought of as distinguishing the correct contrarian cluster within the contrarian cluster. In fact, when you judge someone's rationality by opinions they post on the Internet - rather than observing their day-to-day decisions or life outcomes - what you're trying to judge is almost entirely their cc-factor, their correct contrarian factor.
It seems indubitable that, measured in raw bytes, most of the world's correct knowledge is not contrarian correct knowledge, and most of the things that the majority believes (e.g. 2 + 2 = 4) are correct. You might therefore wonder whether it's really important to try to distinguish the Correct Contrarian Cluster in the first place - why not just stick to majoritarianism? The Correct Contrarian Cluster is just the place where the borders of knowledge are currently expanding - and not even the whole border, merely the sections where battles are taking place. Why not just be content with the beauty of settled science? Perhaps we're just trying to signal to our fellow nonconformists, rather than really being concerned with truth, says the little copy of Robin Hanson in my head.
My primary personality, however, responds as follows:
- Religion
- Cryonics
- Diet
In other words, even though you would in theory expect the Correct Contrarian Cluster to be a small fringe of the expansion of knowledge, of concern only to the leading scientists in the field, the actual fact of the matter is that the world is *#$%ing nuts and so there's really important stuff in the Correct Contrarian Cluster. Dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup. Not to mention that most people still believe in God. People are crazy, the world is mad. So, yes, if you don't want to bloat up like a balloon and die, distinguishing the Correct Contrarian Cluster is important.
Robin previously posted (and I commented) on the notion of trying to distinguish correct contrarians by "outside indicators" - as I would put it, trying to distinguish correct contrarians, not by analyzing the details of their arguments, but by zooming way out and seeing what sort of general excuse they give for disagreeing with the establishment. As I said in the comments, I am generally pessimistic about the chances of success for this project. Though, as I also commented, there are some general structures that make me sit up and take note; probably the strongest is "These people have ignored their own carefully gathered experimental evidence for decades in favor of stuff that sounds more intuitive." (Robyn Dawes/psychoanalysis, Robin Hanson/medical spending, Gary Taubes/dietary science, Eric Falkenstein/risk-return - note that I don't say anything like this about AI, so this is not a plea I can make for myself!) Mostly, I tend to rely on analyzing the actual arguments; meta should be spice, not meat.
However, failing analysis of actual arguments, another method would be to try and distinguish the Correct Contrarian Cluster by plain old-fashioned... clustering. In a sense, we do this in an ad-hoc way any time we trust someone who seems like a smart contrarian. But it would be possible to do it more formally - write down a big list of contrarian views (some of which we are genuinely uncertain about), poll ten thousand members of the intelligentsia, and look at the clusters. And within the Contrarian Cluster, we find a subcluster where...
...well, how do we look for the Correct Contrarian subcluster?
One obvious way is to start with some things that are slam-dunks, and use them as anchors. Very few things qualify as slam-dunks. Cryonics doesn't rise to that level, since it involves social guesses and values, not just physicalism. I can think of only three slam-dunks off the top of my head:
- Atheism: Yes.
- Many-worlds: Yes.
- "P-zombies": No.
These aren't necessarily simple or easy for contrarians to work through, but the correctness seems as reliable as it gets.
Of course there are also slam-dunks like:
- Natural selection: Yes.
- World Trade Center rigged with explosives: No.
But these probably aren't the right kind of controversy to fine-tune the location of the Correct Contrarian Cluster.
A major problem with the three slam-dunks I listed is that they all seem to have more in common with each other than any of them have with, say, dietary science. This is probably because of the logical, formal character which makes them slam dunks in the first place. By expanding the field somewhat, it would be possible to include slightly less slammed dunks, like:
- Rorschach ink blots: No.
But if we start expanding the list of anchors like this, we run into a much higher probability that one of our anchors is wrong.
So we conduct this massive poll, and we find out that if someone is an atheist and believes in many-worlds and does not believe in p-zombies, they are much more likely than the average contrarian to think that low-energy nuclear reactions (the modern name for cold fusion research) are real. (That is, among "average contrarians" who have opinions on both p-zombies and LENR in the first place!) If I saw this result I would indeed sit up and say, "Maybe I should look into that LENR stuff more deeply." I've never heard of any surveys like this actually being done, but it sounds like quite an interesting dataset to have, if it could be obtained.
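The inference in that paragraph is just a conditional-probability comparison. Here is a minimal sketch on an invented toy survey (the respondents and numbers are made up purely to illustrate the calculation):

```python
# Toy sketch of the anchor-based inference. The survey data is entirely
# invented; it only illustrates comparing P(LENR) against
# P(LENR | all three anchor beliefs).

# Each respondent: (atheist, many_worlds, rejects_p_zombies, believes_LENR)
survey = [
    (1, 1, 1, 1),
    (1, 1, 1, 0),
    (1, 1, 1, 1),
    (1, 0, 1, 0),
    (0, 0, 0, 0),
    (0, 1, 0, 0),
    (1, 0, 0, 1),
    (0, 0, 1, 0),
]

# Base rate of LENR belief among all respondents with an opinion.
lenr_base = sum(r[3] for r in survey) / len(survey)

# Rate among respondents who hold all three anchor positions.
anchored = [r for r in survey if r[0] and r[1] and r[2]]
lenr_given_anchors = sum(r[3] for r in anchored) / len(anchored)

print(f"P(LENR) = {lenr_base:.2f}")
print(f"P(LENR | all three anchors) = {lenr_given_anchors:.2f}")
```

If the anchored subgroup's rate is much higher than the base rate - as in this invented data - that is the "sit up and look into LENR more deeply" signal the text describes.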
There are much more clever things you could do with the dataset. If someone believes most things that atheistic many-worlder zombie-skeptics believe, but isn't a many-worlder, you probably want to know their opinion on infrequently considered topics. (The first thing I'd probably try would be SVD to see if it isolates a "correctness factor", since it's simple and worked famously well on the Netflix dataset.)
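The SVD idea can be sketched without any heavy machinery: power iteration on AᵀA recovers the first right-singular vector of a respondent-by-belief matrix, which is the same leading component `numpy.linalg.svd` would return. The matrix below is invented; a real analysis would use a proper SVD routine on real poll data.

```python
# Minimal pure-Python sketch of extracting a single "correctness factor":
# power iteration on A^T A converges to the first right-singular vector of A.
# In practice you'd call numpy.linalg.svd; the belief matrix here is invented.

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def transpose(m):
    return [list(col) for col in zip(*m)]

def norm(v):
    return sum(x * x for x in v) ** 0.5

# Rows: respondents; columns: agreement (+1) or disagreement (-1) with a belief.
A = [
    [1, 1, 1],
    [1, 1, -1],
    [-1, -1, -1],
    [1, -1, 1],
    [-1, -1, 1],
]

At = transpose(A)
v = [1.0] * len(A[0])        # arbitrary starting guess
for _ in range(100):         # power iteration on A^T A
    w = matvec(At, matvec(A, v))
    n = norm(w)
    v = [x / n for x in w]

# v now approximates the first right-singular vector: the single axis that
# best explains co-variation in answers -- a candidate "correctness factor".
scores = matvec(A, v)        # each respondent's loading on that axis
```

Respondents with extreme loadings in `scores` are the ones the factor singles out; their answers on infrequently considered topics are the ones you would then want to inspect.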
But there are also simpler things we could do using the same principle. Let's say we want to know whether the economy will recover, double-dip or crash. So we call up a thousand economists, ask each one "Do you have a strong opinion on whether the many-worlds interpretation is correct?", and see if the economists who have a strong opinion and answer "Yes" have a different average opinion from the average economist and from economists who say "No".
We might not have this data in hand, but it's the algorithm you're approximating when you notice that a lot of smart-seeming people assign much higher than average probabilities to cryonics technology working.
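The economist poll above reduces to splitting one population by its answer on an unrelated anchor question and comparing group averages. A sketch, with entirely made-up numbers:

```python
# Sketch of the poll-splitting heuristic from the text. All figures are
# invented; only the comparison procedure is the point.

# Each economist: (has strong opinion on many-worlds?, answers "yes"?,
#                  subjective P(economic recovery))
economists = [
    (True, True, 0.70),
    (True, True, 0.65),
    (True, False, 0.40),
    (False, None, 0.50),
    (False, None, 0.55),
    (True, True, 0.60),
]

def avg(xs):
    return sum(xs) / len(xs)

overall = avg([p for _, _, p in economists])
mwi_yes = avg([p for strong, ans, p in economists if strong and ans])
mwi_no = avg([p for strong, ans, p in economists if strong and ans is False])

print(f"all economists: {overall:.2f}")
print(f"strong 'yes' on many-worlds: {mwi_yes:.2f}")
print(f"strong 'no' on many-worlds: {mwi_no:.2f}")
```

A systematic gap between the "yes" group and the overall average would be exactly the kind of informal signal described in the next sentence, made explicit.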
There might be a certain kind of people with high-quality suggestions on what novel things to investigate. (I expect these people have more to offer than lists of beliefs.) We have effective g-factor tests that let us reliably find smart people in the general population, and construct groups of especially smart people. This post suggests that there might be a similarly effective way to estimate people's rationality, a "cc-factor": the ability to find and adopt correct beliefs even when they go against conventional wisdom, not necessarily as a skill, but as a predisposition. This may go a long way towards building the strength of the rationality cause.