Followup to: Contrarian Status Catch-22
Suppose you know someone believes that the World Trade Center was rigged with explosives on 9/11. What else can you infer about them? Are they more or less likely than average to believe in homeopathy?
I couldn't cite an experiment to verify it, but it seems likely that:
- There are persistent character traits which contribute to someone being willing to state a contrarian point of view.
- All else being equal, if you know that someone advocates one contrarian view, you can infer that they are more likely than average to have other contrarian views.
All sorts of obvious disclaimers can be included here. Someone who expresses an extreme-left contrarian view is less likely to hold an extreme-right contrarian view. Different character traits may contribute to expressing contrarian views that are counterintuitive vs. low-prestige vs. anti-establishment, etcetera. Nonetheless, it seems likely that you could usefully distinguish a c-factor, a general contrarian factor, in people and beliefs, even though it would break down further on closer examination; there would be a cluster of contrarian people and a cluster of contrarian beliefs, whatever the finer subclusters within them.
(If you perform a statistical analysis of contrarian ideas and you find that they form distinct subclusters of ideologies that don't correlate with each other, then I'm wrong and no c-factor exists.)
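To make that test concrete: a minimal sketch of the factor-analysis check, assuming the poll comes back as a respondent-by-belief matrix of yes/no answers. The data below is random placeholder; everything depends on running it on real survey responses.

```python
import numpy as np

# Placeholder survey: rows = respondents, columns = contrarian beliefs,
# entries = 1 if the respondent endorses the belief, else 0.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(1000, 20)).astype(float)

# Correlation matrix across beliefs, and its spectrum.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # descending order

# A single dominant component is evidence for a general c-factor;
# several comparable components splitting the beliefs into mutually
# uncorrelated blocks would mean distinct ideological subclusters
# and no c-factor.
share = eigenvalues[0] / eigenvalues.sum()
print(f"variance explained by first component: {share:.1%}")
```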
Now, suppose that someone advocates the many-worlds interpretation of quantum mechanics. What else can you infer about them?
Well, one possible reason for believing in the many-worlds interpretation is that, as a general rule of cognitive conduct, you investigated the issue and thought about it carefully; and you learned enough quantum mechanics and probability theory to understand why the no-worldeaters advocates call their theory the strictly simpler one; and you're reflective enough to understand how a deeper theory can undermine your brain's intuition of an apparently single world; and you listen to the physicists who mock many-worlds and correctly assess that these physicists are not to be trusted. Then you believe in many-worlds out of general causes that would operate in other cases - you probably have a high correct contrarian factor - and we can infer that you're more likely to be an atheist.
It's also possible that you thought many-worlds means "all the worlds I can imagine exist" and that you decided it'd be cool if there existed a world where Jesus is Batman, therefore many-worlds is true no matter what the average physicist says. In this case you're just believing for general contrarian reasons, and you're probably more likely to believe in homeopathy as well.
A lot of what we do around here can be thought of as distinguishing the correct contrarian cluster within the contrarian cluster. In fact, when you judge someone's rationality by opinions they post on the Internet - rather than observing their day-to-day decisions or life outcomes - what you're trying to judge is almost entirely cc-factor.
It seems indubitable that, measured in raw bytes, most of the world's correct knowledge is not contrarian correct knowledge, and most of the things that the majority believes (e.g. 2 + 2 = 4) are correct. You might therefore wonder whether it's really important to try to distinguish the Correct Contrarian Cluster in the first place - why not just stick to majoritarianism? The Correct Contrarian Cluster is just the place where the borders of knowledge are currently expanding - and not even all of that, but merely the sections of the border where battles are taking place. Why not just be content with the beauty of settled science? Perhaps we're just trying to signal to our fellow nonconformists, rather than really being concerned with truth, says the little copy of Robin Hanson in my head.
My primary personality, however, responds as follows:
- Religion
- Cryonics
- Diet
In other words, even though you would in theory expect the Correct Contrarian Cluster to be a small fringe of the expansion of knowledge, of concern only to the leading scientists in the field, the actual fact of the matter is that the world is *#$%ing nuts and so there's really important stuff in the Correct Contrarian Cluster. Dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup. Not to mention that most people still believe in God. People are crazy, the world is mad. So, yes, if you don't want to bloat up like a balloon and die, distinguishing the Correct Contrarian Cluster is important.
Robin previously posted (and I commented) on the notion of trying to distinguish correct contrarians by "outside indicators" - as I would put it, trying to distinguish correct contrarians not by analyzing the details of their arguments, but by zooming way out and seeing what sort of general excuse they give for disagreeing with the establishment. As I said in the comments, I am generally pessimistic about the chances of success for this project. Though, as I also commented, there are some general structures that make me sit up and take note; probably the strongest is "These people have ignored their own carefully gathered experimental evidence for decades in favor of stuff that sounds more intuitive." (Robyn Dawes/psychoanalysis, Robin Hanson/medical spending, Gary Taubes/dietary science, Eric Falkenstein/risk-return - note that I don't say anything like this about AI, so this is not a plea on my own behalf!) Mostly, I tend to rely on analyzing the actual arguments; meta should be spice, not meat.
However, failing analysis of actual arguments, another method would be to try to distinguish the Correct Contrarian Cluster by plain old-fashioned... clustering. In a sense, we do this in an ad hoc way any time we trust someone who seems like a smart contrarian. But it would be possible to do it more formally - write down a big list of contrarian views (some of which we are genuinely uncertain about), poll ten thousand members of the intelligentsia, and look at the clusters (a sketch of this step appears below). And within the Contrarian Cluster, we find a subcluster where...
...well, how do we look for the Correct Contrarian subcluster?
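Before answering that, here is the raw clustering step from the previous paragraph made concrete - a minimal sketch, assuming the poll yields a binary respondent-by-view matrix and using off-the-shelf k-means. The cluster count and all the data here are invented placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical poll: rows = members of the intelligentsia, columns =
# contrarian views, entries = 1 (endorse) / 0 (reject).  Placeholder data.
rng = np.random.default_rng(0)
poll = rng.integers(0, 2, size=(10_000, 30)).astype(float)

# Cluster respondents by their pattern of contrarian views.  The number
# of clusters (5) is an arbitrary choice for illustration.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(poll)

# Each cluster's profile: the fraction of its members endorsing each view.
# On real data, one of these profiles would be the Contrarian Cluster.
for k in range(5):
    members = poll[labels == k]
    print(f"cluster {k}: n={len(members)}, profile={members.mean(axis=0).round(2)}")
```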
One obvious way is to start with some things that are slam-dunks, and use them as anchors. Very few things qualify as slam-dunks. Cryonics doesn't rise to that level, since it involves social guesses and values, not just physicalism. I can think of only three slam-dunks off the top of my head:
- Atheism: Yes.
- Many-worlds: Yes.
- "P-zombies": No.
These aren't necessarily simple or easy for contrarians to work through, but the correctness seems as reliable as it gets.
Of course there are also slam-dunks like:
- Natural selection: Yes.
- World Trade Center rigged with explosives: No.
But these probably aren't the right kind of controversy to fine-tune the location of the Correct Contrarian Cluster.
A major problem with the three slam-dunks I listed is that they all seem to have more in common with each other than any of them have with, say, dietary science. This is probably because of the logical, formal character which makes them slam dunks in the first place. By expanding the field somewhat, it would be possible to include slightly less slammed dunks, like:
- Rorschach ink blots: No.
But if we start expanding the list of anchors like this, we run into a much higher probability that one of our anchors is wrong.
So we conduct this massive poll, and we find out that if someone is an atheist and believes in many-worlds and does not believe in p-zombies, they are much more likely than the average contrarian to think that low-energy nuclear reactions (the modern name for cold fusion research) are real. (That is, among "average contrarians" who have opinions on both p-zombies and LENR in the first place!) If I saw this result I would indeed sit up and say, "Maybe I should look into that LENR stuff more deeply." I've never heard of any surveys like this actually being done, but it sounds like quite an interesting dataset to have, if it could be obtained.
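The computation behind that hypothetical result is just a conditional frequency. A minimal sketch, with invented column names and placeholder data:

```python
import numpy as np

# Invented column indices for illustration:
# 0 = atheism, 1 = many-worlds, 2 = p-zombies, 3 = LENR is real.
ATHEIST, MWI, ZOMBIES, LENR = 0, 1, 2, 3

def lenr_lift(poll: np.ndarray) -> tuple[float, float]:
    """Compare P(LENR) among anchor-passers to the overall base rate.
    Real data would first drop respondents with no opinion on a column."""
    base_rate = poll[:, LENR].mean()
    anchors = (poll[:, ATHEIST] == 1) & (poll[:, MWI] == 1) & (poll[:, ZOMBIES] == 0)
    return base_rate, poll[anchors, LENR].mean()

rng = np.random.default_rng(0)
poll = rng.integers(0, 2, size=(10_000, 4))   # placeholder answers
base, cond = lenr_lift(poll)
print(f"P(LENR) = {base:.2f}; P(LENR | passes anchors) = {cond:.2f}")
```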
There are much more clever things you could do with the dataset. If someone believes most things that atheistic many-worlder zombie-skeptics believe, but isn't a many-worlder, you probably want to know their opinion on infrequently considered topics. (The first thing I'd probably try would be singular value decomposition (SVD), to see if it isolates a "correctness factor", since it's simple and worked famously well on the Netflix Prize dataset; a sketch follows.)
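A minimal sketch of that SVD step, assuming (unrealistically) a complete response matrix - the Netflix Prize entries had to handle missing ratings, which plain SVD does not:

```python
import numpy as np

# Placeholder poll matrix (respondents x beliefs); real data goes here.
rng = np.random.default_rng(0)
poll = rng.integers(0, 2, size=(10_000, 30)).astype(float)

# Center each belief column, then take the SVD of the response matrix.
centered = poll - poll.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Each respondent's score on the first factor, and each belief's loading
# on it.  If a "correctness factor" exists, it should appear as one of
# the leading factors, loading positively on the slam-dunk anchors
# (atheism, many-worlds) and negatively on the anti-anchors (p-zombies,
# 9/11 demolition).
respondent_scores = U[:, 0] * S[0]
belief_loadings = Vt[0]
print("loadings on first factor:", belief_loadings.round(2))
```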
But there are also simpler things we could do using the same principle. Let's say we want to know whether the economy will recover, double-dip or crash. So we call up a thousand economists, ask each one "Do you have a strong opinion on whether the many-worlds interpretation is correct?", and see if the economists who have a strong opinion and answer "Yes" have a different average opinion from the average economist and from economists who say "No".
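As a sketch, with invented numbers, the comparison would look like:

```python
import numpy as np

# Invented data for 1000 economists:
# forecast: 0 = recover, 1 = double-dip, 2 = crash
# strong_yes_mwi: 1 if they hold a strong opinion and answered "yes"
rng = np.random.default_rng(0)
forecast = rng.integers(0, 3, size=1000)
strong_yes_mwi = rng.integers(0, 2, size=1000)

def distribution(x):
    """Fraction of answers in each forecast category."""
    return (np.bincount(x, minlength=3) / len(x)).round(2)

print("all economists:  ", distribution(forecast))
print("MWI 'yes' subset:", distribution(forecast[strong_yes_mwi == 1]))
print("everyone else:   ", distribution(forecast[strong_yes_mwi == 0]))
```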
We might not have this data in hand, but it's the algorithm you're approximating when you notice that a lot of smart-seeming people assign much higher than average probabilities to cryonics technology working.
Another little essay on MWI. tl;dr: Eliezer is wrong on the Internet! Won't somebody please think of the mind children?...
I have spent many years studying and thinking about interpretations of quantum theory. Eliezer's peculiar form of dogmatism about many worlds is a new twist. I have certainly encountered dogmatic many-worlds supporters before. What's exceptional is Eliezer's determination to make belief in many worlds a benchmark test of rationality in general. He's not just dogmatic about it as a question of physics, but now he even calls it a rationalist "slam-dunk", a thing which should be obvious to any sufficiently informed clear thinker, and which can be used to rank a person's rationality.
My position, I suppose, is that it is Eliezer who is insufficiently informed. He has always been a wavefunction realist - a believer in the existence of the wavefunction - and simply went from a belief in collapse of the wavefunction, to a belief in no collapse. If that was the only choice, he'd have a point. But it is far from being the only choice.
One thing I wonder about (when I adopt the perspective of trying to draw lessons regarding general rationality from this affair) is whether he ought to regard himself as culpable for this error, or whether ignorance is a valid excuse. Yesterday I was promoting "quantum causal histories" as an example of an alternative class of interpretation. Those are rather obscure papers. He's certainly not at fault for not having heard of them. Yet he surely should have heard of John Cramer's transactional interpretation, and there's no trace of it in his writings on this topic.
I suspect that another factor in his thinking is a belief in the minimalism of many-worlds. All you need is the wavefunction. You even get to remove something from the theory - the collapse postulate. But the complexities reenter - and the handwaving begins - when you try to find the worlds in the wavefunction. Naive onlookers to this discussion may think of a world as a point in configuration space. But this is not the usual notion of "world" in the technical literature on many worlds. Worlds are themselves represented by lesser wavefunctions: components of the total wavefunction, or tensor factors thereof. It is a chronic question in many-worlds theory as to which such components are the worlds, or whether one even needs to specify a particular algebraic breakdown of the universal wavefunction as the decomposition corresponding to reality. I don't even know what Eliezer's position on this debate is. Is a world a point in configuration space? Is it a blob of amplitude stretching across a small contiguous region of configuration space? What about a wavefunction component which stretches across most of configuration space, and has multiple peaks - is it legitimate or not to treat that as a world? Eliezer is impressed by Robin Hanson's mangled worlds proposal; should we take Robin's definition of worlds as the one to use, if we wish to understand his thought?
I don't object to many-worlds advocates having their theoretical disputes; certainly better that they have them, than that their concepts should remain fuzzy and undeveloped! But I find it very hard to justify this harsh advocacy of many-worlds as obviously superior when the theoretical details of the interpretation remain so confused. The confusion, the unfinished work, seems comparable to that still existing with respect to the zigzag interpretations like Cramer's. And since a zigzag interpretation only requires a single, basically classical space-time, and does away with the wavefunction entirely except as the sort of probability distribution appropriate to a situation in which causality runs backwards and forwards in time, it has its own claim to elegance and minimalism.
My own position is the anodyne one that Further Research Is Required, and that theoretical pluralism should be tolerated. I respect the rigor of Bohmian mechanics; I don't believe it is the truth, but working on it might lead to the truth, and the same goes for a number of other interpretations. I tilt towards single-world interpretations because I anticipate that in most completed many-worlds theories (many-worlds theories in which the confusions have truly been resolved, by an exact theoretical framework), you will be able to find self-contained histories, akin to Bohmian trajectories but perhaps metaphorically "thicker" in cross-section. And my ultimate message for many-worlds enthusiasts is that the apparent simplicity of many worlds is an illusion because of the theoretical work necessary to finish the job. You will end up either adding lots of extra structure, or compromising on objectivity and theoretical exactness (e.g. by being blasé about what is and is not a "world").
Personally, I'm deferring a decision about many-worlds until such time as I actually need to make one (probably never), because forming a real opinion would take a large time investment.
EY's bringing it up repeatedly as a rationality test worries me a teeny bit. Not because I disagree with him about the particulars, but because bringing up one issue repeatedly in conversations where it seems tangential is a key indicator of schizophrenia, or at least impending crankism. I worry about that with extremely high-g people, particularly when they're ar...