I don't understand how many worlds can be a slam dunk for someone who doesn't understand all the math behind quantum physics.
If a significant number of people who do understand this math believe that many-worlds is wrong, then no matter how convincing I find your non-mathematical arguments in favor of many-worlds, isn't it rational for me to still assign a significant probability to the possibility that many-worlds isn't correct?
Doesn't physics all come down to math, meaning that people who can't follow the math should put vastly more weight on polls of experts than on their own imperfect understanding of the field?
Along with 99% of humanity, my IQ isn't high enough for me to ever understand the math behind quantum physics.
This may be a tangential point, but I need to say this somewhere: claims like this are quite likely false. (Notice how rarely they're accompanied by justification.)
Quantum mechanics is new (in the scheme of things). So, of course, we see right now that the only people who understand it are very smart people: the ones who first thought of it and their students and associates. But that doesn't mean that no one else can understand it; it just hasn't had time to trickle down into everyone's general education yet.
300 years ago, you could have replaced "quantum" by "classical" in that sentence, and it would have seemed reasonable: at that time, only a few dozen people in the world understood the differential and integral calculus. Yet now this kind of mathematics is taught regularly to hordes of IQ 110 college freshmen, and (I expect) is considered elementary and routine by a majority of LW readers. Taking an Outside View approach here, I don't see any reason not to expect that the same trend will continue into the future, with quantum mechanics eventually becoming...
You should look at the SAT math test to get an estimate of the percentage of Americans for whom "linear algebra over complex vector spaces" could ever be simple.
Going back further, once upon a time literacy was an elite skill. Now we take it for granted, but how much do you really think our IQs have improved in the last couple thousand years?
A lot! Western IQ scores have improved by ~30 points since IQ tests were invented around a century ago. And literacy is probably part of a positive feedback loop that historically boosted IQ: increased literacy improves IQ, and higher IQ increases literacy. That feedback loop likely hasn't been going for two thousand years, but it's been going for at least two hundred years, which is more than enough time for a feedback loop to go nuts.
Still, though I suspect IQs have improved massively in the last couple thousand years, I definitely agree with your comment. I think the rise in average IQ over time doesn't mean we've gotten qualitatively smarter, more that our environment has - and one aspect of that is the trickle-down effect of mental tools like literacy, classical mechanics, and quantum mechanics.
Cf. above: you need an above-average IQ to learn calculus in spite of the American educational system. We have no idea what genetic IQ is required to learn calculus.
I've personally lowered the IQ needed to understand Bayes's Theorem by putting an explanation online, and if I rewrote the page today I bet I could drop it another 10 points.
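For anyone who wants to see how little machinery is actually involved, the whole theorem fits in a few lines of Python. A minimal sketch with illustrative mammography-style numbers (not necessarily the page's own):

```python
# Bayes' theorem on illustrative numbers: of the patients who test
# positive, how many actually have the disease?
prior = 0.01        # P(disease): 1% base rate
sensitivity = 0.80  # P(positive | disease)
false_pos = 0.096   # P(positive | no disease)

p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive) = {posterior:.3f}")  # ~0.078, far below 0.80
```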
From what I've seen of the actual math, if you can understand the content of a typical Calc 3 course (which covers multivariable calculus), you can understand the math of quantum mechanics. If you can get an engineering degree (which is not an easy feat, but it's something an awful lot of people manage to do), you should be smart enough to do quantum mechanics calculations.
Another little essay on MWI. tl;dr : Eliezer is wrong on the Internet! Won't somebody please think of the mind children?...
I have spent many years studying and thinking about interpretations of quantum theory. Eliezer's peculiar form of dogmatism about many worlds is a new twist. I have certainly encountered dogmatic many-worlds supporters before. What's exceptional is Eliezer's determination to make belief in many worlds a benchmark test of rationality in general. He's not just dogmatic about it as a question of physics, but now he even calls it a rationalist "slam-dunk", a thing which should be obvious to any sufficiently informed clear thinker, and which can be used to rank a person's rationality.
My position, I suppose, is that it is Eliezer who is insufficiently informed. He has always been a wavefunction realist - a believer in the existence of the wavefunction - and simply went from a belief in collapse of the wavefunction, to a belief in no collapse. If that was the only choice, he'd have a point. But it is far from being the only choice.
One thing I wonder about (when I adopt the perspective of trying to draw lessons regarding general rationality from this affair) ...
But the complexities reenter - and the handwaving begins - when you try to find the worlds in the wavefunction. ... It is a chronic question in many-worlds theory as to which such components are the worlds, or whether one even needs to specify a particular algebraic breakdown of the universal wavefunction as the decomposition corresponding to reality.
Now, I'm no quantum expert, but this seems to me to be a criticism based entirely on the name; “It's called many-worlds, so where are the worlds?” Fine. I hereby rename the theory to “much-world”.
Take “The Conscious Sorites Paradox” (thanks to Zack_M_Davis for the link) and s/person/world/.
Some views are contrarian in society at large, but dominant views in particular subcultures. Libertarian views aren't dominant in economics, but they're (correct me if I'm wrong) dominant in the economics department at George Mason where Robin Hanson works. Does that make Robin a contrarian, or a conformist?
I bet that most 9/11 conspiracy theorists have a lot of friends who are also conspiracy theorists. Sometimes supporting one contrarian theory (the Jews were behind 9/11) is a way of aligning with a larger, locally-conformist narrative (the Jews are behind everything).
Next thing you know, someone will say Jews are behind the Singularity Institute... um... uh-oh.
Yes, if nearly all "contrarian" views are just conformity with local groups, then there will be no c-factor - just a lot of ideologies that don't correlate with each other. This was alluded to in the OP.
This could be true of the general population but not of academia/intelligentsia, in which case polling 10,000 respondents there might still work.
I've read that in the 19th century, there were many people who said that iron ships couldn't possibly float. If you take a few seconds to do the math, you can quickly verify that iron ships can float. That seems like a good slam-dunk to me. Is there a modern equivalent?
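The few seconds of math is just Archimedes' principle: an iron hull floats as long as the water it can displace outweighs the iron in it. A minimal sketch with made-up hull numbers:

```python
# Archimedes' principle: a ship floats if the water its hull can displace
# outweighs the ship itself. Illustrative numbers, not a real hull design.
rho_water = 1000.0  # density of water, kg/m^3
rho_iron = 7870.0   # density of iron, kg/m^3

enclosed_volume = 1.0  # m^3 the hull can displace before swamping
iron_volume = 0.05     # m^3 of solid iron in the plating

ship_mass = rho_iron * iron_volume            # ~394 kg of iron
displaced_mass = rho_water * enclosed_volume  # 1000 kg of water at full draft
print(ship_mass < displaced_mass)             # True: the iron ship floats
```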
Some similar, not-quite-as-obvious former popular opinions:
(Interestingly, the much older "sailboats can never sail upwind" seems more plausible to me than any of these.)
Contemporary unpopular slam-dunk-yes views:
There are culture-specific slam-dunks. I noticed, while traveling in China, particularly during an episode when the US bombed a Chinese embassy and when discussing the Tiananmen Square massacre, that most of the Chinese people who spoke openly with me (just a few) simultaneously believed their government is corrupt and untrustworthy, yet believed everything it said about those incidents. Numerous Russians I've spoken to have a blindness reconciling their views ...
"(Interestingly, the much older 'sailboats can never sail upwind' seems more plausible to me than any of these.)"
Wind-powered directly downwind faster than the wind vehicle!
Robin would've had to update pretty fast to update faster than I updated. I'm like, "Tao says it works? OK."
I don't really find it very counterintuitive. The different velocities of wind and ground are supplying free energy. Turns out you can grab a bunch of it and move faster than the wind? I don't see how that would violate thermodynamics or conservation of momentum. I haven't even checked the math; it just doesn't seem all that unlikely in the first place.
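To make that hand-wave a bit more concrete, here's the standard idealized bookkeeping, as a sketch with hypothetical speeds and no losses:

```python
# In the cart's frame, the ground streams backward at v and the air at v - w.
# The wheels harvest power from the fast stream (the ground) and a propeller
# spends it pushing on the slow stream (the air). Lossless idealization.
def thrust_to_drag_ratio(v, w):
    """Ideal ratio of propeller thrust to wheel braking force at cart speed v."""
    return v / (v - w)  # power per unit force: harvested at v, spent at v - w

v, w = 15.0, 10.0  # cart at 15 m/s, wind at 10 m/s (hypothetical numbers)
print(thrust_to_drag_ratio(v, w))  # 3.0 > 1: net forward force is available
```

Since the ratio exceeds 1 whenever v > w, thrust can exceed the wheel drag so long as real losses stay inside that margin; nothing in thermodynamics or momentum conservation is violated.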
Hmm. I think you're right. Oops. You can sail downwind faster than the wind. I tried to write up a detailed proof of why it wouldn't work, and it worked.
Numerous Russians I've spoken to have a blindness reconciling their views on Stalin with their views on Putin (the same attributes that made Stalin bad make Putin good).
...what? I'm Russian, not much a fan of Putin, but this statement seems insane to me. Here's what made Stalin bad. Putin doesn't even begin to compare.
The only thing I'd predict from knowing someone believes in many-worlds is that they like science fiction. This isn't because anything might be happening somewhere; it's because many-worlds is a much more interesting universe.
I don't think that QM, P-Zombies and Many Worlds are good examples at all. Frankly, I tend to think that the use of nuance, the use of careful distinctions between proposed hypotheses rather than the endorsement of slogans, is a very good sign. For precisely this reason, however, I thought that http://econlog.econlib.org/archives/2009/12/what_do_philoso.html was a fantastically useless survey. If you summarize a philosophical debate in a slogan and ask people for a 'yes/no' answer, you should expect the best thinkers to be able to explain exactly what would cause people to endorse each side and how those causes establish or fail to establish correspondence to reality.
The problem with this, of course, is that it motivates fake nuance. The endless proliferation of fake nuance is one of the major products produced (and almost exclusively consumed) by academic philosophers.
"I thought that http://econlog.econlib.org/archives/2009/12/what_do_philoso.html was a fantastically useless survey."
So did I, for the most part. The best response to some of those questions would be "Sod off. The mistake is asking that question in the first place, and neither answer is meaningful. Reality just doesn't carve there."
Why would you expect someone who has a high correct contrarian factor in one area to have it in another?
Bad beliefs do seem to travel in packs (according to Penn and Teller, and Eliezer, anyhow). Lots of alien conspiracy nuts are government conspiracy nuts as well. That's not surprising, because bad beliefs are easy to pick up and they seem to be tribally maintained by the same tribe that maintains other bad beliefs.
But good beliefs? Really good ones? They're difficult. They take years. If you don't know of Less Wrong (or similar) as a source of good beliefs, you probably only have one set of good beliefs in your narrow area (like economics or quantum physics but not both). And you know what? You shouldn't be expected to have more, if that's the one set that you use to affect the world.
Barring only a few people with interdisciplinary interests, I would expect that the economists who are the best at predicting the stock market would answer “What's a many-worlds interpretation?” to Eliezer's question.
"Robin previously posted (and I commented) on the notion of trying to distinguish correct contrarians by "outside indicators" - as I would put it, trying to distinguish correct contrarians, not by analyzing the details of their arguments, but by zooming way out and seeing what sort of general excuse they give for disagreeing with the establishment. As I said in the comments, I am generally pessimistic about the chances of success for this project"
I think the method that was taught in my family is better: become an expert on one or more...
If you only expect to find one empirically correct cluster of contrarian beliefs, then you will most likely find only one, regardless of what exists.
Treating this as a clustering problem, we can extract common clusters of beliefs from the general contrarian collection and determine degrees of empirical correctness. Presupposing a particular structure will introduce biases on the discoveries you can make.
(The first thing I'd probably try would be SVD to see if it isolates a "correctness factor", since it's simple and worked famously well on the Netflix dataset.)
I'd like to give people quizzes to identify cognitive biases, perform SVD (or factor analysis) on the results, and see if the first dimension matches up with "liberal / conservative".
PS - The technique referred to as SVD by the Netflix contestants is actually closer to multiple linear regression. True SVD is equivalent to principal component analysis, so that the first dimension is the one with the greatest variance...
Clusters of opinion may be accidental, e.g. many lemmings follow Eliezer Yudkowsky, who is correct on three topics and wrong on two. Or some other pundit. I think such accidental correlations will drown out whatever useful signal you were hoping to uncover by factor analysis. It's a fishy endeavor anyway; it smells like determining truth by popular vote spiced up with nifty math. What if all smart people start using your algorithm? You could get some nasty herd effects...
There might be a certain kind of people with high-quality suggestions on what novel things to investigate. (I expect these people have more to offer than lists of beliefs.) We have effective g-factor tests that allow us to reliably find smart people in the general population, and to construct groups of especially smart people. This post suggests that there might be a similarly effective way to estimate people's rationality, a "cc-factor", the ability to find and adopt correct beliefs even if they go against conventional wisdom, not necessarily as a skill...
I don't think that disbelief in P-zombies belongs on this list. Or, it belongs on the list only in the sense in which Chalmers himself disbelieves in P-zombies. Chalmers doesn't think that P-zombies might actually exist in the real world. Rather, he thinks that P-zombies could have existed had the universe been governed by different laws. In other words, his "belief" in P-zombies is an artifact of how he assigns truth-values to counterfactuals.
I agree that he makes these truth-value assignments in a wrong way. But he doesn't really believe in...
A better anchoring method would be to use predictions about the future. How about finding 50 contrarian predictions about what will happen in 2010 and using them as controls for 50 other contrarian questions for which we will never have 100%-sure answers?
"Though, as I also commented, there are some general structures that make me sit up and take note; probably the strongest is 'These people have ignored their own carefully gathered experimental evidence for decades in favor of stuff that sounds more intuitive.' (Robyn Dawes/psychoanalysis, Robin Hanson/medical spending, Gary Taubes/dietary science, Eric Falkenstein/risk-return [...])"
Eric Falkenstein wrote in 2016 that he converted to Christianity and doesn't believe in evolution via natural selection. This doesn't automatically mean that he was a crank...
Hunch.com does this sort of data mining on their users, and they have lots of users. It seems it would be pretty easy for them to do this sort of analysis for questions raised in this post, much as they have with How Food Preferences Vary by Political Ideology and Mac vs PC People: Personality Traits & Aesthetic/Media Choices.
At the moment the comment I'm replying to is at -1 karma.
Now, even if PlaidX is on the wrong side of a "slam-dunk" issue here, I question whether it's right to downvote this, considering that he's really just asking for an explanation of someone's reasoning.
That explanation seems a lot less Rube Goldbergian than a sinister conspiracy rigging with explosives a side building that wasn't even hit by a plane. What on Earth would have been the point? Which of the conspiracy's goals would have failed if building 7 hadn't fallen down? All you're doing here is learning a valuable lesson about the ability of conspiracy theorists to present evidence that looks "around that convincing" in favor of anything. Recalibrate your sensors for how much evidence something that looks "around that convincing" actually amounts to.
What's the difference between a contrarian and a crackpot?
The degree of disrespect the speaker desires to convey to the labelled individual.
"I've never heard of any surveys like this actually being done, but it sounds like quite an interesting dataset to have, if it could be obtained."
This would be a really fun dataset! See how many dimensions it reduces to and what the bases are.
Yes, collect data! You might even be able to make common cause with contrarians you disagree with in the collection of this data.
Short heuristic:
If you disagree with James Randi on many things about which he is outspoken, you're probably crazy. ;)
The community currently going under the name "skeptics" usually attacks easy targets that are already unpopular with the intelligentsia, like homeopathy. Let's see what Joe Nickell thinks about many-worlds first. Shermer and Penn & Teller have failed similar tests.
EDIT: Being a skeptic is just as easy as (in fact, it's the opposite of) being a contrarian, and the test of whether a skeptic's cognition provides Bayes-fuel is whether they fail to critique contrarian theories that are correct. This deserves a post which I might or might not have time to do.
I think Richard Dawkins passes the many-worlds test (8:36), at least if you allow for characteristic British understatement and a lack of training in physics.
I'm not saying I believe in the Mars effect, I'm saying that it looks to me like CSICOP found it more important to refute the enemy position than to behave cleanly throughout. Is that data worth defiance?
Jim Lippard reviewed the whole affair and concluded that CSICOP had transgressed; I found the review convincing.
The term "contrarian" is rather vague about who the disagreements are with.
There are many optical illusions, etc, where what most people think is wrong. The key thing is not simply disagreeing with a majority, but disagreeing with experts in the area - and not just any experts (lest we recognise priests as authorities on theology) - real experts.
A decade late to the party, I'd like to join those skeptical of EY's use of many-worlds as a slam-dunk test of contrarian correctness. Without going into the physics (for which I'm unqualified), I have to make the obvious general objection that it is sophomoric for an amateur in an intellectual field - even an extremely intelligent and knowledgeable one - to claim a better understanding than those who have spent years studying it professionally. It is of course possible for an amateur to have an insight professionals have missed, but very rarely...
You're leaning heavily on the concept "amateur", which (a) doesn't distinguish "What's your level of knowledge and experience with X?" and "Is X your day job?", and (b) treats people as being generically "good" or "bad" at extremely broad and vague categories of proposition like "propositions about quantum physics" or "propositions about macroeconomics".
I think (b) is the main mistake you're making in the quantum physics case. Eliezer isn't claiming "I'm better at quantum physics than professionals". He's claiming that the specific assertion "reifying quantum amplitudes (in the absence of evidence against collapse/agnosticism/nonrealism) violates Ockham's Razor because it adds 'stuff' to the universe" is false, and that a lot of quantum physicists have misunderstood this because their training is in quantum physics, not in algorithmic information theory or formal epistemology.
I think (a) is the main mistake you're making in the economics case. Eliezer is basically claiming to understand macroeconomics better than key decisionmakers at the Bank of Japan, but based on the results, I think he was just correct about that. As far as I can tell, Eliezer is just really good at economic reasoning...
...Eliezer's econ case is based on reading Scott Sumner's blog, so it's not very informative that Sumner praises Eliezer (3 out of 4 endorsements you linked, the remaining one is anon).
bfinn was discounting Eliezer for being a non-economist, rather than discounting Sumner for being insufficiently mainstream; and bfinn was skeptical in particular that Eliezer understood NGDP targeting well enough to criticize the Bank of Japan. So Sumner seems unusually relevant here, and I'd expect him to pick up on more errors from someone talking at length about his area of specialization.
You should also take into account that Eliezer seems to have been right, as an “amateur” AI researcher, about AI alignment being a big deal.
A sizable shift has occurred because of him, which is different than your interpretation of my position. If you’re convincing Stuart Russell, who is convincing Turing award winners like Yoshua Bengio and Judea Pearl, then there was something that wasn’t considered.
I am somewhat surprised that Free Will, which was assigned as the first exercise in reductionism, is not up there instead of MWI or P-Z; even if conclusions in those areas are just as clear, they are further up the inferential ladder (unless that in itself is part of the "test" - I'm not sure why it would be).
Can I humbly suggest that a tool along the lines of the one proposed here:
http://lesswrong.com/lw/2rw/proposal_for_a_structured_agreement_tool/
might be useful for the purpose?
I happen to know a few religious guys who have made the Many-Worlds God Argument: since all possible worlds exist, there must be some world in which God exists; and since God is omnipotent, he rules our world too.
Followup to: Contrarian Status Catch-22
Suppose you know someone believes that the World Trade Center was rigged with explosives on 9/11. What else can you infer about them? Are they more or less likely than average to believe in homeopathy?
I couldn't cite an experiment to verify it, but it seems likely that:
All sorts of obvious disclaimers can be included here. Someone who expresses an extreme-left contrarian view is less likely to have an extreme-right contrarian view. Different character traits may contribute to expressing contrarian views that are counterintuitive vs. low-prestige vs. anti-establishment etcetera. Nonetheless, it seems likely that you could usefully distinguish a c-factor, a general contrarian factor, in people and beliefs, even though it would break down further on closer examination; there would be a cluster of contrarian people and a cluster of contrarian beliefs, whatever the subclusters within.
(If you perform a statistical analysis of contrarian ideas and you find that they form distinct subclusters of ideologies that don't correlate with each other, then I'm wrong and no c-factor exists.)
Now, suppose that someone advocates the many-worlds interpretation of quantum mechanics. What else can you infer about them?
Well, one possible reason for believing in the many-worlds interpretation is that, as a general rule of cognitive conduct, you investigated the issue and thought about it carefully; and you learned enough quantum mechanics and probability theory to understand why the no-worldeaters advocates call their theory the strictly simpler one; and you're reflective enough to understand how a deeper theory can undermine your brain's intuition of an apparently single world; and you listen to the physicists who mock many-worlds and correctly assess that these physicists are not to be trusted. Then you believe in many-worlds out of general causes that would operate in other cases - you probably have a high correct contrarian factor - and we can infer that you're more likely to be an atheist.
It's also possible that you thought many-worlds means "all the worlds I can imagine exist" and that you decided it'd be cool if there existed a world where Jesus is Batman, therefore many-worlds is true no matter what the average physicist says. In this case you're just believing for general contrarian reasons, and you're probably more likely to believe in homeopathy as well.
A lot of what we do around here can be thought of as distinguishing the correct contrarian cluster within the contrarian cluster. In fact, when you judge someone's rationality by opinions they post on the Internet - rather than observing their day-to-day decisions or life outcomes - what you're trying to judge is almost entirely cc-factor.
It seems indubitable that, measured in raw bytes, most of the world's correct knowledge is not contrarian correct knowledge, and most of the things that the majority believes (e.g. 2 + 2 = 4) are correct. You might therefore wonder whether it's really important to try to distinguish the Correct Contrarian Cluster in the first place - why not just stick to majoritarianism? The Correct Contrarian Cluster is just the place where the borders of knowledge are currently expanding - not just that, but merely the sections on the border where battles are taking place. Why not just be content with the beauty of settled science? Perhaps we're just trying to signal to our fellow nonconformists, rather than really being concerned with truth, says the little copy of Robin Hanson in my head.
My primary personality, however, responds as follows:
In other words, even though you would in theory expect the Correct Contrarian Cluster to be a small fringe of the expansion of knowledge, of concern only to the leading scientists in the field, the actual fact of the matter is that the world is *#$%ing nuts and so there's really important stuff in the Correct Contrarian Cluster. Dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup. Not to mention that most people still believe in God. People are crazy, the world is mad. So, yes, if you don't want to bloat up like a balloon and die, distinguishing the Correct Contrarian Cluster is important.
Robin previously posted (and I commented) on the notion of trying to distinguish correct contrarians by "outside indicators" - as I would put it, trying to distinguish correct contrarians, not by analyzing the details of their arguments, but by zooming way out and seeing what sort of general excuse they give for disagreeing with the establishment. As I said in the comments, I am generally pessimistic about the chances of success for this project. Though, as I also commented, there are some general structures that make me sit up and take note; probably the strongest is "These people have ignored their own carefully gathered experimental evidence for decades in favor of stuff that sounds more intuitive." (Robyn Dawes/psychoanalysis, Robin Hanson/medical spending, Gary Taubes/dietary science, Eric Falkenstein/risk-return - note that I don't say anything like this about AI, so this is not a plea I have use for myself!) Mostly, I tend to rely on analyzing the actual arguments; meta should be spice, not meat.
However, failing analysis of actual arguments, another method would be to try and distinguish the Correct Contrarian Cluster by plain old-fashioned... clustering. In a sense, we do this in an ad-hoc way any time we trust someone who seems like a smart contrarian. But it would be possible to do it more formally - write down a big list of contrarian views (some of which we are genuinely uncertain about), poll ten thousand members of the intelligentsia, and look at the clusters. And within the Contrarian Cluster, we find a subcluster where...
...well, how do we look for the Correct Contrarian subcluster?
One obvious way is to start with some things that are slam-dunks, and use them as anchors. Very few things qualify as slam-dunks. Cryonics doesn't rise to that level, since it involves social guesses and values, not just physicalism. I can think of only three slam-dunks off the top of my head:

Atheism: Yes.
Many-worlds: Yes.
P-zombies: No.
These aren't necessarily simple or easy for contrarians to work through, but the correctness seems as reliable as it gets.
Of course there are also slam-dunks like:
But these probably aren't the right kind of controversy to fine-tune the location of the Correct Contrarian Cluster.
A major problem with the three slam-dunks I listed is that they all seem to have more in common with each other than any of them have with, say, dietary science. This is probably because of the logical, formal character which makes them slam dunks in the first place. By expanding the field somewhat, it would be possible to include slightly less slammed dunks, like:
But if we start expanding the list of anchors like this, we run into a much higher probability that one of our anchors is wrong.
So we conduct this massive poll, and we find out that if someone is an atheist and believes in many-worlds and does not believe in p-zombies, they are much more likely than the average contrarian to think that low-energy nuclear reactions (the modern name for cold fusion research) are real. (That is, among "average contrarians" who have opinions on both p-zombies and LENR in the first place!) If I saw this result I would indeed sit up and say, "Maybe I should look into that LENR stuff more deeply." I've never heard of any surveys like this actually being done, but it sounds like quite an interesting dataset to have, if it could be obtained.
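Mechanically, the test described above is just a conditional frequency over a table of poll answers. A minimal sketch, with random placeholder data standing in for the survey nobody has run:

```python
import numpy as np

# Placeholder data: one row per contrarian respondent, boolean answer per claim.
rng = np.random.default_rng(0)
n = 10_000
atheist = rng.random(n) < 0.6
many_worlds = rng.random(n) < 0.3
no_zombies = rng.random(n) < 0.5
lenr_real = rng.random(n) < 0.1

anchors = atheist & many_worlds & no_zombies
print(f"P(LENR) among all respondents:      {lenr_real.mean():.3f}")
print(f"P(LENR) among anchor-cluster folk:  {lenr_real[anchors].mean():.3f}")
# With real data, a large gap between these two numbers is the interesting signal.
```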
There are much more clever things you could do with the dataset. If someone believes most things that atheistic many-worlder zombie-skeptics believe, but isn't a many-worlder, you probably want to know their opinion on infrequently considered topics. (The first thing I'd probably try would be SVD to see if it isolates a "correctness factor", since it's simple and worked famously well on the Netflix dataset.)
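A sketch of what that first attempt might look like, treating the poll as a respondents-by-questions matrix (placeholder data again; whether the top factor really tracks correctness is exactly what the anchor questions would have to confirm):

```python
import numpy as np

# Placeholder poll: 10,000 respondents x 50 contrarian claims, coded -1/+1.
rng = np.random.default_rng(1)
answers = np.where(rng.random((10_000, 50)) < 0.5, -1.0, 1.0)
centered = answers - answers.mean(axis=0)  # remove each question's average

U, S, Vt = np.linalg.svd(centered, full_matrices=False)
respondent_scores = U[:, 0] * S[0]  # each respondent's position on the top factor
question_loadings = Vt[0]           # how each claim aligns with that factor
print(S[:5] / S.sum())              # share of structure in the leading factors
```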
But there are also simpler things we could do using the same principle. Let's say we want to know whether the economy will recover, double-dip or crash. So we call up a thousand economists, ask each one "Do you have a strong opinion on whether the many-worlds interpretation is correct?", and see if the economists who have a strong opinion and answer "Yes" have a different average opinion from the average economist and from economists who say "No".
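Computationally this one is even simpler - a group comparison on the same kind of table. A sketch with hypothetical fields:

```python
import numpy as np

# Hypothetical poll of 1,000 economists.
rng = np.random.default_rng(2)
strong_mwi_opinion = rng.random(1000) < 0.2  # answered "Yes" to the QM question
recovery_forecast = rng.random(1000)         # placeholder P(recovery) estimates

print("average economist:     ", recovery_forecast.mean())
print("has strong MWI opinion:", recovery_forecast[strong_mwi_opinion].mean())
print("no strong opinion:     ", recovery_forecast[~strong_mwi_opinion].mean())
```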
We might not have this data in hand, but it's the algorithm you're approximating when you notice that a lot of smart-seeming people assign much higher than average probabilities to cryonics technology working.