How theism works
There's a reason we can all agree on theism as a good source of examples of irrationality.
Let's divide the factors that lead to memetic success into two classes: those based on correspondence to evidence, and those detached from evidence. If we imagine a two-dimensional scattergram of memes rated against these two criteria, we can define a frontier of maximum success, along which any idea can only gain on one criterion by losing on the other. This doesn't imply that evidential and non-evidential success are opposed in general; just that whatever shape memespace has, its convex hull will have an upper boundary that forms this frontier.
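The frontier described here is what optimization folk call a Pareto frontier. A minimal sketch, with entirely made-up (evidential, non-evidential) scores for a few illustrative memes, shows how such a frontier falls out of a 2-D scatter:

```python
# Illustrative sketch (scores are invented for the example, not measurements):
# each meme gets a pair of scores (evidential success, non-evidential success).
memes = {
    "germ theory":   (0.95, 0.30),
    "astrology":     (0.50, 0.70),
    "theism":        (0.05, 0.99),
    "folk medicine": (0.40, 0.50),
}

def pareto_frontier(points):
    """Keep only memes not dominated on both criteria by some other meme."""
    frontier = []
    for name, (ev, non_ev) in points.items():
        dominated = any(
            e >= ev and n >= non_ev and (e, n) != (ev, non_ev)
            for e, n in points.values()
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(memes))  # ['germ theory', 'astrology', 'theism']
```

With these toy numbers, "folk medicine" is dominated (astrology beats it on both axes), while the three frontier memes can each improve on one criterion only by giving up the other.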
Religion is what you get when you push totally for non-evidential memetic success. All ties to reality are essentially cut. As a result, all the other dials can be pushed up to 11. God is not just wise, nice, and powerful - he is all-knowing, omnibenevolent, and omnipotent. Heaven and Hell are not just pleasant and unpleasant places you can spend a long time in - they are the very best possible and the very worst possible experiences, and for all eternity. Religion doesn't just make people better; it is the sole source of morality. And so on; because all of these things happen "offstage", there's no contradictory evidence when you turn the dials up, so of course they'll end up on the highest settings.
This freedom is theism's defining characteristic. Even the most stupid pseudoscience is to some extent about "evidence": people wouldn't believe in it if they didn't think they had evidence for it, though we now understand the cognitive biases and other effects that lead them to think so. That's why there are no homeopathic cures for amputation.
I agree with other commentators that the drug war is the other real world idea that I would attack here without fear of contradiction, but I would still say that drug prohibition is a model of sanity compared to theism. Theism really is the maddest thing you can believe without being considered mad.
Footnote: This was originally a comment on The uniquely awful example of theism, but I was encouraged to make a top-level post from it. I should point out that there are issues with my dividing line between "evidence-based" and "not evidence-based", since you could argue that mathematics is not evidence-based and nor is the belief that evidence is a good way to learn about the world; however, it should be clear that neither of these has the freedom that religion has to make up whatever will make people most likely to spread the word.
The uniquely awful example of theism
When an LW contributor is in need of an example of something that (1) is plainly, uncontroversially (here on LW, at least) very wrong but (2) an otherwise reasonable person might get lured into believing by dint of inadequate epistemic hygiene, there seems to be only one example that everyone reaches for: belief in God. (Of course there are different sorts of god-belief, but I don't think that makes it count as more than one example.) Eliezer is particularly fond of this trope, but he's not alone.
How odd that there should be exactly one example. How convenient that there is one at all! How strange that there isn't more than one!
In the population at large (even the smarter parts of it) god-belief is sufficiently widespread that using it as a canonical example of irrationality would run the risk of annoying enough of your audience to be counterproductive. Not here, apparently. Perhaps we-here-on-LW are just better reasoners than everyone else ... but then, again, isn't it strange that there aren't a bunch of other popular beliefs that we've all seen through? In the realm of politics or economics, for instance, surely there ought to be some.
Also: it doesn't seem to me that I'm that much better a thinker than I was a few years ago when (alas) I was a theist; nor does it seem to me that everyone on LW is substantially better at thinking than I am; which makes it hard for me to believe that there's a certain level of rationality that almost everyone here has attained, and that makes theism vanishingly rare.
I offer the following uncomfortable conjecture: We all want to find (and advertise) things that our superior rationality has freed us from, or kept us free from. (Because the idea that Rationality Just Isn't That Great is disagreeable when one has invested time and/or effort and/or identity in rationality, and because we want to look impressive.) We observe our own atheism, and that everyone else here seems to be an atheist too, and not unnaturally we conclude that we've found such a thing. But in fact (I conjecture) LW is so full of atheists not only because atheism is more rational than theism (note for the avoidance of doubt: yes, I agree that atheism is more rational than theism, at least for people in our epistemic situation) but also because
Beware of Other-Optimizing
Previously in series: Mandatory Secret Identities
I've noticed a serious problem in which aspiring rationalists vastly overestimate their ability to optimize other people's lives. And I think I have some idea of how the problem arises.
You read nineteen different webpages advising you about personal improvement—productivity, dieting, saving money. And the writers all sound bright and enthusiastic about Their Method, they tell tales of how it worked for them and promise amazing results...
But most of the advice rings so false as to not even seem worth considering. So you sigh, mournfully pondering the wild, childish enthusiasm that people can seem to work up for just about anything, no matter how silly. Pieces of advice #4 and #15 sound interesting, and you try them, but... they don't... quite... well, it fails miserably. The advice was wrong, or you couldn't do it, and either way you're not any better off.
And then you read the twentieth piece of advice—or better yet, you discover a twentieth method that wasn't in any of the pages—and STARS ABOVE IT ACTUALLY WORKS THIS TIME.
At long, long last you have discovered the real way, the right way, the way that actually works. And when someone else gets into the sort of trouble you used to have—well, this time you know how to help them. You can save them all the trouble of reading through nineteen useless pieces of advice and skip directly to the correct answer. As an aspiring rationalist you've already learned that most people don't listen, and you usually don't bother—but this person is a friend, someone you know, someone you trust and respect to listen.
And so you put a comradely hand on their shoulder, look them straight in the eyes, and tell them how to do it.
Building Communities vs. Being Rational
I've noticed a distinct trend lately in that I've been commenting less and less on posts as time goes by. I've been wondering if it's just that the new car smell of LessWrong has been wearing off, or if it is something else.
Well, I think I've identified it. I just don't care for discussions about how to go about building communities. It may, in the long run, be beneficial to work out how to build communities of rationalists, but in the meantime I find these discussions are making this less and less a community I want to be a part of, and (if I am not unique) they may be having the opposite of their intended effect.
Don't get me wrong. I am not saying these discussions are unimportant or are not germane to the building of this site. I am saying that if a new person comes here and reads the most recent posts, are they going to want to stay? For myself, I find I am willing to be part of a community of enthusiastic rationalists (which is why I started reading this blog in the first place), but I have NO interest in being part of a community that spends all its time debating how to build the community.
Lately, to me, this place has seemed more of the latter and less of the former.
The Tragedy of the Anticommons
I assume that most of you are familiar with the concept of the Tragedy of the Commons. If you aren't, well, that was a Wikipedia link right there.
However, fewer are familiar with the Tragedy of the Anticommons, a term coined by Michael Heller. Where the Tragedy of the Commons is created by too little ownership, the Tragedy of the Anticommons is created by too much.
For instance, the classical solution to the TotC is to divide up the commons between the herders using it, giving each of them ownership of a particular part. This gives each owner an incentive to enforce its sustainability. But what would happen if the commons were divided up into thousands of miniature pieces, say one square inch each? In order to herd your cattle, you'd have to acquire permission from hundreds of different owners. Not only would this be a massive undertaking in itself; any one of them could say no, potentially ruining your entire attempt.
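A quick back-of-the-envelope calculation (mine, not Heller's, with invented numbers) shows why unanimous consent collapses so fast: if each of n owners independently agrees with probability p, the whole deal closes with probability p^n.

```python
# Hypothetical numbers: each owner says yes with probability 0.99,
# and every single owner must agree for the project to proceed.
def deal_closes(p, n):
    """Probability that all n independent owners grant permission."""
    return p ** n

for n in (1, 10, 100, 1000):
    print(n, round(deal_closes(0.99, n), 3))
# 1 0.99
# 10 0.904
# 100 0.366
# 1000 0.0
```

Even with owners who are 99% likely to cooperate, a hundred of them leaves you with barely a one-in-three chance, and a thousand makes the project essentially impossible.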
This isn't just a theoretical issue. In his book, Heller offers numerous examples, such as this one:
...gridlock prevents a promising treatment for Alzheimer's diseases being tested. The head of research at a "Big Pharma" drugmaker told me that his lab scientists developed the potential cure (call it Compound X) years ago, but biotech competitors blocked its development. ... the company developing Compound X needed to pay every owner of a patent relevant to its testing. Ignoring even one would invite an expensive and crippling lawsuit. Each patent holder viewed its own discovery as the crucial one and demanded a corresponding fee, until the demands exceeded the drug's expected profits. None of the patent owners would yield first. ...
This story does not have a happy ending. No valiant patent bundler came along. Because the head of research could not figure out how to pay off all the patent owners and still have a good chance of earning a profit, he shifted his priorities to less ambitious options. Funding went to spin-offs of existing drugs for which his firm already controlled the underlying patents. His lab reluctantly shelved Compound X even though he was certain the science was solid, the market huge, and the potential for easing human suffering beyond measure.
Information cascades
An information cascade is a problem in group rationality. Wikipedia has excellent introductions and links about the phenomenon, but here is a meta-ish example using likelihood ratios.
Suppose in some future version of this site, there are several well-known facts:
- All posts come in two kinds, high quality (insightful and relevant) and low quality (old ideas rehashed, long hypotheticals).
- There is a well-known prior 60% chance of anything being high quality, rather than low quality. (We're doing well!)
- Readers get a private signal, either "high" or "low", their personal judgement of quality, which is wrong 20% of the time.
- The number of up and down votes is displayed next to each post. (Note the difference from the present system, which only displays up minus down. This hypothesis makes the math easier.)
- Readers are competent in Bayesian statistics and strive to vote the true quality of the post.
Let's talk about how the very first reader would vote. If they judged the post high quality, they would multiply the prior odds (6:4) by the Bayes factor for a high private signal (4:1), get (6*4 : 4*1) = (6:1), and vote the post up. If they judged the post low quality, they would instead multiply by the Bayes factor for a low private signal (1:4), get (6*1 : 4*4) = (3:8), and vote the post down.
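The first reader's update can be sketched in a few lines (a minimal illustration of the post's numbers; the function names are mine):

```python
from fractions import Fraction

# Prior odds of high:low quality are 6:4. A private signal is wrong 20%
# of the time, so its Bayes factor is 0.8/0.2 = 4:1 for a "high" signal
# and 0.2/0.8 = 1:4 for a "low" signal.
PRIOR = Fraction(6, 4)

def posterior_odds(prior, signals):
    """Multiply prior odds by the Bayes factor of each private signal."""
    odds = prior
    for s in signals:
        odds *= Fraction(4, 1) if s == "high" else Fraction(1, 4)
    return odds

def vote(odds):
    """Vote up iff the post is more likely high quality than low."""
    return "up" if odds > 1 else "down"

print(posterior_odds(PRIOR, ["high"]))  # 6    (odds 6:1, vote up)
print(posterior_odds(PRIOR, ["low"]))   # 3/8  (odds 3:8, vote down)
```

The cascade arises when later readers also fold the displayed vote counts into their odds: once enough public votes accumulate, that evidence swamps any single private signal, and everyone votes the same way regardless of what they privately observed.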
The ethic of hand-washing and community epistemic practice
by Steve Rayhawk and Anna Salamon. (Joint authorship; there's currently no way to notate that in the Reddit code base.)
Related to: Use the Native Architecture
When cholera moves through countries with poor drinking water sanitation, it apparently becomes more virulent. When it moves through countries that have clean drinking water (more exactly, countries that reliably keep fecal matter out of the drinking water), it becomes less virulent. The theory is that cholera faces a tradeoff between rapidly copying within its human host (so that it has more copies to spread) and keeping its host well enough to wander around infecting others. If person-to-person transmission is cholera’s only means of spreading, it will evolve to keep its host well enough to spread it. If it can instead spread through the drinking water (and thus spread even from hosts who are too ill to go out), it will evolve toward increased lethality. (Critics here.)
I’m stealing this line of thinking from my friend Jennifer Rodriguez-Mueller, but: I’m curious whether anyone’s gotten analogous results for the progress and mutation of ideas, among communities with different communication media and/or different habits for deciding which ideas to adopt and pass on. Are there differences between religions that are passed down vertically (parent to child) vs. horizontally (peer to peer), since the former do better when their bearers raise more children? Do mass media such as radio, TV, newspapers, or printing presses decrease the functionality of the average person’s ideas, by allowing ideas to spread in a manner that is less dependent on their average host’s prestige and influence? (The intuition here is that prestige and influence might be positively correlated with the functionality of the host’s ideas, at least in some domains, while the contingencies determining whether an idea spreads through mass media instruments might have less to do with functionality.)
Extending this analogy -- most of us were taught as children to wash our hands. We were given the rationale, not only of keeping ourselves from getting sick, but also of making sure we don’t infect others. There’s an ethic of sanitariness that draws from the ethic of being good community members.