[Link] "It'll never work": a collection of failed predictions
http://www.lhup.edu/~dsimanek/neverwrk.htm
(cross-posted from Hacker News)
Is Atheism a failure to distinguish Near and Far?
The terms Near and Far are to be taken in the context of Robin Hanson's Near/Far articles.
I was reading a fairly convincing article, linked from a comment here, about how theistic beliefs are so scantly supported, when not outright contradictory, that it's doubtful whether anyone truly holds them at all. Of course there is a whole battery of explanations around the self-deception, signalling and belief-in-belief cluster, but the question that got into my head was about the kinds of people who can or cannot profess to hold these beliefs.
A common thread in many a 'deconversion' story is that some inconsistency in a person's worldview comes to their attention, and they can't let go until they have undone the whole fabric of their belief system. But given that most people are happy living productive lives while simultaneously nominally carrying around massively conflicted worldviews, what is it that makes certain individuals not capable of this fairly common human feat?
So the hypothesis I'm considering is that the people who came to atheism this way are those who demand detailed consistency of their Far ideals. Alternatively, they could be those for whom what is normally considered Far is actually Near; in other words, those with an unusually high Buxton Index. Combining the two, perhaps for people with a high Buxton Index, Far simply evaporates, as it comes under the scope of things relevant to a person's planning. (Edsger W. Dijkstra, when introducing the Buxton Index, says that "true Christians" have a Buxton Index of infinity. I think that couldn't be more wrong. Perhaps it is the case for singularitarians, though.)
The obvious reason to be suspicious of this idea is that it's very flattering for those who fall in this category, which includes myself. Instead of dithering about it privately, I'd rather expose it to the community and see whether it has legs in the eyes of others.
Many of us *are* hit with a baseball once a month.
Watching the video of Eliezer's Singularity Summit 2010 talk, I thought once more about the 'baseball' argument. Here's a text version from How to Seem (and Be) Deep:
[...] given human nature, if people got hit on the head by a baseball bat every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing.
And then it dawned on me. Roughly half of humankind, women, are afflicted with a painful experience about once a month for a large part of their lives.
So, if the hypothesis were correct, we would expect deep-sounding memes about why this is a good thing to be floating around. Not one to disappoint, the internet has indeed produced at least two such lists, linked here for your reading pleasure. However, neither list claims that the benefits outweigh the costs, nor do they make any deep-sounding arguments about why this is in fact a good thing overall. Whether or not they are supported by the evidence, the benefits mentioned are relatively practical. What's more, you don't hear these memes going around a lot (as far as I know, which, admittedly, is not very far).
So why aren't these memes philosophised about? Perhaps the ick factor? Maybe the fact that the other half of the population goes around living perfectly normal lives without any obvious drawbacks acts as a sanity test?
In any case, since this is a counter-argument that may eventually get raised, and since I didn't want to suppress it in favour of a soldier fighting on our side, I thought I'd type this up and feed it to the LessWrong hivemind for better or worse.
London Meetup on 2011/1/2
On Sunday, January 2nd 2011 there will be a meetup in the London area. As with previous meetups, the venue is Shakespeare's Head. The meeting will start at 14:00.
In order to keep us organised for 2011, I'm putting together a mailing list for LWers around the London area. If you'd like to be added to the list, please send me your e-mail address via private message.
An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem
Common Sense Atheism has recently had a string of fantastic introductory LessWrong-related material: first easing its audience into the singularity, then summarising the sequences, yesterday affirming that Death is a Problem to be Solved, and finally today presenting An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem.
From the article:
Eliezer’s explanation of this hugely important law of probability is probably the best one on the internet, but I fear it may still be too fast-moving for those who haven’t needed to do even algebra since high school. Eliezer calls it “excruciatingly gentle,” but he must be measuring “gentle” on a scale for people who were reading Feynman at age 9 and doing calculus at age 13 like him.
So, I decided to write an even gentler introduction to Bayes’ Theorem. One that is gentle for normal people.
It may be interesting if you want to do a review of Bayes' Theorem from a different perspective, or offer some introductory material for others. From a wider viewpoint, it's great to see a popular blog joining our cause for raising the sanity waterline.
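For anyone wanting a quick refresher before diving into either explanation, the theorem itself fits in a few lines of code. The sketch below uses a made-up diagnostic-test scenario; the specific numbers are my own illustration, not taken from either article:

```python
def bayes(prior, likelihood, false_positive_rate):
    """Return P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    # P(E) via the law of total probability
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Suppose 1% of a population has some condition, the test detects it
# 80% of the time, and false-alarms on 9.6% of healthy people.
posterior = bayes(prior=0.01, likelihood=0.8, false_positive_rate=0.096)
print(round(posterior, 3))  # roughly 0.078: a positive result is far from conclusive
```

The counterintuitive smallness of the posterior is exactly the kind of result both intuitive explanations are built around.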
Reading Level of Less Wrong
Here's something to pick our collective spirits up:
According to Google's infallible algorithms, 20% of the content on LessWrong.com falls within the 'Advanced' reading level. For comparison, another well-known bastion of intelligence on the internets, Hacker News, only has 4% of its content in that category.
Strangely, inserting a space before the name of the site in the query tends to reduce the amount of content that falls in the highest bucket, but I am told that highly trained Google engineers are interrogating the bug in a dimly lit room as we speak, and expect it to crack soon.
Calling LW Londoners
It seems we haven't done any London meetups in a while. Is anyone up for arranging something within January?
Fine-Tuned Mind Projection
The Fine-Tuning Argument (henceforth FTA) is the pet argument of many a religious apologist, allowing them as it does to build support for their theistic thesis on the findings of cosmology. The basic premise is this: the laws of nature appear to contain constants that, if changed slightly, would yield universes inhospitable to life. Even though a lot can be said about this premise, let's assume it is true for the purposes of this article.
Luke Muehlhauser over at Common Sense Atheism recently wrote an article pointing out what I think is a central flaw of the FTA. To summarise, he notes that there are multitudes of propositions that are true of this universe and would not be true in a different universe. For instance, galaxies, or, Luke's tongue-in-cheek example: iPads. If you accept that the universe is fine-tuned for life, you also have to accept that it's fine-tuned for galaxies, and iPads, given that some changes in the fine-tuned constants would not produce galaxies, and certainly not iPads.
So the question posed to defenders of the FTA is 'why life?' Why focus on this particular fact? What is it that sets life apart from all the other propositions true of our universe but not of other possible universes? The usual answer is that life stands out, being valuable in ways that galaxies, iPads, and all the other true propositions are not. It seems that this is an unstated premise of the FTA. But where does that premise come from? Physics gives us no instrument to measure value, so how did this concept get into what was supposed to be a cosmology-based argument?
I present the FTA here as an argument that, while seemingly complex, simply evaporates in light of the Mind Projection Fallacy. Knowing that humans tend to confuse 'I see X as valuable' with 'X is valuable', the provenance of the hidden premise 'life is valuable' is laid bare, as is the identity of the agent doing the valuing: us. With the mystery solved, explaining why humans find life valuable does not require us to go to the extreme lengths of introducing a non-naturalistic cause for the universe.
Without any support for life being special in some way, the FTA devolves into a straightforward case of the Texas Sharpshooter Fallacy: there exists life, our god would have wanted to create life, therefore our god is real! Not quite as compelling.
Startups
There seems to be a non-negligible deal of overlap between this community and Hacker News, both in terms of material and members. For those not aware of HN, it's a news aggregator for people interested in startups, technology, and other intellectually interesting topics, with a reputation for high-quality material and discourse.
While rationality and LessWrong get their fair share of attention over at HN, I haven't heard much discussion about startups over here. Offline, I've heard the claim that, in terms of contribution to existential risk prevention charities, startups are suboptimal compared to jobs in finance, but not much else. I find this odd, as many of the contributors on this site seem to be prime founder material, and rationality should really be of use when working in a high-stakes, ever-changing environment.
My intention with this post is simply to kickstart a discussion around startups and gauge the attitudes of fellow LessWrongers. Does anyone (else) aspire to becoming a startup founder in the next few years? Do you believe startup founding to be a viable means of contributing to existential risk prevention groups?
Rationality is Not an Attractive Tribe
Summary: I wonder how attractive rationality as a tribe and worldview is to the average person, when the competition is not constrained by verifiability or consistency and is therefore able to optimize around offering imaginary status superstimuli to its adherents.
Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'
— Isaac Asimov
I've long been puzzled by the capability of people to reject obvious conclusions and opt for convoluted arguments that boil down to logical fallacies when defending a belief they have a stake in. When someone resists doing the math, despite an obvious capability to do so in other similar cases, we are right to suspect external factors at play. A framework that seems congruent with the evolutionary history of our species is that of beliefs as signals of loyalty to a tribe. Such a framework would explain the rejection of evolution and other scientific theories by large swathes of the world's population, especially religious populations, despite access to a flood of evidence in support.
I will leave support of the tribal signalling framework to others, and examine the consequences for popular support of rationality and science if such a framework does successfully approximate reality. The best way I can do that is by examining one popular alternative: the Christian religion, which I am most familiar with, in particular its evangelical Protestant branch. I am fairly confident that this narrative can be ported to other branches of Christianity and Abrahamic faiths fairly easily, and that equivalents for other large religions can be constructed with some extra effort.
"Blessed are the meek, for they will inherit the earth"
— The Bible (New International Version), Matthew 5:5
What is the narrative that an evangelical Christian buys into regarding their own status? They belong to the 'chosen people', worshipping a god that loves them personally, created them with special care, and has a plan for their individual lives. They are taking part in a battle with absolute evil, which represents everything disgusting and despicable and is manifested in the various difficulties they face in their lives. The end-game, however, is known. The believers, once their faith is tested in this world, are destined for an eternity of bliss with their brethren in the presence of their god, while their enemies will be thrown into the eternal fire for eternal torment. In this narrative, the disadvantaged in this life are very important. There exist propositions which can be held with absolute certainty. This presents a black-and-white divide in which moral judgements are easy, as long as corner cases can be swept under the rug. Each and every person, regardless of their social standing or capability, can be of utmost importance. Everyone can potentially save a soul for all eternity! In fact, the gospels place emphasis on the humble and the meek:
So those who are last now will be first then, and those who are first will be last.
— The Bible (New International Version), Matthew 20:16
What is the rational alternative to this battle-hardened, well-optimized worldview? That there is no grand narrative. If such a narrative exists (pursuit of truth, combating existential risk, <insert yours here>), the stars of that narrative are those blessed with enough intelligence and education to digest the background material and pursue it on the cutting edge. It turns out your enemies are not innately evil, either; you may have just misunderstood each other. You have to constantly struggle against your own biases, to no certain outcome. In fact, we are to hold no proposition with 100% certainty. On the plus side, science and rationality offer, or at least aspire to offer, a consistent worldview free from cognitive dissonance for those who can detect the alternative's contradictions. On the status side, for those of high intelligence, it puts them at the top of the hierarchy, standing in the line of the great heroes of thought who came before and uncovered all the knowledge we have so far. But this is not hard to perceive as elitism, especially since the barriers to entry are difficult, if not impossible, to overcome for the vast majority of humans. Rationality may have an edge if it can be shown to improve an individual's life prospects, but I am not aware of such research, especially any that untangles rationality from intelligence. Perhaps the most successful example, pick-up artists, is off-limits for this community because their terminal values are deemed offensive. While we define rationality as the way to win, the win we focus on in this community is a collective one, and therefore unlikely to confer high status on an individual in the meantime if that individual does not belong to the intellectually gifted few.
So what does rationality have to offer the common man to gain their support? The role of hard-working donor, whose contribution is a replaceable commodity, e.g. money? The role of passive consumer of scientific products and documentaries? It seems to me that in the marketplace of worldview-tribes, rationality and science do not present themselves as an attractive option for large swathes of the earth's population. And why would they? They were never developed as such. To make things worse, the alternatives have had millennia of cultural evolution to better fit their targets, unconstrained by mundane burdens such as verifiability and consistency. I can easily see the attraction of the 'rational irrationality' point of view, where someone compartmentalises rationality into result-sensitive 'get things done' areas, while choosing to affirm unverifiable and/or incoherent propositions that nevertheless superstimulate their feel-good status receptors.
I see two routes here. The first is that we decide popular support is not necessary: we focus our recruiting efforts on the upper strata of intelligence and influence. If it's a narrative they need, we can't help them; we're in the business of providing raw truth. Humans are barely on the brink of general intelligence, anyway. A recent post claimed that an IQ of 130+ was practically a prerequisite for appreciating and comprehending the sequences. The truths are hard to grasp and inconvenient, but ultimately it doesn't matter whether a narrative can be developed for the common person. They can keep believing in creationism, and we'll save humanity for them anyway.
On the other hand, just because the scientific/rational worldview has not been fitted to the common man, it doesn't mean it can't be done (though there is no guarantee that it can). The alternative is to explore the open avenues that may lead to a more palatable narrative, including popularising many of the rationality themes articulated in this community. People show interest when I speak to them about cognitive biases, but I have no accessible resources to give them that would start from that beachhead and progress into other, more esoteric topics. And I don't find it incredible that rationality could provably aid in better individual outcomes; we just need solid research around the proposition. (The effects of various valleys of bad rationality, or of shifts in terminal values due to rationality exposure, may complicate that.)
I am not taking a position on which course of action is superior, or claiming that these are the only alternatives. But it does seem to me that, if my reasoning and assumptions are correct, we have to make up our minds about what exactly it is we want to do as the Less Wrong community.
Edit/Note: I initially planned for this to be posted as a normal article, but seeing how the voting is... equal in both directions, but that there is a lively discussion developing, I think this article is just fine in the discussion section.