Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.
David Stove once ran a contest to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process.
If he can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this: "X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member."
Call it the Noncentral Fallacy. It sounds dumb when you put it like that. Who even does that, anyway?
It sounds dumb only because we are talking soberly of categories and features. As soon as the argument gets framed in terms of words, it becomes so powerful that somewhere between many and most of the bad arguments in politics, philosophy and culture take some form of the noncentral fallacy. Before we get to those, let's look at a simpler example.
Suppose someone wants to build a statue honoring Martin Luther King Jr. for his nonviolent resistance to racism. An opponent of the statue objects: "But Martin Luther King was a criminal!"
Any historian can confirm this is correct. A criminal is technically someone who breaks the law, and King knowingly broke a law against peaceful anti-segregation protest - hence his famous Letter from Birmingham Jail.
But in this case calling Martin Luther King a criminal is the noncentral fallacy. The archetypal criminal is a mugger or bank robber. He is driven only by greed, preys on the innocent, and weakens the fabric of society. Since we don't like these things, calling someone a "criminal" naturally lowers our opinion of them.
The opponent is saying "Because you don't like criminals, and Martin Luther King is a criminal, you should stop liking Martin Luther King." But King doesn't share the important criminal features of being driven by greed, preying on the innocent, or weakening the fabric of society that made us dislike criminals in the first place. Therefore, even though he is a criminal, there is no reason to dislike King.
This all seems so nice and logical when it's presented in this format. Unfortunately, it's also one hundred percent contrary to instinct: the urge is to respond "Martin Luther King? A criminal? No he wasn't! You take that back!" This is why the noncentral fallacy is so successful: as soon as you respond that way, you've fallen into their trap. Your argument is no longer about whether you should build a statue; it's about whether King was a criminal. Since he was, you have now lost the argument.
Ideally, you should just be able to say "Well, King was the good kind of criminal." But that seems pretty tough as a debating maneuver, and it may be even harder in some of the cases where the noncentral fallacy is commonly used.
One of my favorite genres in the prestige press is the Self-Refuting Article. These are articles that contain all the facts necessary to undermine the premise of the piece, but reporters, editors, and readers all conspire together in an act of collective stupidity to Not Get the Joke.
-Steve Sailer, here
But why do people conspire in an act of collective stupidity to Not Get The Joke? Before I proceed, let me first admit that it is hard to identify such situations with any reasonable level of certainty: when "Not Getting the Joke" is good for our side, it is subjectively harder to spot (see mind-killer). I'm not quite sure what to make of this, but it seems to me that three obvious things are going on here:
- The author and the audience share many of the same biases; maybe they really don't get the joke.
- The bandwagon effect: even if the overlap in biases isn't that great, people are lazy thinkers and prefer a given answer.
- The right answer happens to be enemy attire and might provoke accusations or suspicion from others in the in-group.
Now, perhaps not so obviously: could it be that people have some incentive to say, and even believe (or at least try to believe), things that are obviously wrong even to people of their own tribe (political, religious, etc. affiliation)? Why would something like this arise? My mind at this point wandered to conspicuous consumption.
Conspicuous consumption is lavish spending on goods and services acquired mainly for the purpose of displaying income or wealth.
Could there be such a thing as conspicuous wrongness?
"Look how much I identify with our group: I'm willing to buy this even if it doesn't do us much good. If I wasn't so virtuous, I could never believe something this silly."
But why would sticking to the script when it's blatantly false to others in the tribe boost your status and self-esteem? Well, sticking to it when it's blatantly obvious to most people doesn't cost you anything, now does it? And sticking to it when it's merely uncertain only costs you the esteem of the out-group (worthless in most cases).
Sticking to the script, when everyone knows the script is false, is a sign of either cleverness or innocence, and more than that it is a sign that that cleverness or innocence is perfectly aligned to the interests of the tribe.
So my question is: is there anything in the Sequences that already covers this, or was my speculation faulty?*
*(I've only just started reading the Sequences.)
I blew through all of MoR in about 48 hours, and in an attempt to learn more about the science and philosophy that Harry espouses, I've been reading the sequences and Eliezer's posts on Less Wrong. Eliezer has written extensively about AI, rationality, quantum physics, singularity research, etc. I have a question: how correct has he been? Has his interpretation of quantum physics predicted any subsequently-observed phenomena? Has his understanding of cognitive science and technology allowed him to successfully anticipate the progress of AI research, or has he made any significant advances himself? Is he on the record predicting anything, either right or wrong?
Why is this important: when I read something written by Paul Krugman, I know that he has a Nobel Prize in economics, and I know that he has the best track record of any top pundit in the US in terms of making accurate predictions. Meanwhile, I know that Thomas Friedman is an idiot. Based on this track record, I believe things written by Krugman much more than I believe things written by Friedman. But if I hadn't read Friedman's writing from 2002-2006, then I wouldn't know how terribly wrong he has been, and I would be too credulous about his claims.
Similarly, reading Mike Darwin's predictions about the future of medicine was very enlightening. He was wrong about nearly everything. So now I know to distrust claims that he makes about the pace or extent of subsequent medical research.
Has Eliezer offered anything falsifiable, or put his reputation on the line in any way? "If X and Y don't happen by Z, then I have vastly overestimated the pace of AI research, or I don't understand quantum physics as well as I think I do," etc etc.
I've become adept at navigating the bureaucracy of my public high school. I've dropped environmental science as an AP (because it was painfully slow and replete with busywork) and am now taking an "independent study" in government. I'm going to be using this mainly as a way to study environmental science at my own pace, but I also have to read and write some about standard political issues. The requirements of the independent study are pretty vague. In order to get approved, I've got to BS some reason why I should be granted an independent study. I'm obviously not going to speak plainly. I'll probably say something about my interests in seasteading, environmentalism, and education reform. What books do you recommend on the politics of these subjects, given that politics is the mind-killer? Also, the main focus is on environmentalism, not on education or seasteading. I've done a bit of research regarding seasteading, but there's not much that I know about it.
I was particularly interested in this point brought up in the seasteading book:
Let’s consider several different levels on which we could discuss politics:
· Policy. For example, a debate about whether to criminalize drug use, attempt to reduce the harm of use, or completely legalize it. What are the effects of each specific policy? Which does the most net good? Who is hurt, and who is helped?
· System. What types of policies does a specific political system tend to generate? For example, in a democracy, a special interest group can easily coordinate to influence legislation which benefits them, but costs everyone a little bit. If every consumer loses a dollar a year from a policy, it just isn't worth anyone's time to fight it. Hence we expect democracies to frequently produce policies which steal small amounts from many and give them to a few. And indeed, tariffs, farm subsidies, and bailouts, just to name a few, fit this model quite well. This type of argument is at a level of generality above any specific policy, and it can offer enormous insight into consistent errors made by current governments. But to fix those problems, we need to rise further yet.
· Meta-system. At the level we want, we think about the entire industry of government. What types of systems does it produce? How can it be changed to produce better systems (that is, systems which produce better policies)? What influences how well the governments of the world serve their citizens? How can we increase competition between governments? This level is the most abstract and the most complex, which can make it difficult to get a handle on, but if we can grasp that handle, it gives us the most leverage to change the world.
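The concentrated-benefits/diffuse-costs logic at the "system" level above can be made concrete with a quick back-of-the-envelope calculation. All the numbers here are made up for illustration; the point is only the asymmetry of stakes.

```python
# Illustrative (hypothetical) numbers for a policy that costs every
# consumer a little and pays a small protected industry a lot.
consumers = 300_000_000          # hypothetical electorate
cost_per_consumer = 1.00         # dollars per year, per consumer
producers = 1_000                # hypothetical protected industry

total_transfer = consumers * cost_per_consumer
gain_per_producer = total_transfer / producers

print(f"Total transfer:     ${total_transfer:,.0f}/yr")
print(f"Stake per consumer: ${cost_per_consumer:,.2f}/yr")
print(f"Stake per producer: ${gain_per_producer:,.0f}/yr")
# A $1/yr stake doesn't justify a consumer's time to fight the policy;
# a $300,000/yr stake easily justifies a producer's lobbying budget.
```

The asymmetry, not the particular dollar figures, is what predicts that such policies keep getting passed.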
They also recommend a reading list:
Machinery of Freedom (David Friedman)
Game Theory and the Social Contract (Ken Binmore)
Mancur Olson - stuff
Myth of the Rational Voter (Bryan Caplan)
Economics In One Lesson (Henry Hazlitt) ?
In regards to environmentalism, I was thinking about focusing on government funding for green businesses, since green entrepreneurship is of interest to me. I'd probably have to talk about the Solyndra scandal at some point.
As a side note, if the requirements aren't too stringent and I can just write about whatever I feel like so long as it vaguely relates to politics (like in my independent study in psychology), I may just go meta and write about Americans Elect.
Edit: I do think that there is a difference between descriptive politics (e.g. describing the workings of the EPA, or a standard civics class) and normative politics (woo libertarians!). I'm more interested in descriptive politics.
- Eliezer Yudkowsky was once attacked by a Moebius strip. He beat it to death with the other side, non-violently.
- Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but another brain.
- Eliezer Yudkowsky's favorite food is printouts of Rice's theorem.
- Eliezer Yudkowsky's favorite fighting technique is a roundhouse dustspeck to the face.
- Eliezer Yudkowsky once brought peace to the Middle East from inside a freight container, through a straw.
- Eliezer Yudkowsky once held up a sheet of paper and said, "A blank map does not correspond to a blank territory". It was thus that the universe was created.
- If you dial Chaitin's Omega, you get Eliezer Yudkowsky on the phone.
- Unless otherwise specified, Eliezer Yudkowsky knows everything that he isn't telling you.
- Somewhere deep in the microtubules inside an out-of-the-way neuron somewhere in the basal ganglia of Eliezer Yudkowsky's brain, there is a little XML tag that says awesome.
- Eliezer Yudkowsky is the Muhammad Ali of one-boxing.
- Eliezer Yudkowsky is a 1400-year-old avatar of the Aztec god Aixitl.
- The game of "Go" was abbreviated from "Go Home, For You Cannot Defeat Eliezer Yudkowsky".
- When Eliezer Yudkowsky gets bored, he pinches his mouth shut at the 1/3 and 2/3 points and pretends to be a General Systems Vehicle holding a conversation among itselves. On several occasions he has managed to fool bystanders.
- Eliezer Yudkowsky has a swiss army knife that has folded into it a corkscrew, a pair of scissors, an instance of AIXI which Eliezer once beat at tic tac toe, an identical swiss army knife, and Douglas Hofstadter.
- If I am ignorant about a phenomenon, that is not a fact about the phenomenon; it just means I am not Eliezer Yudkowsky.
- Eliezer Yudkowsky has no need for induction or deduction. He has perfected the undiluted master art of duction.
- There was no ice age. Eliezer Yudkowsky just persuaded the planet to sign up for cryonics.
- There is no spacetime symmetry. Eliezer Yudkowsky just sometimes holds the territory upside down, and he doesn't care.
- Eliezer Yudkowsky has no need for doctors. He has implemented a Universal Curing Machine in a system made out of five marbles, three pieces of plastic, and some of MacGyver's fingernail clippings.
- Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of Eliezer Yudkowsky.
If you know more Eliezer Yudkowsky facts, post them in the comments.
In a nondescript commercial park on the outskirts of Las Vegas, a large cryogenic stem cell storage facility is ready to accept your baby’s blood. Cord Blood America in Las Vegas is one of dozens of private cord blood banks in the United States that, for a fee, will store stem cell-rich blood taken from a newborn baby’s umbilical cord.
Over one hundred thousand families save or donate cord blood annually, in the hopes it will one day provide medical help to their child or someone else.
“My vision is within the next 10 years we’ll see organizations like this develop into cellular therapy labs,” said Dr. Geoffrey O’Neill, vice president of CorCell, the subsidiary company that runs Cord Blood America’s Las Vegas facility. It’s beginning to happen now in countries like China and Mexico, he says.
...Reality is different. Leukemia, bone marrow failure, immune deficiency, metabolic diseases and sickle cell anemia — the diseases cord blood is typically needed for — require transplants of healthy cells. The cord blood of a child with leukemia would also carry the disease.
“If you have the money, and you want to bank your child’s own cord blood, you’re essentially investing in one of two things,” said Dr. Joanne Kurtzberg, director of the Duke Pediatric Bone Marrow and Stem Cell Transplant Program. “One, the possibility that another child in your family will need that cord blood, and that it matches. Or two, that somewhere in the future there will be new developments and new uses for your child’s cord blood — say in regenerative medicine or cell therapy. But to date, none of those exist.” While a few rare diseases, such as multiple myeloma and lymphoma, use stem cells taken from a patient’s own body, the chances of a child having these are vanishingly small.
...Parents pay $2,075 for the kit, courier fees and one year’s storage. CorCell charges a $125 annual storage fee after that. Some insurance companies offer discounts. CorCell has been in business for six years and stores about 30,000 samples; Geoffrey O’Neill, the vice president, says he recalls seven or eight samples being pulled for use. A CorCell customer service representative later estimated this at 25 to 30 samples. [30,000 / <30 = <1 in 1000 = <0.1%]
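The bracketed arithmetic above can be spelled out explicitly. The figures (30,000 stored samples, at most ~30 ever used) are from the article; the calculation is just the usage-rate upper bound made explicit.

```python
# Upper-bound usage rate for privately banked cord blood, using the
# article's own figures (30,000 samples stored; <=30 ever pulled for use).
samples_stored = 30_000
samples_used_upper = 30   # the customer-service rep's upper estimate

rate = samples_used_upper / samples_stored
print(f"Upper-bound usage rate: {rate:.2%}")  # at most 0.10%, i.e. <=1 in 1,000
```

So even on the most generous estimate, fewer than one sample in a thousand has ever been used.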
Inside the Cord Blood America facility, quotes painted on the walls are the first thing to capture the visitors' attention: "The entire history of science is a progression of exploded fallacies," proclaims Ayn Rand on one wall. "Do or do not, there is no try," says Yoda on another. On another wall, Mickey Rooney offers some advice: "You always pass failure on the way to success."
Marginal Revolution linked a post at Genomes Unzipped, "Size matters, and other lessons from medical genetics", with the interesting centerpiece graph:
This is from pg. 3 of a 2001 article by Ioannidis et al. (who else?) on what is called a funnel plot: each line represents a series of studies of some particularly hot gene-disease correlation, plotted with Y = the odds ratio (a measure of effect size; all results are 'statistically significant', of course) and X = the sample size. The line at 1 is the null hypothesis. You will notice something dramatic: as we move along the X-axis and sample sizes increase, everything begins to converge on 1:
Readers familiar with the history of medical association studies will be unsurprised by what happened over the next few years: initial excitement (this same polymorphism was associated with diabetes! And longevity!) was followed by inconclusive replication studies and, ultimately, disappointment. In 2000, 8 years after the initial report, a large study involving over 5,000 cases and controls found absolutely no detectable effect of the ACE polymorphism on heart attack risk. In the meantime, the same polymorphism had turned up in dozens of other association studies for a wide range of traits ranging from obstetric cholestasis to meningococcal disease in children, virtually none of which have ever been convincingly replicated.
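The convergence-to-1 pattern falls out of selection on significance alone. Here is a minimal simulation (stdlib only; all parameters are illustrative, not from the Ioannidis paper) of case-control studies under a true null: among studies that happen to clear the significance bar, small studies report inflated odds ratios, and the inflation shrinks as the sample size grows.

```python
import math
import random

def significant_log_or(n, p_exposed=0.3, trials=2000, seed=0):
    """Mean |log odds ratio| among nominally significant studies with
    n cases and n controls, when the true odds ratio is exactly 1."""
    rng = random.Random(seed)
    kept = []
    for _ in range(trials):
        a = sum(rng.random() < p_exposed for _ in range(n))  # exposed cases
        c = sum(rng.random() < p_exposed for _ in range(n))  # exposed controls
        b, d = n - a, n - c
        # Haldane correction avoids division by zero in sparse tables
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        log_or = math.log((a * d) / (b * c))
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)
        if abs(log_or) / se > 1.96:      # nominally "significant" at p < 0.05
            kept.append(abs(log_or))
    return sum(kept) / len(kept)

small = significant_log_or(50)
large = significant_log_or(2000)
print(f"mean |log OR| of significant null studies, n=50:   {small:.2f}")
print(f"mean |log OR| of significant null studies, n=2000: {large:.2f}")
assert small > large  # published effect sizes converge toward OR = 1 as n grows
```

Roughly 5% of the null studies come out "significant" at each sample size, as expected; what changes with n is how big an effect a study must fluke into to clear the bar.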
While psychology wonks have been going on for years about the statistical rigor and calibration of the Big Five, most people have just carried on using the Myers-Briggs type indicator (MBTI), which may not be statistical or scientific but is able to categorize people without insulting them.
A serious critique of the MBTI is the Myers-Briggs entropy distribution paradox (or, "Why are there 16 personality types when everyone I know is an INTJ?"). A new personality test which has been gaining ground recently, the MLPTI, does not break the INTJ up into multiple categories, but it does reduce the number of bothersome non-INTJ personality types and thus ameliorates the entropy paradox. For those not yet familiar with it, here is a rough translation between MLPTI and MBTI types.
- TS: conscientious, introverted, self-conscious
- RD: impulsive, activity-oriented, high stimulation threshold
- AJ: pragmatic, disciplined, outcome-oriented
- FS: introverted, empathetic, anxious (ISFJ, INFJ)
- R: extroverted, creative, status-seeking
The loss of half of the MBTI categories is not a serious problem, as demonstrated by the fact that you can't even name the ones that were left out without going back and looking. Seriously, when was the last time you met an ENTP?
The application is obvious if you're computing an answer for your personal use. But how do you apply it when voting?
Political debates are tug-of-wars. Say a bill is being voted on to introduce a 7-day waiting period for handguns. You might think that you should vote on the merits of a 7-day waiting period. This isn't what we usually do. Instead, we've chosen our side on the larger issue (gun control: for or against) ahead of time; and we vote whichever way is pulling in our direction.
To use the tug-of-war analogy: There's a knot tied in the middle of the rope, and you have some line in the sand where you believe the knot should end up. But you don't stop pulling when the knot reaches that point; you keep pulling, because the other team is still pulling. So, if you're anti-gun-control, you vote against the 7-day waiting period, even if you think it would be a good idea; because passing it would move the knot back towards the other side of your line.
Tug-of-war voting makes intuitive sense if you believe that an irrational extremist is usually more politically effective than a reasonable person is. (It sounds plausible to me.) If you've watched a debate long enough to see that the "knot" does a bit of a random walk around some equilibrium that's on the other side of your line, it can make sense to vote this way.
How do you apply Aumann's theorem to tug-of-war voting?
I think the answer is that you try to identify which side has more idiots, and vote on the other side.
People who grow up believing certain things,
even if they later stop believing them,
may not quite realize how the beliefs sound to outsiders...
(SCENE: A small cottage in Nazareth.)
Joseph: Mary, my dearest fiancée, there's something I've been meaning to talk to you about.
(Mary's shoulders slump. Slowly, as if under a heavy burden, she turns around to face Joseph.)
Joseph: You seem to be getting fat around the waistline, and throwing up in the morning, and, er, not getting any periods. Which is odd, because it's sort of like -
Mary: Yes! I'm pregnant! All right? I'm PREGNANT!
Joseph: How is that possible?
(Mary's shoulders slump further.)
Mary: How do you think?
Joseph: I don't know, that's why I'm asking you. I mean, you're still a virgin, right?
(Mary looks up cautiously, and sees Joseph's face looking blankly puzzled.)
Mary: God did it.
Joseph: You had sex with -
Mary: No! Haha. Of course not. I mean, God just snapped his fingers and did one of those miracle things and made me pregnant.
Joseph: God made you pregnant.
Mary: (Starts to sweat.) Yes.
Joseph: Mary, that is just so... completely...
(Mary's eyes squeeze shut.)