Followup to: Rationality: Appreciating Cognitive Algorithms (minor post)
There's an old anecdote about Ayn Rand, which Michael Shermer recounts in his "The Unlikeliest Cult in History" (note: calling a fact unlikely is an insult to your prior model, not the fact itself), which went as follows:
Branden recalled an evening when a friend of Rand's remarked that he enjoyed the music of Richard Strauss. "When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, 'Now I understand why he and I can never be real soulmates. The distance in our sense of life is too great.' Often she did not wait until a friend had left to make such remarks."
Many readers may already have appreciated this point, but one of the Go stones placed to block that failure mode is being careful what we bless with the great community-normative-keyword 'rational'. And one of the ways we do that is by trying to deflate the word 'rational' out of sentences, especially in post titles or critical comments, which can live without the word. As you hopefully recall from the previous post, we're only forced to use the word 'rational' when we talk about the cognitive algorithms which systematically promote goal achievement or map-territory correspondences. Otherwise the word can be deflated out of the sentence; e.g. "It's rational to believe in anthropogenic global warming" goes to "Human activities are causing global temperatures to rise"; or "It's rational to vote for Party X" deflates to "It's optimal to vote for Party X" or just "I think you should vote for Party X".
If you're writing a post comparing the experimental evidence for four different diets, that's not "Rational Dieting", that's "Optimal Dieting". A post about rational dieting would be one about how the sunk cost fallacy causes people to eat food they've already purchased even if they're not hungry, or about how the typical mind fallacy or the law of small numbers leads people to overestimate how likely it is that a diet which worked for them will work for a friend. And even then, your title is 'Dieting and the Sunk Cost Fallacy', unless it's an overview of four different cognitive biases affecting dieting, in which case a better title would be 'Four Biases Screwing Up Your Diet', since 'Rational Dieting' carries an implication that your post discusses the cognitive algorithm for dieting, as opposed to four contributing factors to keep in mind.
By the same token, a post about GiveWell's top charities and how they compare to existential-risk mitigation is a post about optimal philanthropy, while a post about scope insensitivity and hedonic returns vs. marginal returns is a post about rational philanthropy, because the first is discussing object-level outcomes while the second is discussing cognitive algorithms. And either way, if you can have a post title that doesn't include the word "rational", it's probably a good idea, because the word gets a little less powerful every time it's used.
Of course, it's still a good idea to include concrete examples when talking about general cognitive algorithms. A good writer won't discuss rational philanthropy without including some discussion of particular charities to illustrate the point. In general, the concrete-abstract writing pattern says that your opening paragraph should be a concrete example of a nonoptimal charity, and only afterward should you generalize to make the abstract point. (That's why the main post opened with the Ayn Rand anecdote.)
And I'm not saying that we should never have posts about Optimal Dieting on LessWrong. What good is all that rationality if it never leads us to anything optimal?
Nonetheless, the second Go stone placed to block the Objectivist Failure Mode is trying to define ourselves as a community around the cognitive algorithms; and trying to avoid membership tests (especially implicit de facto tests) that aren't about rational process, but just about some particular thing that a lot of us think is optimal.
Like, say, paleo-inspired diets.
Or having to love particular classical music composers, or hate dubstep, or something. (Does anyone know any good dubstep mixes of classical music, by the way?)
Admittedly, a lot of the utility in practice from any community like this one can and should come from sharing lifehacks. If you go around teaching people methods that they can allegedly use to distinguish good strange ideas from bad strange ideas, and there's some combination of successfully teaching Cognitive Art: Resist Conformity with the less lofty enhancer We Now Have Enough People Physically Present That You Don't Feel Nonconformist, that community will inevitably propagate what they believe to be good new ideas that haven't been mass-adopted by the general population.
When I saw that Patri Friedman was wearing Vibrams (five-toed shoes) and that William Eden (then Will Ryan) was also wearing Vibrams, I got a pair myself to see if they'd work. They didn't work for me, which thanks to Cognitive Art: Say Oops I was able to admit without much fuss; and so I put my athletic shoes back on again. Paleo-inspired diets haven't done anything discernible for me, but have helped many other people in the community. Supplementing potassium (citrate) hasn't helped me much, but works dramatically for Anna, Kevin, and Vassar. Seth Roberts's "Shangri-La diet", which was propagating through econblogs, led me to lose twenty pounds that I've mostly kept off, and then it mysteriously stopped working...
De facto, I have gotten a noticeable amount of mileage out of imitating things I've seen other rationalists do. In principle, this will work better than reading a lifehacking blog to whatever extent rationalist opinion leaders are better able to filter lifehacks - discern better and worse experimental evidence, avoid affective death spirals around things that sound cool, and give up faster when things don't work. In practice, I myself haven't gone particularly far into the mainstream lifehacking community, so I don't know how much of an advantage, if any, we've got (so far). My suspicion is that on average lifehackers should know more cool things than we do (by virtue of having invested more time and practice), and have more obviously bad things mixed in (due to only average levels of Cognitive Art: Resist Nonsense).
But strange-to-the-mainstream yet oddly-effective ideas propagating through the community is something that happens if everything goes right. The danger of these things looking weird... is one that I think we just have to bite the bullet on, though opinions on this subject vary between myself and other community leaders.
So a lot of real-world mileage in practice is likely to come out of us imitating each other...
And yet nonetheless, I think it worth naming and resisting that dark temptation to think that somebody can't be a real community member if they aren't eating beef livers and supplementing potassium, or if they believe in a collapse interpretation of QM, etcetera. If a newcomer also doesn't show any particular, noticeable interest in the algorithms and the process, then sure, don't feed the trolls. It should be another matter if someone seems interested in the process, better yet the math, and has some non-zero grasp of it, and is just coming to different conclusions than the local consensus.
Applied rationality counts for something, indeed; rationality that isn't applied might as well not exist. And if somebody believes in something really wacky, like Mormonism or that personal identity follows individual particles, you'd expect to eventually find some flaw in reasoning - a departure from the rules - if you trace back their reasoning far enough. But there's a genuine and open question as to how much you should really assume - how much would be actually true to assume - about the general reasoning deficits of somebody who says they're Mormon, but who can solve Bayesian problems on a blackboard, explain what Governor Earl Warren was doing wrong, and analyze the Amanda Knox case correctly. Robert Aumann (Nobel laureate Bayesian guy) is a believing Orthodox Jew, after all.
But the deeper danger isn't that of mistakenly excluding someone who's fairly good at a bunch of cognitive algorithms and still has some blind spots.
The deeper danger is in allowing your de facto sense of rationalist community to start being defined by conformity to what people think is merely optimal, rather than the cognitive algorithms and thinking techniques that are supposed to be at the center.
And then a purely metaphorical Ayn Rand starts kicking people out because they like suboptimal music. A sense of you-must-do-X-to-belong is also a kind of Authority.
Not all Authority is bad - probability theory is also a kind of Authority and I try to be ruled by it as much as I can manage. But good Authority should generally be modular; having a sweeping cultural sense of lots and lots of mandatory things is also a failure mode. This is what I think of as the core Objectivist Failure Mode - why the heck is Ayn Rand talking about music?
So let's all please be conservative about invoking the word 'rational', and try not to use it except when we're talking about cognitive algorithms and thinking techniques. And in general and as a reminder, let's continue exerting some pressure to adjust our intuitions about belonging-to-LW-ness in the direction of (a) deliberately not rejecting people who disagree with a particular point of mere optimality, and (b) deliberately extending hands to people who show respect for the process and interest in the algorithms even if they're disagreeing with the general consensus.
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: "The Fabric of Real Things"
Previous post: "Rationality: Appreciating Cognitive Algorithms"
Okay, read Taubes's article in the New York Times, "What if it's all been a big fat lie?". That article is ten years old; research has been published since then, but nothing that changes the basic conclusions.
I suggest reading it before the rest here!
The organizations are not "scientific." They are largely political creatures, and how they are funded can be an issue. If cholesterol is not the problem, what happens to the statin drug market? But I don't know that recommendations are driven by funding.
Taubes is a thorough science writer, a skeptic, and it is indeed science that he's interested in. He is not selling a diet.
Taubes covers the history of diet recommendations in the U.S. It's shocking.
Something brief: In 1957, the American Heart Association opposed Ancel Keys (the author of the epidemiological study that got the whole fat=bad thing going) with a 15-page report, saying there was no evidence for the fat/heart disease hypothesis. Less than four years later, a 2-page report from the AHA totally reversed that, and, according to Taubes, that report included a half-page of "recent scientific references on dietary fat and atherosclerosis," many of which contradicted the conclusions of the report, which recommended reducing the risk of heart disease by reducing dietary fat.
What happened? Did the science change that quickly? Read Taubes! (I.e., read the book, "Good Calories, Bad Calories." Taubes also has a recent book, less technical and more popular, I think, but I haven't read it.)
I could point to studies; the Atkins diet in particular has been studied independently, and it improves cardiac risk factors rather than making them worse. Yet it's a high-fat diet. So what is the risk?
Yes. I'm arguing against a commonly-recommended diet. I'm suggesting that relying on these agencies and their recommendations, without understanding the science, is very dangerous.
Taubes had written a book about salt, and while doing the research, he noticed nutritional "expert" after "expert" who had no clue how science works and whose reasoning was extremely poor and conclusion-driven. And he noticed the same when he started working on fat.
When I started reading in the field, out of personal necessity, I could see it myself, really poor "science" being commonly asserted as if it were simple fact.
Such as "a calorie is a calorie." I.e., it's said there is no difference between fat calories and carb calories, and the claim of Atkins that fat had a "metabolic advantage" was allegedly preposterous, this would supposedly violate the laws of thermodynamics.
However:
- Various foods take different amounts of energy to metabolize, and some calories are excreted.
- Food calories are not thermodynamic calories, and this is not merely the calorie/kilocalorie distinction: they are adjusted by metabolic factors estimated from studies done about a century ago, which may not be accurate under various dietary conditions.
- Carb metabolism (burning glucose) runs the body in a different way than fat metabolism does, and has different behavioral effects. Appetite shifts (fat generally suppresses appetite).
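To make the first point concrete, here is a minimal back-of-the-envelope sketch in Python. It is purely illustrative: the thermic-effect percentages are rough textbook midpoints I've assumed, not figures from Taubes, and the two meal compositions are invented. It shows only how two meals with equal gross calories can differ in net usable calories:

```python
# Illustrative only: net usable energy after the thermic effect of food (TEF).
# Assumed TEF midpoints: protein ~25%, carbs ~7%, fat ~2% of gross kcal
# spent on digestion/metabolism; real values vary by person and by study.

KCAL_PER_GRAM = {"protein": 4.0, "carb": 4.0, "fat": 9.0}  # Atwater factors
TEF = {"protein": 0.25, "carb": 0.07, "fat": 0.02}

def gross_and_net_kcal(grams):
    """Return (gross, net) kcal for a meal given grams of each macronutrient."""
    gross = sum(KCAL_PER_GRAM[m] * g for m, g in grams.items())
    net = sum(KCAL_PER_GRAM[m] * g * (1 - TEF[m]) for m, g in grams.items())
    return gross, net

# Two hypothetical meals, both roughly 600 kcal gross:
meals = {
    "high-carb": {"protein": 15, "carb": 120, "fat": 6.7},
    "high-fat":  {"protein": 15, "carb": 15,  "fat": 53.3},
}

for name, grams in meals.items():
    gross, net = gross_and_net_kcal(grams)
    print(f"{name}: {gross:.0f} kcal gross -> {net:.0f} kcal net after TEF")
```

Under these assumed numbers the two ~600 kcal meals come out roughly 20 kcal apart in net energy. That demonstrates only that "gross calories in" need not equal "net calories available"; which direction the effect runs for real diets, and how large it is, is precisely what the dietary debate is about.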
There never was good evidence that saturated fats increased cardiovascular risk; that was speculation from the highly flawed Keys study. The thinking was: "Well, to really know would take very expensive trials, and we can't do that, so why not reduce fat? It can't hurt!"
But it could and probably did hurt. Lower the fat in the diet and you almost certainly raise the carbs, quite possibly increasing obesity, diabetes, and heart disease; apparently there is an effect on cancer as well.
Bottom line: the officially recommended diets have very little science behind them.
This really is not the place to debate the issue. Read the literature! Taubes is an excellent door into it; the book GCBC is about one-fourth footnotes.
Or look at the Wikipedia article Saturated fat and cardiovascular disease controversy. (Do not trust Wikipedia articles to be neutral. They frequently are not. Use them to find other sources.)
It's tempting to sit back and trust the official organizations. It's a lot of work to actually read the evidence. However, is this important?
I thought it was - like, my life depends on it.
The AHA is a $600 million/year organization. If the fat/heart disease hypothesis is as wrong as it appears to be, they may have cost Americans, in damage to health, a great deal more than that. Now, consider what we know about human organizations. When they get it spectacularly wrong, but before there is absolute proof, do they back up easily?
No. Their business is to be the experts, remember that $600 million per year.
This pattern-matches exactly to everything else conspiracy-theory-related I have ever read, and by that I mean it misinterprets the relative incentives. You speak of organizations that apparently face financial loss if they turn out to be wrong, but you provide no convincing reason why they would lose funding if they revised their positions due to new evidence. You also don't mention the huge profits an organization would surely make if it provided compelling evidence for how to actually lower the risk of the largest cause of death in the United States...