Rationality is pretty great. Just not quite as great as everyone here seems to think it is.
The folks most vocal about loving "truth" are usually selling something. For preachers, demagogues, and salesmen of all sorts, the wilder their story, the more they go on about how they love truth...
The people who just want to know things because they need to make important decisions, in contrast, usually say little about their love of truth; they are too busy trying to figure stuff out.
-Robin Hanson, "Who Loves Truth Most?"
A couple weeks ago, Brienne made a post on Facebook that included this remark: "I've also gained a lot of reverence for the truth, in virtue of the centrality of truth-seeking to the fate of the galaxy." But then she edited to add a footnote to this sentence: "That was the justification my brain originally threw at me, but it doesn't actually quite feel true. There's something more directly responsible for the motivation that I haven't yet identified."
I saw this, and commented:
<puts rubber Robin Hanson mask on>
What we have here is a case of subcultural in-group signaling masquerading as something else. In this case, proclaiming the vital importance of truth-seeking is a mark of your subculture. In reality, the truth is sometimes really important, but sometimes it isn't.
</rubber Robin Hanson mask>
In spite of the distancing pseudo-HTML tags, I actually believe this. When I read some of the more extreme proclamations of the value of truth that float around the rationalist community, I suspect people are doing in-group signaling—or perhaps conflating their own idiosyncratic preferences with rationality. As a mild antidote to this, when you hear someone talking about the value of the truth, try seeing if the statement still makes sense if you replace "truth" with "information."
This test gives its stamp of approval to many statements about the value of truth. After all, information is pretty damn valuable. But statements like "truth seeking is central to the fate of the galaxy" look a bit suspicious. Is information-gathering central to the fate of the galaxy? You could argue that statement is kinda true if you squint at it right, but really it's too general. Surely it's not just any information that's central to shaping the fate of the galaxy, but information about specific subjects, and even then there are tradeoffs to make.
This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism." The "rationalism" branding encourages the meme that truth-seeking is great, so we should do lots and lots of it, because truth is so great. The effective altruism movement, on the other hand, recognizes that while gathering information about the effectiveness of various interventions is important, there are tradeoffs to be made between spending time and money on gathering information vs. just doing whatever currently seems likely to have the greatest direct impact. Recognize that information is valuable, but avoid analysis paralysis.
Or, consider statements like:
- Some truths don't matter much.
- People often have legitimate reasons for not wanting others to have certain truths.
- The value of truth often has to be weighed against other goals.
Do these statements sound heretical to you? But what about:
- Information can be perfectly accurate and also worthless.
- People often have legitimate reasons for not wanting other people to gain access to their private information.
- A desire for more information often has to be weighed against other goals.
I struggled to write the first set of statements, though I think they're right on reflection. Why do they sound so much worse than the second set? Because the word "truth" carries powerful emotional connotations that go beyond its literal meaning. This isn't just true for rationalists—there's a reason religions have sayings like, "God is Truth" or "I am the way, the truth, and the life." "God is Facts" or "God is Information" don't work so well.
There's something about "truth"—how it readily acts as an applause light, a sacred value which must not be traded off against anything else. As I type that, a little voice in me protests "but truth really is sacred"... but if we can't admit there's some limit to how great truth is, hello affective death spiral.
Consider another quote, from Steven Kaas, that I see frequently referenced on LessWrong: "Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever." Interestingly, the original blog post included a caveat—"we may have to count everyday social interactions as a partial exception"—which I never see quoted. That aside, the quote has always bugged me. I've never had my tires slashed, but I imagine it ruins your whole day. On the other hand, having less than maximally accurate beliefs about something could ruin your whole day, but it could very easily not, depending on the topic.
Furthermore, sometimes sharing certain information doesn't just have little benefit, it can have substantial costs, or at least substantial risks. It would seriously trivialize Nazi Germany's crimes to compare it to the current US government, but I don't think that means we have to promote maximally accurate beliefs about ourselves to the folks at the NSA. Or, when negotiating over the price of something, are you required to promote maximally accurate beliefs about the highest price you'd be willing to pay, even if the other party isn't willing to reciprocate and may respond by demanding that price?
Private information is usually considered private precisely because it has limited benefit to most people, but sharing it could significantly harm the person whose private information it is. A sensible ethic around information needs to be able to deal with issues like that. It needs to be able to deal with questions like: is this information that is in the public interest to know? And is there a power imbalance involved? My rule of thumb is: secrets kept by the powerful deserve extra scrutiny, but so conversely do their attempts to gather other people's private information.
"Corrupted hardware"-type arguments can suggest you should doubt your own justifications for deceiving others. But parallel arguments suggest you should doubt your own justifications for feeling entitled to information others might have legitimate reasons for keeping private. Arguments like, "well truth is supremely valuable," "it's extremely important for me to have accurate beliefs," or "I'm highly rational so people should trust me" just don't cut it.
Finally, being rational in the sense of being well-calibrated doesn't necessarily require making truth-seeking a major priority. Using the evidence you have well doesn't necessarily mean gathering lots of new evidence. Often, the alternative to knowing the truth is not believing falsehood, but admitting you don't know and living with the uncertainty.
The title of this post isn't a typo—its purpose is to ask how we can effectively do fundraising and movement-building for the effective altruism movement. This is an important question, because the return on these activities is potentially very high. As Robert Wiblin wrote on the topic of fundraising over a year ago:
GiveWell’s charity recommendations – currently Against Malaria Foundation, GiveDirectly and the Schistosomiasis Control Initiative – are generally regarded as the most reliable in their field. I imagine many readers here donate to these charities. This makes it all the more surprising that it should be pretty easy to start a charity more effective than any of them.
All you would need to do is found an organisation that fundraises for whoever GiveWell recommends, and raises more than a dollar with each dollar it receives. Is this hard? Probably not. As a general rule, a dollar spent on fundraising seems to raise at least several dollars.
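Wiblin's break-even logic above can be sketched as a toy calculation. This is a hypothetical illustration, not real charity data; the donation amounts and multipliers are made-up assumptions:

```python
# Toy sketch of the fundraising-multiplier argument.
# All numbers here are illustrative assumptions, not real charity figures.

def dollars_moved(donation, multiplier):
    """Dollars ultimately moved to the recommended charity when each
    dollar spent on fundraising raises `multiplier` dollars."""
    return donation * multiplier

# Giving $1,000 directly moves $1,000 (multiplier of 1).
direct = dollars_moved(1000, 1.0)

# Funding a fundraiser where $1 raises $3 moves $3,000.
via_fundraiser = dollars_moved(1000, 3.0)

print(direct, via_fundraiser)  # 1000.0 3000.0
```

The argument only requires the multiplier to exceed 1 for the fundraising organization to beat direct donation; the "several dollars per dollar" figure Wiblin cites would make the gap substantial if it held.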
Similarly, a more recent post at the 80,000 Hours blog asked "What cause is most effective?" and ended up concluding that "promoting effective altruism" was tied with "prioritization research" for the currently most effective cause. According to 80,000 Hours:
Promoting effective altruism is effective because it’s a flexible multiplier on the next most high-priority cause. It’s important because we expect the most high-priority areas to change a great deal, so it’s good to build up general capabilities to take the best opportunities as they are discovered. Moreover, in the recent past, investing in promoting effective altruism has resulted in significantly more resources being invested in the most high-priority areas, than investing in them directly. For instance, for every US$1 invested in GiveWell and Giving What We Can, more than $7 have been moved to high-priority interventions.
However, there are a number of questions to ask about this: for example, can we trust 80,000 Hours' estimates of the multiplier on giving to GiveWell and GWWC? Might other organizations (such as the Centre for Effective Altruism, which is behind 80,000 Hours) be more effective at movement-building?
One interesting question is whether, from a movement-building perspective, it might make sense to (1) donate to an organization that does both movement-building / cause-prioritization and grants to object-level useful things (as GiveWell does), or (2) split your donation between an organization that does movement building and an organization that does object-level useful things. The rationale for this, particularly (2), is that donating exclusively to movement-building might not be the best thing for movement building, for a number of reasons:
- Donating exclusively to organizations focused on movement-building might hamper your ability to evangelize for effective altruism—people would quite justifiably be suspicious of an effective altruism movement that was too focused on movement-building.
- Similarly, from the point of view of the movement as a whole, people's justifiable suspicions of an EA movement too focused on movement building might lead to such a movement growing more slowly than an EA movement that was less focused on movement building.
- As for why the suspicions in the previous two points are justified: even if an EA movement that was very focused on movement-building grew faster than one less focused on movement building, it could easily grow into the wrong kind of movement—one only good at self-promotion, not doing object-level useful things.
- Concrete successes by EA-backed charities may itself be very valuable for helping build the EA movement.
Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people's unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I've started to worry.
Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.
Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear "rationality" degenerating into signaling games.
In a recent discussion of steelmanning, I observed that few people around here seem interested in steelmanning young earth creationism. In the resulting subthread, someone suggested Yvain's steelman of the Time Cube as illustrating what a steelman of young earth creationism might look like. But steelmanning the Time Cube may be too much of a stretch: it's a little too incoherent to steelman effectively without changing it into something else entirely. In contrast, once I thought about it, it wasn't hard to come up with some ways to steelman young earth creationism in a way that was very much in keeping with the spirit of real young earth creationist writings.
The way to do it, I think, is to approach it from a philosophy of science angle. For example, here's a quote from an article titled "Creation: 'Where's the Proof?'" on Answers in Genesis, a website run by young earth creationist Ken Ham (who recently debated science guy Bill Nye):
Creationists and evolutionists, Christians and non-Christians all have the same evidence—the same facts. Think about it: we all have the same earth, the same fossil layers, the same animals and plants, the same stars—the facts are all the same.
The difference is in the way we all interpret the facts. And why do we interpret facts differently? Because we start with different presuppositions. These are things that are assumed to be true, without being able to prove them. These then become the basis for other conclusions. All reasoning is based on presuppositions (also called axioms). This becomes especially relevant when dealing with past events.
You find a lot of this kind of thing on Answers in Genesis. For example, they're willing to concede that on certain assumptions, radiometric dating is a strong argument that the earth is a lot more than 10,000 years old, but they deny that they need to accept those assumptions.
I honestly don't think it's much of a stretch to steelman this into something that would look a lot like some of the things the philosophers of science I studied in graduate school said. I started writing a long post spelling this out, but then I started worrying I was going too far in playing devil's advocate for creationism (even for an exercise in exploring the weaknesses of steelmanning). So instead, I'll just mention some places to look for material in such a project: the Duhem-Quine thesis, confirmation holism, underdetermination of scientific theory. Fun fact: Pierre Duhem regarded the existence of atoms as a metaphysical question that could not be settled by experiment, and this has not stopped him from being regarded as an important contributor to philosophy of science. I suppose Feyerabend belongs on the list too, but that's almost too easy (even though, yes, I did have to study Feyerabend in grad school).
Oh, and I could even find material for my steelmanning of young earth creationism in the writings of Robert Pennock, a philosopher of science who testified against Intelligent Design at the Dover trial. Some philosophers, while thinking creationism is dead wrong, have criticized the reasoning used in that and other court decisions that have kept anti-evolutionism out of public schools in the US. Pennock wrote a response, titled "Can’t philosophers tell the difference between science and religion?" where he argued, among other things, that methodological naturalism (MN) is essential to science and supernatural claims are inherently untestable. A relevant quote:
The second misunderstanding arises in a different way, with ID proponents and even some opponents... claiming that science can indeed test the supernatural... For instance, both Laudan and Quinn cite the young-earth creationist view that God created the earth 6,000 to 10,000 years ago as a hypothesis that is testable and found to be false. But this and other examples that are offered to show the possibility of tests of the supernatural invariably build in naturalistic assumptions that creationists do not share... The point here is that we cannot overlook or ignore, as Laudan and company regularly do, the fact that creationists have a fundamentally different notion from science of what constitutes proper evidential grounds for warranted belief. The young earth view is certainly disconfirmed if we are considering matters under MN, but if one takes the supernatural aspect of the claim seriously, then one loses any ground upon which to test the claim.
So on Pennock's view, testing young-earth creationism and thereby demonstrating it to be false is not possible without relying on naturalistic assumptions. This creates an opening for the creationist to question whether science needs to rely on naturalistic assumptions, and argue that one could create an equally valid version of science based on (fundamentalist) Christian assumptions.
You might conclude Pennock is wrong about this, and young-earth creationism really has been straightforwardly refuted by science, but this creates a different opening for the creationist: argue that if Pennock is wrong, his ideas really shouldn't be the basis of court decisions about whether creationism can be taught in public schools. Evolutionists could respond by arguing for some other philosophical basis for rejecting creationism, but then they'd probably have to make some contentious philosophical claims and we shouldn't be determining what children can learn based on contentious philosophical claims either.
An argument along the above lines could also be used for a different purpose, by someone who rejected creationism but wanted to make a show of being fair-minded towards their opponents and generally more rational than most of their peers. The thing to do is to say that while young earth creationism can be decisively refuted, most scientists and philosophers botch the philosophy required to do that, and this indicates their rejection of creationism is mostly tribalistic, and the young-earth creationists don't actually come out looking so bad by comparison.
Speaking as Chris Hallquist and not some hypothetical alter-ego, I think that if a philosophy of science makes young earth creationism come out looking good, that's a reductio for that philosophy. I think educated people who reject young earth creationism are generally rational to do so, even if their philosophy isn't that hot. Still, I'd be curious to know what else people on LessWrong can come up with in the way of steelmanning young earth creationism.
Background: As can be seen from some of the comments on this post, many people in the LessWrong community take an extreme stance on lying. A few days before I posted this, I was at a meetup where we played the game Resistance, and one guy announced before the game began that he had a policy of never lying, even when playing games like that. It's such members of the LessWrong community that this post was written for. I'm not trying to tell basically honest people who hold the normal view of white lies that they need to give up being basically honest.
Mr. Potter, you sometimes make a game of lying with truths, playing with words to conceal your meanings in plain sight. I, too, have been known to find that amusing. But if I so much as tell you what I hope we shall do this day, Mr. Potter, you will lie about it. You will lie straight out, without hesitation, without wordplay or hints, to anyone who asks about it, be they foe or closest friend. You will lie to Malfoy, to Granger, and to McGonagall. You will speak, always and without hesitation, in exactly the fashion you would speak if you knew nothing, with no concern for your honor. That also is how it must be.
- Rational!Quirrell, Harry Potter and the Methods of Rationality
This post isn't about HPMoR, so I won't comment on the fictional situation the quote comes from. But in many real-world situations, it's excellent advice.
If you're a gay teenager with homophobic parents, and there's a real chance they'd throw you out on the street if they found out you were gay, you should probably lie to them about it. Even in college, if you're still financially dependent on them, I think it's okay to lie. The minute you're no longer financially dependent on them, you should absolutely come out for your sake and the sake of the world. But it's OK to lie if you need to to keep your education on-track.
Oh, maybe you could get away with just shutting up and hoping the topic doesn't come up. When asked about dating, you could try to evade while being technically truthful: "There just aren't any girls at my school I really like." "What about _____? Why don't you ask her out?" "We're just friends." That might work. But when you're asked directly "are you gay?" and the wrong answer could seriously screw up your life, I wouldn't bet too much on your ability to "lie with truths," as Quirrell would say.
I start with this example because the discussions I've seen on the ethics of lying on LessWrong (and everywhere, actually) tend to focus on the extreme cases: the now-cliché "Nazis at the door" example, or even discussion of whether you'd lie with the world at stake. The "teen with homophobic parents" case, on the other hand, might have actually happened to someone you know. But even this case is extreme compared to most of the lies people tell on a regular basis.
Widely-cited statistics claim that the average person lies once per day. I recently saw a new study (that I can't find at the moment) that disputed this, and claimed most people lie rather less often than that, but it still found most people lie fairly often. These lies are mostly "white lies" to, say, spare others' feelings. Most people have no qualms about those kinds of lies. So why do discussions of the ethics of lying so often focus on the extreme cases, as if those were the only ones where lying is maybe possibly morally permissible?
At LessWrong there've been discussions of several different views all described as "radical honesty." No one I know of, though, has advocated Radical Honesty as defined by psychotherapist Brad Blanton, which (among other things) demands that people share every negative thought they have about other people. (If you haven't already, I recommend reading A. J. Jacobs on Blanton's movement.) While I'm glad no one here thinks Blanton's version of radical honesty is a good idea, a strict no-lies policy can sometimes have effects that are just as disastrous.
A few years ago, for example, I went to see the play my girlfriend had done stage crew for, and she asked what I thought of it. She wasn't satisfied with my initial noncommittal answers, so she pressed for more. Not in a "trying to start a fight" way; I just wasn't doing a good job of being evasive. I eventually gave in and explained why I thought the acting had sucked, which did not make her happy. I think incidents like that must have contributed to our breaking up shortly thereafter. The breakup was a good thing for other reasons, but I still regret not lying to her about what I thought of the play.
Yes, there are probably things I could've said in that situation that would have been not-lies and also would have avoided upsetting her. Sam Harris, in his book Lying, spends a lot of the book arguing against lying in that way: he takes situations where most people would be tempted to tell a white lie and suggests ways around it. But for that to work, you need to be good at striking the delicate balance between saying too little and saying too much, and at framing hard truths diplomatically. Are people who lie because they lack that skill really less moral than people who are able to avoid lying because they have it?
Notice the signaling issue here: Sam Harris' book is a subtle brag that he has the skills to tell people the truth without too much backlash. This is especially true when Harris gives examples from his own life, like the time he told a friend, "No one would ever call you 'fat,' but I think you could probably lose twenty-five pounds," and his friend went and did it rather than getting angry. Conspicuous honesty also overlaps with conspicuous outrage, the signaling move that announces (as Steven Pinker put it) "I'm so talented, wealthy, popular, or well-connected that I can afford to offend you."
If you're highly averse to lying, I'm not going to spend a lot of time trying to convince you to tell white lies more often. But I will implore you to do one thing: accept other people's right to lie to you. About some topics, anyway. Accept that some things are none of your business, and sometimes that includes the fact that there's something which is none of your business.
Or: suppose you ask someone for something, they say "no," and you suspect their reason for saying "no" is a lie. When that happens, don't get mad or press them for the real reason. Among other things, they may be operating on the assumptions of guess culture, where your request means you strongly expected a "yes" and you might not think their real reason for saying "no" was good enough. Maybe you know you'd take an honest refusal well (even if it's "I don't want to and don't think I owe you that"), but they don't necessarily know that. And maybe you think you'd take an honest refusal well, but what if you're lying to yourself?
If it helps to be more concrete: some men will react badly to being turned down for a date. Some women will too, but probably more men, so I'll make this gendered. Also, dealing with someone who won't take "no" for an answer is a scarier experience when the asker is a man and the person saying "no" is a woman. So I sympathize with women who give made-up reasons for saying "no" to dates, to make saying "no" easier.
Is it always the wisest decision? Probably not. But sometimes, I suspect, it is. And I'd advise men to accept that women doing that is OK. Not only that, I wouldn't want to be part of a community with lots of men who didn't get things like that. That's the kind of thing I have in mind when I say to respect other people's right to lie to you.
All this needs the disclaimer that some domains should be lie-free zones. I value the truth and despise those who would corrupt intellectual discourse with lies. Or, as Eliezer once put it:
We believe that scientists should always tell the whole truth about science. It's one thing to lie in everyday life, lie to your boss, lie to the police, lie to your lover; but whoever lies in a journal article is guilty of utter heresy and will be excommunicated.
I worry this post will be dismissed as trivial. I simultaneously worry that, even with the above disclaimer, someone is going to respond, "Chris admits to thinking lying is often okay, now we can't trust anything he says!" If you're thinking of saying that, that's your problem, not mine. Most people will lie to you occasionally, and if you get upset about it you're setting yourself up for a lot of unhappiness. And refusing to trust someone who lies sometimes isn't actually very rational; all but the most prolific liars don't lie anything like half the time, so what they say is still significant evidence, most of the time. (Maybe such declarations-of-refusal-to-trust shouldn't be taken as arguments so much as threats meant to coerce more honesty than most people feel bound to give.)
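The claim that an occasional liar's statements remain significant evidence can be made precise with a toy Bayesian calculation. This is a simplified sketch with made-up numbers, modeling a speaker who asserts some claim X and lies at a fixed rate regardless of whether X is true:

```python
# Toy Bayesian model: how much should an assertion from someone who
# sometimes lies shift your belief? The lie rate and prior below are
# illustrative assumptions, not data.

def posterior_given_assertion(prior, lie_rate):
    """P(X | speaker asserts X), assuming the speaker reports truthfully
    with probability 1 - lie_rate whether or not X holds."""
    p_assert_if_true = 1 - lie_rate    # honest report when X is true
    p_assert_if_false = lie_rate       # lie when X is false
    numerator = prior * p_assert_if_true
    denominator = numerator + (1 - prior) * p_assert_if_false
    return numerator / denominator

# Someone who lies 20% of the time still moves a 50% prior up to 80%.
print(posterior_given_assertion(0.5, 0.2))  # 0.8

# Only at a 50% lie rate does their testimony carry zero evidence.
print(posterior_given_assertion(0.5, 0.5))  # 0.5
```

In this model the likelihood ratio is (1 - lie_rate) / lie_rate, which stays well above 1 for anyone who lies much less than half the time, matching the point in the paragraph above.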
On the other hand, if we ever meet in person, I hope you realize I might lie to you. Failure to realize a statement could be a white lie can create some terribly awkward situations.
Edits: Changed title, added background, clarified the section on accepting other people's right to lie to you (partly cutting and pasting from this comment).
Edit round 2: Added link to paper supporting claim that the average person lies once per day.
There's been some discussion of how to improve the structure of LessWrong at the site software level - for example adding subreddits or modifying how main and discussion work. One roadblock to this that's been mentioned is a shortage of programmer hours. I'd like to volunteer mine.
I recently finished a course on web development in which, among other things, I built a Reddit clone using Ruby on Rails and Backbone.js. It's been several months since I've written any Python, and I'm somewhat wary of the time required to get familiar with the LessWrong codebase, but I think the time would be worth it for me: it could potentially improve LessWrong a lot, and it would let me tick off my "have contributed to an open source project" box.
Of course, before any of that happens, there needs to be some agreement on what changes we think would be a good idea. So... discuss.
EDIT: For context, it's been suggested that part of the benefit of subforums is it could defuse debates over "what topics are appropriate for LessWrong." We could even have an "off-topic" subforum, a common feature of online discussion forums - I think bringing the format of LessWrong more into line with what's standard on other websites could help newbies be less confused here.
There are some things money can't buy. They are the exceptions that prove the rule.
For the pedants, to say something is an exception that proves the rule is to say that when you look at the exceptions, they're so unusual that it reinforces the point that the rule is generally valid even though it isn't universally valid. In the case of money, there's a reason people don't say things like "there are some things hand-knit scarves can't be bartered for" or "Hand-knit scarves can't be bartered for happiness."
Eliezer once described the sequences as the letter he wishes he could have written to his former self. When I think of the letter I wish I could write to my former self, the value of money is at the top of the list of things I'd include.
You can give a cynical, Hansonian explanation of why we don't tell young people enough about the awesomeness of money, and I suppose there'd be some truth to it. But I'm not sure that was my main problem. Growing up, my dad spent a lot of time urging me to go into a high-paying career, to the point of giving me advice on what medical specialty to go into. He just didn't do a great job of selling me on it. It wasn't until I learned some economics that I really came to understand why money is so awesome.
On the most recent LessWrong readership survey, I assigned a probability of 0.30 on the cryonics question. I had previously been persuaded to sign up for cryonics by reading the sequences, but this thread and particularly this comment lowered my estimate of the chances of cryonics working considerably. Also relevant from the same thread was ciphergoth's comment:
By and large cryonics critics don't make clear exactly what part of the cryonics argument they mean to target, so it's hard to say exactly whether it covers an area of their expertise, but it's at least plausible to read them as asserting that cryopreserved people are information-theoretically dead, which is not guesswork about future technology and would fall under their area of expertise.
Based on this, I think there's a substantial chance that there's information out there that would convince me that the folks who dismiss cryonics as pseudoscience are essentially correct, that the right answer to the survey question was epsilon. I've seen what seem like convincing objections to cryonics, and it seems possible that an expanded version of those arguments, with full references and replies to pro-cryonics arguments, would convince me. Or someone could just go to the trouble of showing that a large majority of cryobiologists really do think cryopreserved people are information-theoretically dead.
However, it's not clear to me how well worth my time it is to seek out such information. It seems coming up with decisive information would be hard, especially since e.g. ciphergoth has put a lot of energy into trying to figure out what the experts think about cryonics and come away without a clear answer. And part of the reason I signed up for cryonics in the first place is because it doesn't cost me much: the largest component is the life insurance for funding, only $50 / month.
So I've decided to put a bounty on being persuaded to cancel my cryonics subscription. If no one succeeds in convincing me, it costs me nothing, and if someone does succeed in convincing me the cost is less than the cost of being signed up for cryonics for a year. And yes, I'm aware that providing one-sided financial incentives like this requires me to take the fact that I've done this into account when evaluating anti-cryonics arguments, and apply extra scrutiny to them.
Note that while there are several issues that ultimately go into whether you should sign up for cryonics (the neuroscience / evaluation of current technology, estimate of the probability of a "good" future, various philosophical issues), I anticipate the greatest chance of being persuaded by scientific arguments. In particular, I find questions about personal identity and consciousness of uploads made from preserved brains confusing, but think there are very few people in the world, if any, who are likely to have much chance of getting me un-confused about those issues. The offer is blind to the exact nature of the arguments given, but I mostly foresee being persuaded by the neuroscience arguments.
And of course, I'm happy to listen to people tell me why the anti-cryonics arguments are wrong and I should stay signed up for cryonics. There's just no prize for doing so.
If you've been wondering what these posts are doing on LessWrong and you haven't read this comment yet, I urge you to do so. Thanks to commenter FiftyTwo for suggesting I say something like this.
To recap: taking in more calories than you burn will cause you to gain weight, though calorie intake and expenditure are in turn controlled by a number of mechanisms. This suggests a couple of options for losing weight. You can try to intervene directly in the mechanisms controlling food intake, one of the most well-known examples being gastric bypass surgery, admittedly a bit of a drastic option. But intervening at the point of calorie intake is also an option.
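The calorie arithmetic behind this can be sketched with the commonly cited (and only approximate) rule of thumb that a pound of body fat corresponds to roughly 3,500 kcal. A minimal sketch, assuming that rule of thumb; the function name and numbers are illustrative:

```python
# Rough energy-balance arithmetic, using the common (approximate)
# rule of thumb that a ~3500 kcal deficit corresponds to ~1 lb of fat lost.
KCAL_PER_LB = 3500

def weeks_to_lose(pounds, daily_deficit_kcal):
    """Naive estimate: ignores the body's compensating mechanisms,
    which is exactly why real-world results diverge from this math."""
    total_deficit_kcal = pounds * KCAL_PER_LB
    days = total_deficit_kcal / daily_deficit_kcal
    return days / 7

# Eating 100 kcal/day less would, on paper, take about five weeks per pound:
print(weeks_to_lose(1, 100))  # -> 5.0
```

On paper, that is; as the rest of this post argues, the hard part is that intake and expenditure adjust in response to dieting, so the naive projection rarely holds over the long term.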
Now it turns out that it's relatively easy to lose weight by dieting. The catch is that it's much harder to keep the weight off. A commonly cited rule (for example here) is that most people who lose weight through dieting will regain it all within five years. However, it's important to emphasize that some people do lose weight through dieting and keep it off long-term. An organization called the National Weight Control Registry has made an effort to track those people, and has published quite a few studies based on its work (many of which can be easily found through Google Scholar).
Unfortunately, the NWCR is working with a self-selected sample and asking them what they did after the fact. They're not randomly assigning people to treatments. So for example, a high percentage of the NWCR group reports successful long-term weight loss following low-fat and/or calorie-restricted diets and exercising a lot. And the percentage following low-carb diets was originally small, but it's risen over time. But both of these observations may just reflect the relative popularity of those approaches in the general population.
We may not be able to conclude anything more from the NWCR data than that a significant minority of dieters do succeed at long-term weight loss, some through calorie-restricted diets, some through low-fat diets, and some through low-carb diets. Remember, though, that as discussed in previous posts there's little reason to think low-fat or low-carb diets could cause weight loss except by indirectly affecting energy balance.
And now, one last time, I'm going to talk about what Taubes has to say about this issue. I'm going to quote from Why We Get Fat (pp. 36-38), though Good Calories, Bad Calories contains similar comments, including about the Handbook of Obesity and Joslin's. Taubes begins by citing a review article covering calorie-restricted diets that found that "Typically, nine or ten pounds are lost in the first six months. After a year, much of what was lost has been regained." He also cites a large study that tested a calorie-restricted diet and reached a similar conclusion: participants "lost on average, only nine pounds. And once again... most of the nine pounds came off in the first six months, and most of the participants were gaining weight back after a year."
Based on this, he concludes that "Eating less—that is, undereating—simply doesn't work for more than a few months, if that." Then it's time to really lay into mainstream nutrition science:
This reality, however, hasn't stopped the authorities from recommending the approach, which makes reading such recommendations an exercise in what psychologists call "cognitive dissonance," the tension that results from trying to hold two incompatible beliefs simultaneously.
Take, for instance, the Handbook of Obesity, a 1998 textbook edited by three of the most prominent authorities in the field—George Bray, Claude Bouchard, and W. P. T. James. "Dietary therapy remains the cornerstone of treatment and the reduction of energy intake continues to be the basis of successful weight reduction programs," the book says. But then it states, a few paragraphs later, that the results of such energy-reduced diets "are known to be poor and not long-lasting." So why is such an ineffective therapy the cornerstone of treatment? The Handbook of Obesity neglects to say.
The latest edition (2005) of Joslin's Diabetes Mellitus, a highly respected textbook for physicians and researchers, is a more recent example of this cognitive dissonance. The chapter on obesity was written by Jeffrey Flier, an obesity researcher who is now dean of Harvard Medical School, and his wife and research colleague, Terry Maratos-Flier. The Fliers also describe "reduction of caloric intake" as "the cornerstone of any therapy for obesity." But then they enumerate all the ways in which this cornerstone fails. After examining approaches from the most subtle reductions in calories (eating, say, one hundred calories less each day with the hope of losing a pound every five weeks) to low-calorie diets of eight hundred to one thousand calories a day to very low-calorie diets (two hundred to six hundred calories) and even total starvation, they conclude that "none of these approaches has any proven merit."
But look at the actual sources and it turns out that, surprise surprise, mainstream experts aren't idiots after all. The second quote from the Handbook of Obesity comes from a paragraph explaining that given how hard obesity is to treat, doctors face a "Shakespearean" dilemma of whether to attempt to treat it at all. The Joslin's article is even clearer (p. 541, emphasis added):
Successful treatment of obesity, defined as treatment that results in sustained attainment of normal body weight and composition without producing unacceptable treatment induced morbidity, is rarely achievable in clinical practice. Many therapeutic approaches can bring about short-term weight loss, but long-term success is infrequent regardless of the approach.
Suppose for a moment that this is true, that long-term weight loss is rare regardless of the approach. If it is, no "cognitive dissonance" is required to recommend treatments that sometimes work. Furthermore, Taubes commits a serious misrepresentation here: his final quote from the Joslin's article, in context, reads, "There are also many programs that recommend specific food combinations or unusual sequences for eating, but none of these approaches has any proven merit." It's pretty obvious in context that the bit Taubes quotes refers only to the programs that recommend specific food combinations or unusual sequences for eating.
It's also worth mentioning that neither of these sources ignores the debate over low-carb diets. The Handbook of Obesity criticizes Atkins-style low-carb diets at some length, but also says that, "Moderate restriction of carbohydrates may have real calorie-reducing properties." And the Joslin's article ends up being fairly positive towards low-carb diets in general (p. 542):
Dietary composition may play a role in long-term success in weight loss and weight maintenance. For example, a study comparing a moderate-fat diet consisting of 35% energy from fat and a low-fat diet in which 20% of energy was derived from fat demonstrated enhanced weight loss assessed by total weight loss, BMI change, and decrease in waist circumference in the group on the moderate-fat diet. Retention in the diet study was greater among those actively participating in the weight loss program in this group compared with 20% in the low-fat diet group.
Recently, increased interest has focused on the possibility that diet content may affect appetite. For example, diets with a low glycemic index may be useful in preventing the development of obesity; subjects given test meals with different glycemic indexes and then allowed free access to food ate less after eating meals with a low glycemic index. Some data suggest that diets with a high glycemic index predispose to increased postprandial hunger, whereas diets focused on glycemic index and information regarding portion control lead to higher rates of success in weight loss, at least among adolescent populations. Low-carbohydrate diets such as the Atkins diet appear to be associated with significant weight loss. However, this diet has not been systematically studied, nor has long-term maintenance of weight loss.
I assume the author of the Joslin's article would say, however, that low-carb diets haven't been shown to completely solve the problem of long-term weight loss being really hard. But would they be right about that?
To the best of my knowledge, there have been only two randomized, controlled trials of low-carb diets that have covered a period of two years (and none covering a longer period than that). Taubes has cited both in support of his claims. The first, an Israeli study published in 2008, also included a group assigned to a Mediterranean diet. Here are the results in terms of weight loss:
So on the one hand, subjects on the low-carb diet did initially lose more weight, about 6.5 kg (14 lbs.) compared to about 4.5 kg (10 lbs.) for the low-calorie diet. On the other hand, both groups started regaining the weight after six months. If, as Taubes claims, data like this shows that low-calorie diets "simply doesn't work for more than a few months," does this data justify saying the same thing about low-carb diets?
Furthermore, if you believe the rule about weight lost to dieting coming back in five years, it seems likely that would happen to both groups. Intriguingly, though, while participants on the Mediterranean diet didn't initially lose as much weight as those on the low-carb diet, the weight regain didn't seem to happen as much on the Mediterranean diet. That makes me wonder what a five-year study of the Mediterranean diet would find.
Note that the Israeli study also found that participants in all three groups significantly reduced their caloric intake, supporting the hypothesis that even diets that don't explicitly restrict calorie intake work by reducing calorie intake indirectly.
What about the other study, published in 2010, which Taubes has hailed as "the biggest study so far on low-carb diets"? Here are its results (note that the low-fat diet was also a calorie-restricted diet):
That's right, this study found no statistically significant difference between low-fat and low-carb diets in terms of weight loss, and again shows the typical pattern of people losing weight in the first six months and then slowly gaining it back. Together, these two studies support the picture painted by Joslin's: low-carb diets may work somewhat better for weight loss, but they don't appear to solve the problem of long-term weight loss being really hard.
One other relevant detail: the second study found that "A significantly greater percentage of participants who consumed the low-carbohydrate than the low-fat diet reported bad breath, hair loss, constipation, and dry mouth." As Taubes' fellow science writer John Horgan has noted, this reveals an apparent inconsistency in how Taubes judges different diets. He goes to great lengths to play up the unpleasantness of calorie-restricted diets, but tells his readers that if they just stick to their low-carb diet the unpleasant side-effects will go away eventually.
So given all this, what should you do if you want to lose weight? I think it depends a lot on who you are. I have ethical qualms about consuming animal products, including and in fact especially eggs, which is one strike against low-carb diets for me. Also, while there's some evidence low-carb diets may be better for hunger, my personal experience is that what foods I find filling is kind of random (lentils, black beans, and baguettes all rate highly on the filling-ness measure for me). So maybe just experiment and try to figure out which foods let you personally eat in moderation and not feel hungry. Keep Eliezer's advice in Beware of Other-Optimizing in mind, and if one thing doesn't work for you, try something else.
A final point: the truth about weight loss sucks. If your case isn't bad enough to justify something drastic like gastric bypass surgery, your main option is diets, which sometimes work but usually don't, regardless of the approach. Unfortunately, this is not an exciting message to put in a popular book on nutrition. This creates an excellent opportunity for someone like Taubes: imply that if the experts admit they don't have a great solution to the problem, then clearly they don't know what they're talking about, and therefore your solution is sure to work!
Long-time readers of LessWrong, however, will realize that the universe is allowed to throw us problems with no good solution. That's something that may be especially worth keeping in mind when evaluating claims in the vicinity of medicine and nutrition. In a way, Taubes' readers are lucky: following his advice won't kill you, and won't lead to you missing out on any wildly more effective solution. It might have some unpleasant side-effects you could've avoided with another approach, but also might have some advantages. However, I've read enough of the literature on medical quackery to know Taubes' rhetorical tactics can be used for much more dangerous ends.
Just imagine: "It's doctors and pharmaceutical companies that caused your cancer in the first place. That chemotherapy and radiation therapy stuff they're pushing on you is obviously harmful. Don't you know there are all-natural ways you can cure your cancer?" If someone says that to you, then knowing that the universe is unfair, and that sometimes the best solution it gives you to a problem will have serious downsides, well, knowing that just might save your life. Or not. Because the universe isn't fair.
Early on in the process of writing this series, I said that when it was over I'd do a post-mortem to look at how I could have broken it up better. However, Vaniver has given me what seems like good advice on that issue, which I plan to follow in the future. (Unless someone else comes along and persuades me otherwise. You're welcome to try that.)
But there are other issues here, the big meta-issue being that downvotes don't help me distinguish between people thinking the posts were completely off-topic for LessWrong vs. not liking how finely they were broken up vs. me not realizing what a hot-button issue obesity is for some people vs. other things. So suggestions on how I could best solicit anonymous feedback would be especially appreciated.
In this post, I'm going to deal with an issue that's central to Gary Taubes' critique of mainstream nutrition science: what causes obesity?
This is a post I found exceptionally difficult to write. You see, while his 2002 New York Times article portrays mainstream nutrition science as promoting a simplistic mirror-image of the Atkins diet, his books do manage to talk about the mainstream view that if you consume more calories than you burn you'll gain weight... sort of. As I looked closely at the relevant chapters of those books, it became less and less clear what view he's attributing to mainstream experts, or what his alternative is supposed to be.
Because this discussion may get confusing, I want to start by repeating what I said in my first post: the mainstream view is that people gain weight when they consume more calories than they burn, but both calorie intake and calorie expenditure are regulated by complicated mechanisms we don't fully understand yet.
Yet Taubes goes on at great length about how obesity has other causes beyond simple calorie math as if this were somehow a refutation of mainstream nutrition science. So I'm going to provide a series of quotes from relevant sources to show that the experts are perfectly aware of that fact. All of the following sources are ones Taubes cites as examples of how absurd the views of mainstream nutrition experts supposedly are: