[I'm unsure how much this rehashes things 'everyone knows already' - if old hat, feel free to downvote into oblivion. My other motivation for the cross-post is the hope it might catch the interest of someone with a stronger mathematical background who could make this line of argument more robust]
Many outcomes of interest have pretty good predictors. It seems that height correlates with performance in basketball (the average height in the NBA is around 6'7"). Faster serves in tennis improve one's likelihood of winning. IQ scores are known to predict a slew of outcomes, from income, to chance of being imprisoned, to lifespan.
What is interesting is that the strength of these relationships appears to deteriorate as you advance far along the right tail. Although 6'7" is very tall, it lies within a couple of standard deviations of the median US adult male height - there are many thousands of US men taller than the average NBA player who are not in the NBA. Although elite tennis players have very fast serves, the players with the fastest serves ever recorded aren't the very best players of their time. The IQ case is harder to examine due to test ceilings, but again there seems to be some divergence near the top: the very highest earners tend to be very smart, but their intelligence is not in step with their income (their cognitive ability is around +3 to +4 SD above the mean, yet their wealth lies much further out along its own distribution) (1).
The trend seems to be that although we know the predictors are correlated with the outcome, freakishly extreme outcomes do not go together with similarly freakishly extreme predictors. Why?
Too much of a good thing?
One candidate explanation would be that more isn't always better, and the correlations one gets looking at the whole population don't capture a reversal at the right tail. Maybe being taller is good for basketball up to a point, but being really tall leads to greater costs in things like agility. Maybe, although a faster serve is better all else being equal, focusing too heavily on one's serve counterproductively neglects other areas of one's game. Maybe a high IQ is good for earning money, but a stratospherically high IQ carries an increased risk of productivity-reducing mental illness. Or something along those lines.
I would guess that these sorts of 'hidden trade-offs' are common. But the 'divergence of tails' seems pretty ubiquitous (the tallest aren't the heaviest, the smartest parents don't have the smartest children, the fastest runners aren't the best footballers, etc.), and it would be weird if there were always a 'too much of a good thing' story to be told for all of these associations. I think there is a more general explanation.
The simple graphical explanation
[Inspired by this essay from Grady Towers]
Suppose you make a scatter plot of two correlated variables. Here's one I grabbed off Google, comparing the speed of a ball out of a baseball pitcher's hand with its speed crossing the plate:
It is unsurprising to see these are correlated (I'd guess the R-squared is > 0.8). But if one looks at the extreme end of the graph, the very fastest balls out of the hand aren't the very fastest balls crossing the plate, and vice versa. This feature is general. Look at this data (again convenience-sampled by googling 'scatter plot') of quiz time versus test score:
Given a correlation, the envelope of the distribution should form some sort of ellipse, narrower as the correlation grows stronger, and more circular as it gets weaker:
The thing is, as one approaches the far corners of this ellipse, we see 'divergence of the tails': as the ellipse doesn't sharpen to a point, there are bulges where the maximum x and y values lie with sub-maximal y and x values respectively:
So this offers an explanation of why divergence at the tails is ubiquitous. Provided the sample size is largish and the correlation not too tight (the tighter the correlation, the larger the sample size required), one will observe ellipses with the bulging sides of the distribution (2).
Hence the very best basketball players aren't the tallest (and vice versa), the very wealthiest not the smartest, and so on and so forth for any correlated X and Y. If X and Y are "Estimated effect size" and "Actual effect size", or "Performance at T", and "Performance at T+n", then you have a graphical display of winner's curse and regression to the mean.
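The claim is easy to check by simulation rather than by eyeballing scatter plots. The sketch below is my own illustrative code (the correlation, sample size, and trial count are arbitrary choices, not figures from the post): it draws correlated bivariate-normal pairs and asks how often the top-x individual is also the top-y individual.

```python
import random

def tail_divergence(rho=0.8, n=2000, trials=200, seed=0):
    """For bivariate normal (x, y) with correlation rho, estimate:
    (a) how often the individual with the max x also has the max y, and
    (b) the average y z-score of the max-x individual."""
    rng = random.Random(seed)
    same_champion = 0
    champion_y = 0.0
    for _ in range(trials):
        xs, ys = [], []
        for _ in range(n):
            x = rng.gauss(0, 1)
            # standard construction: y = rho*x + sqrt(1 - rho^2) * noise
            y = rho * x + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)
            xs.append(x)
            ys.append(y)
        i_x = xs.index(max(xs))
        same_champion += i_x == ys.index(max(ys))
        champion_y += ys[i_x]
    return same_champion / trials, champion_y / trials

frac_same, avg_champ_y = tail_divergence()
```

Even at a correlation of 0.8, the x-champion is usually not the y-champion, and the x-champion's expected y sits noticeably below the sample's maximum y - the 'bulge' at the corner of the ellipse in numerical form.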
An intuitive explanation of the graphical explanation
It would be nice to have an intuitive handle on why this happens, even if we can be convinced that it happens. Here's my offer towards an explanation:
The fact that a correlation is less than 1 implies that other things matter to an outcome of interest. Although being tall matters for being good at basketball, strength, agility, and hand-eye coordination matter as well (to name but a few). The same applies to other outcomes where multiple factors play a role: being smart helps in getting rich, but so does being hard-working, being lucky, and so on.
For a toy model, pretend that height, strength, agility and hand-eye coordination are independent of one another, gaussian, and additive towards the outcome of basketball ability with equal weight.(3) So, ceteris paribus, being taller will make one better at basketball, and the toy model stipulates there aren't 'hidden trade-offs': there's no negative correlation between height and the other attributes, even at the extremes. Yet the graphical explanation suggests we should still see divergence of the tails: the very tallest shouldn't be the very best.
The intuitive explanation would go like this: Start at the extreme tail - +4SD above the mean for height. Although their 'basketball score' gets a massive boost from their height, we'd expect them to be average with respect to the other basketball-relevant abilities (we've stipulated they're independent). Further, as this ultra-tall population is small, it won't show much spread in those other attributes: with 10 people at +4SD, you wouldn't expect any of them to be +2SD in another factor like agility.
Move down the tail to slightly less extreme values - +3SD, say. These people don't get such a boost to their basketball score from their height, but there should be a lot more of them (if 10 at +4SD, around 400 at +3SD). This means there is a lot more expected variance in the other basketball-relevant attributes - it is much less surprising to find someone +3SD in height and also +2SD in agility, and in a world where these things were equally important, they would 'beat' someone +4SD in height but average in the other attributes. Although a +4SD-height person will likely be better than a given +3SD-height person, the best of the +4SDs will not be as good as the best of the much larger number of +3SDs.
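The relative sizes of these bands follow directly from the normal tail function. A minimal sketch, using only the standard library (the tail probability P(Z > z) equals 0.5 * erfc(z / sqrt(2))):

```python
import math

def tail_frac(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# For each person above +4SD, how many people sit between +3SD and +4SD?
ratio = (tail_frac(3) - tail_frac(4)) / tail_frac(4)
# ratio is roughly 40, so 10 people at +4SD implies ~400 in the +3SD band
```

So the +3SD band outnumbers everyone above +4SD by a factor of about forty, which is what gives it so many more draws at the other attributes.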
The trade-off will vary depending on the exact weighting of the factors (on which ones explain more of the variance), but the point seems to hold in the general case: when looking at a factor known to be predictive of an outcome, the largest outcome values will occur with sub-maximal factor values, as the larger population increases the chances of 'getting lucky' with the other factors.
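The toy model itself is straightforward to simulate. The sketch below is my own illustrative code (sample sizes and trial counts are arbitrary): each player gets four independent standard-normal attributes with equal weight, and we check whether the tallest player is ever the best player.

```python
import random

def toy_basketball(n=2000, trials=200, seed=1):
    """Toy model: ability = height + strength + agility + coordination,
    four i.i.d. standard normals, equal weights, no hidden trade-offs.
    Returns the fraction of trials where the tallest player is also the
    best player, and the best player's average height z-score."""
    rng = random.Random(seed)
    tallest_is_best = 0
    best_height = 0.0
    for _ in range(trials):
        heights, abilities = [], []
        for _ in range(n):
            h = rng.gauss(0, 1)
            heights.append(h)
            abilities.append(h + rng.gauss(0, 1) + rng.gauss(0, 1)
                             + rng.gauss(0, 1))
        i_best = abilities.index(max(abilities))
        tallest_is_best += heights.index(max(heights)) == i_best
        best_height += heights[i_best]
    return tallest_is_best / trials, best_height / trials

frac_tallest, avg_best_height = toy_basketball()
```

With equal weights, the height-ability correlation is 0.5, so the best player is typically tall (around +1.5SD in this setup) but almost never the very tallest - exactly the divergence described above, with no trade-offs required.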
So that's why the tails diverge.
Endnote: EA relevance
I think this is interesting in and of itself, but it has relevance to Effective Altruism, which generally focuses on the right tail of various things (What are the most effective charities? What is the best career? etc.). It generally vindicates worries about regression to the mean or winner's curse, and suggests that these will be pretty insoluble in all cases where the populations are large: even if you have really good means of assessing the best charities or the best careers, so that your assessments correlate really strongly with which ones actually are the best, the very best ones you identify are unlikely to be actually the very best, as the tails will diverge.
This probably has limited practical relevance. Although you might expect that one of the 'not estimated as the very best' charities is in fact better than your estimated-to-be-best charity, you don't know which one, and your best bet remains your estimate (in the same way - at least in the toy model above - you should bet a 6'11" person is better at basketball than someone who is 6'4".)
There may be spread betting or portfolio scenarios where this factor comes into play - perhaps instead of funding AMF to diminishing returns when its marginal effectiveness dips below charity #2, we should be willing to spread funds sooner.(4) Mainly, though, it should lead us to be less self-confident.
1. One might look at the generally modest achievements of people in high-IQ societies as further evidence, but there are worries about adverse selection.
2. One needs a large enough sample to 'fill in' the elliptical population density envelope, and the tighter the correlation, the larger the sample needed to fill in the sub-maximal bulges. The Old Faithful case is an example where you actually do get a 'point', although it is likely an outlier.
3. If you want to apply it to cases where the factors are positively correlated - which they often are - just use the components of the other factors that are independent of the factor of interest. I think, but I can't demonstrate, the other stipulations could also be relaxed.
4. I'd intuit, but again I can't demonstrate, that the case for this becomes stronger with highly skewed interventions where almost all the impact is focused in relatively low-probability channels, like averting a very specific existential risk.
One of many problems with the contemporary university system is that the same institutions that educate students also give them their degrees and grades. This obviously creates massive incentives for grade inflation and lowering of standards. Giving a thorough education requires hard work not only from students but also from the professors. In the absence of an independent body that tests that the students actually have learnt what they are supposed to have learnt, many professors spend as little time as possible at teaching, giving the students light workloads (something most of them of course happily accept). The faculty/student non-aggression pact is an apt term for this.
To see how absurd this system is, imagine that we had the same system for drivers' licenses: that the driving schools that train prospective drivers also tested them and issued their licenses. In such a system, people would most probably choose the most lenient schools, leading to a lowering of standards. For fear of such a lowering of standards, prospective drivers are in many countries (I would guess universally, but do not know that for sure) tested by government bodies.
Presumably, the main reason for this is that governments really care about the lowering of drivers' standards: ensuring that all drivers are appropriately educated is seen as very important. By contrast, governments don't care that much about the lowering of academic standards. If they did, they would long ago have replaced the present grading/certification system with one where students are tested by independent bodies, rather than by the universities themselves.
This is all the more absurd given how much politicians in most countries talk about the importance of education. More often than not they talk about education, especially higher education, as a panacea for all ills. However, if we look at the politicians' actions, rather than their words, it doesn't seem like they actually think it's quite as important as they say to ensure that the population is well-educated.
Changing the system for certifying students is important not least in order to facilitate innovations in higher education. The present system discriminates in favour of traditional campus courses, which are both expensive and fail to teach students as much as they should. I'm not saying that online courses, and other non-standard courses, are necessarily better or more cost-effective, but they should get the chance to prove that they are.
The system is of course hard to change, since there are lots of vested interests that don't want it to change. This is nicely illustrated by the reactions to a small baby-step towards the system I'm envisioning that the OECD is presently trying to take. The Financial Times (which has a paywall, unfortunately) reports that the OECD is attempting to introduce Pisa-style tests to compare students from higher education institutions around the world. Third-year students would be tested on critical thinking, analytical reasoning, problem solving and written communication. There would also be discipline-specific trials for economics and engineering.
These attempts have, however, not progressed, because of resistance from some universities and member countries. The OECD says that the resistance often comes from "the most prestigious institutions, because they have very little to win...and a lot to lose". In contrast, "the greatest supporters are the ones that add the greatest value...many of the second-tier institutes are actually a lot better and they're very keen to get on a level playing field."
I figure that if the OECD gets enough universities on board, they could start implementing the system without the obstructing top universities. They could also allow students from those universities to take the tests independently. If employers started taking these tests seriously, students would have every reason to take them even if their universities haven't joined. Slowly, these presumably more objective tests, or others like them, would become more important at the expense of the universities' inflated grades. People often try to change institutions or systems directly, but sometimes it is more efficient to build alternative systems, show that they're useful to the relevant actors, and start out-competing the dominant system (as discussed in these comments).
If you want people to ask you stuff, reply to this post with a comment to that effect.
More accurately, ask any participating LessWronger anything that is in the category of questions they indicate they would answer.
If you want to talk about this post you can reply to my comment below that says "Discussion of this post goes here.", or not.
I previously mentioned this debate a month ago and predicted that Sean Carroll is unlikely to do very well. The debate happened last Friday and Sean posted his post-debate reflections on his popular blog (the full video will be posted soon). Some excerpts:
I think it went well, although I can easily think of several ways I could have done better. On the substance, my major points were that the demand for “causes” and “explanations” is completely inappropriate for modern fundamental physics/cosmology, and that theism is not taken seriously in professional cosmological circles because it is hopelessly ill-defined (no matter what happens in the universe, you can argue that God would have wanted it that way). He defended two of his favorite arguments, the “cosmological argument” and the fine-tuning argument; no real surprises there. In terms of style, from my perspective things got a bit frustrating, because the following pattern repeated multiple times: Craig would make an argument, I would reply, and Craig would just repeat the original argument.
The cosmological argument has two premises: (1) If the universe had a beginning, it has a transcendent cause; and (2) The universe had a beginning. [...] My attitude toward the above two premises is that (2) is completely uncertain, while the “obvious” one (1) is flat-out false. Or not even false, as I put it, because the notion of a “cause” isn’t part of an appropriate vocabulary to use for discussing fundamental physics. [Emphasis mine]
The Aristotelian analysis of causes is outdated when it comes to modern fundamental physics; what matters is whether you can find a formal mathematical model that accounts for the data.
Sean goes over a couple of mistakes he thinks he made in the debate, basically being blindsided by WLC bringing up obscure papers and misinterpreting them to suit his argument.
Sean's reflections are very detailed and worth reading, though I found them hard to summarize. It looks like WLC did his homework better than SC, but it's hard to tell whether it mattered until the video is made public and various interested parties give their feedback. Another couple of quotes, with my emphasis:
For my closing statement, I couldn’t think of many responses to Craig’s closing statement that wouldn’t have simply been me reiterating points from my first two speeches. So I took the opportunity to pull back a little and look at the bigger picture. Namely: we’re talking about “God and Cosmology,” but nobody really becomes a believer in God because it provides the best cosmology. They become theists for other reasons, and the cosmology comes later. That’s because religion is enormously more than theism. Most people become religious for other (non-epistemic) reasons: it provides meaning and purpose, or a sense of community, or a way to be in contact with something transcendent, or simply because it’s an important part of their culture. The problem is that theism, while not identical to religion, forms its basis, at least in most Western religions. So — maybe, I suggested, tentatively — that could change. I give theists a hard time for not accepting the implications of modern science, but I am also happy to give naturalists a hard time when they don’t appreciate the enormous task we face in answering all of the questions that we used to think were answered by God. [...]
To me, Craig’s best moment of the weekend came at the very end, as part of the summary panel discussion. Earlier in the day, Tim Maudlin (who gave a great pro-naturalism talk, explaining that God’s existence wouldn’t have any moral consequences even if it were true) had grumped a little bit about the format. His point was that formal point-counterpoint debates aren’t really the way philosophy is done, which would be closer to a Socratic discussion where issues can be clarified and extended more efficiently. And I agree with that, as far as it goes. But Craig had a robust response, which I also agree with: yes, a debate like this isn’t how philosophy is done, but there are things worth doing other than philosophy, or even teaching philosophy. He said, candidly, that the advantage of the debate format is that it brings out audiences, who find a bit of give-and-take more exciting than a lecture or series of lectures. It’s hard to teach subtle and tricky concepts in such a format, but that’s always a hard thing to do; the point is that if you get the audience there in the first place, a good debater can at least plant a few new ideas in their heads, and hopefully inspire them to take the initiative and learn more on their own.
Sean concurs: "If we think we have good ideas, we should do everything we can to bring them to as many people as possible."
I hope Luke or someone else will find time to watch the video once posted and give their impressions.
This is the third post in a series discussing my recent bout of productivity. Within, I discuss two techniques I use to avoid akrasia and one technique I use to be especially productive.
I like to pretend that I have higher-than-normal willpower, because my ability to Get Things Done seems to be somewhat above average. In fact, this is not the case. I'm not good at fighting akrasia. I merely have a knack for avoiding it.
When I was young, my parents were very good at convincing me to manage my money. They gave me an allowance, perhaps a dollar a week. When we would go to the store, I'd get excited about some trite toy and ask my parents whether I could buy it.
Their answers were similar. My mother would crouch down, put a hand on my shoulder, and say "Of course you can. But before you do, think carefully about how much you will enjoy this after you've bought it, and what other things you would be able to buy if instead you saved up."
My father was a bit more direct. He'd just shrug and say "It's your money", with the barest hint of derision.
I rarely spent my allowance.
I now use a similar technique when dealing with distractions.
(It's worth noting that it's always been very easy to put me into far mode, perhaps in part because I decided at a very young age that I wasn't going to die.)
As Kaj Sotala and a few others noted, assigning guilt to non-productive tasks is not especially healthy. Nor is it, in my experience, sustainable. In a few different cases, I experienced scenarios where I wanted to do something but couldn't will myself to do it. I suffered ego depletion and hit a vicious cycle of unproductivity and depression. I never fell completely into the self-hate death spiral, but I flirted around at the edges. It became clear that I needed a new strategy.
To break the cycle, I decided to stop fighting myself.
On the most recent LessWrong readership survey, I assigned a probability of 0.30 on the cryonics question. I had previously been persuaded to sign up for cryonics by reading the sequences, but this thread and particularly this comment lowered my estimate of the chances of cryonics working considerably. Also relevant from the same thread was ciphergoth's comment:
By and large cryonics critics don't make clear exactly what part of the cryonics argument they mean to target, so it's hard to say exactly whether it covers an area of their expertise, but it's at least plausible to read them as asserting that cryopreserved people are information-theoretically dead, which is not guesswork about future technology and would fall under their area of expertise.
Based on this, I think there's a substantial chance that there's information out there that would convince me that the folks who dismiss cryonics as pseudoscience are essentially correct, that the right answer to the survey question was epsilon. I've seen what seem like convincing objections to cryonics, and it seems possible that an expanded version of those arguments, with full references and replies to pro-cryonics arguments, would convince me. Or someone could just go to the trouble of showing that a large majority of cryobiologists really do think cryopreserved people are information-theoretically dead.
However, it's not clear to me how worthwhile it is to seek out such information. It seems that coming up with decisive information would be hard, especially since e.g. ciphergoth has put a lot of energy into trying to figure out what the experts think about cryonics and come away without a clear answer. And part of the reason I signed up for cryonics in the first place is because it doesn't cost me much: the largest component is the life insurance for funding, only $50 / month.
So I've decided to put a bounty on being persuaded to cancel my cryonics subscription. If no one succeeds in convincing me, it costs me nothing, and if someone does succeed in convincing me the cost is less than the cost of being signed up for cryonics for a year. And yes, I'm aware that providing one-sided financial incentives like this requires me to take the fact that I've done this into account when evaluating anti-cryonics arguments, and apply extra scrutiny to them.
Note that while there are several issues that ultimately go into whether you should sign up for cryonics (the neuroscience / evaluation of current technology, the estimated probability of a "good" future, various philosophical issues), I anticipate the greatest chance of being persuaded by scientific arguments. In particular, I find questions about the personal identity and consciousness of uploads made from preserved brains confusing, but think there are very few people in the world, if any, who are likely to have much chance of getting me un-confused about those issues. The offer is blind to the exact nature of the arguments given, but I mostly foresee being persuaded by the neuroscience arguments.
And of course, I'm happy to listen to people tell me why the anti-cryonics arguments are wrong and I should stay signed up for cryonics. There's just no prize for doing so.
If you've been wondering what these posts are doing on LessWrong and you haven't read this comment yet, I urge you to do so. Thanks to commenter FiftyTwo for suggesting I say something like this.
To recap: taking in more calories than you burn will cause you to gain weight, though calorie intake and expenditure are in turn controlled by a number of mechanisms. This suggests a couple of options for losing weight. You can try to intervene in the mechanisms controlling food intake, one of the most well-known examples being gastric bypass surgery, admittedly a bit of a drastic option. But intervening directly at the point of calorie intake - dieting - is also an option.
Now it turns out that it's relatively easy to lose weight by dieting. The catch is that it's much harder to keep the weight off. A commonly cited rule (for example here) is that most people who lose weight through dieting will regain it all in five years. However, it's important to emphasize that some people do lose weight through dieting and keep it off long-term. An organization called the National Weight Control Registry has made an effort to track those people, and has published quite a few studies based on its work (many of which can easily be found through Google Scholar).
Unfortunately, the NWCR is working with a self-selected sample and asking them what they did after the fact. They're not randomly assigning people to treatments. So for example, a high percentage of the NWCR group reports successful long-term weight loss following low-fat and/or calorie-restricted diets and exercising a lot. And the percentage following low-carb diets was originally small, but it's risen over time. But both of these observations may just reflect the relative popularity of those approaches in the general population.
We may not be able to conclude anything more from the NWCR data than that a significant minority of dieters do succeed at long-term weight loss, some through calorie-restricted diets, some through low-fat diets, and some through low-carb diets. Remember, though, that as discussed in previous posts there's little reason to think low-fat or low-carb diets could cause weight loss except by indirectly affecting energy balance.
And now, one last time, I'm going to talk about what Taubes has to say about this issue. I'm going to quote from Why We Get Fat (pp. 36-38), though Good Calories, Bad Calories contains similar comments, including about the Handbook of Obesity and Joslin's. Taubes begins by citing a review article covering calorie-restricted diets that found that "Typically, nine or ten pounds are lost in the first six months. After a year, much of what was lost has been regained." He also cites a large study that tested a calorie-restricted diet and reached a similar conclusion: participants "lost on average, only nine pounds. And once again... most of the nine pounds came off in the first six months, and most of the participants were gaining weight back after a year."
Based on this, he concludes that "Eating less—that is, undereating—simply doesn't work for more than a few months, if that." Then it's time to really lay in to mainstream nutrition science:
This reality, however, hasn't stopped the authorities from recommending the approach, which makes reading such recommendations an exercise in what psychologists call "cognitive dissonance," the tension that results from trying to hold two incompatible beliefs simultaneously.
Take, for instance, the Handbook of Obesity, a 1998 textbook edited by three of the most prominent authorities in the field—George Bray, Claude Bouchard, and W. P. T. James. "Dietary therapy remains the cornerstone of treatment and the reduction of energy intake continues to be the basis of successful weight reduction programs," the book says. But then it states, a few paragraphs later, that the results of such energy-reduced diets "are known to be poor and not long-lasting." So why is such an ineffective therapy the cornerstone of treatment? The Handbook of Obesity neglects to say.
The latest edition (2005) of Joslin's Diabetes Mellitus, a highly respected textbook for physicians and researchers, is a more recent example of this cognitive dissonance. The chapter on obesity was written by Jeffrey Flier, an obesity researcher who is now dean of Harvard Medical School, and his wife and research colleague, Terry Maratos-Flier. The Fliers also describe "reduction of caloric intake" as "the cornerstone of any therapy for obesity." But then they enumerate all the ways in which this cornerstone fails. After examining approaches from the most subtle reductions in calories (eating, say, one hundred calories less each day with the hope of losing a pound every five weeks) to low-calorie diets of eight hundred to one thousand calories a day to very low-calorie diets (two hundred to six hundred calories) and even total starvation, they conclude that "none of these approaches has any proven merit."
But look at the actual sources and it turns out that, surprise surprise, mainstream experts aren't idiots after all. The second quote from the Handbook of Obesity comes from a paragraph explaining that given how hard obesity is to treat, doctors face a "Shakespearean" dilemma of whether to attempt to treat it at all. The Joslin's article is even clearer (p. 541, emphasis added):
Successful treatment of obesity, defined as treatment that results in sustained attainment of normal body weight and composition without producing unacceptable treatment induced morbidity, is rarely achievable in clinical practice. Many therapeutic approaches can bring about short-term weight loss, but long-term success is infrequent regardless of the approach.
Suppose for a moment that this is true, that long-term weight loss is rare regardless of the approach. If it is, no "cognitive dissonance" is required to recommend treatments that sometimes work. Furthermore, Taubes commits a serious misrepresentation here. Taubes's final quote from the Joslin's article, in context, says that "There are also many programs that recommend specific food combinations or unusual sequences for eating, but none of these approaches has any proven merit." It's pretty obvious in context that the bit Taubes quotes refers only to the programs that recommend specific food combinations or unusual sequences for eating.
It's also worth mentioning that neither of these sources ignores the debate over low-carb diets. The Handbook of Obesity criticizes Atkins-style low-carb diets at some length, but also says that "Moderate restriction of carbohydrates may have real calorie-reducing properties." And the Joslin's article ends up being fairly positive towards low-carb diets in general (p. 542):
Dietary composition may play a role in long-term success in weight loss and weight maintenance. For example, a study comparing a moderate-fat diet consisting of 35% energy from fat and a low-fat diet in which 20% of energy was derived from fat demonstrated enhanced weight loss assessed by total weight loss, BMI change, and decrease in waist circumference in the group on the moderate-fat diet. Retention in the diet study was greater among those actively participating in the weight loss program in this group compared with 20% in the low-fat diet group.
Recently, increased interest has focused on the possibility that diet content may affect appetite. For example, diets with a low glycemic index may be useful in preventing the development of obesity; subjects given test meals with different glycemic indexes and then allowed free access to food ate less after eating meals with a low glycemic index. Some data suggest that diets with a high glycemic index predispose to increased postprandial hunger, whereas diets focused on glycemic index and information regarding portion control lead to higher rates of success in weight loss, at least among adolescent populations. Low-carbohydrate diets such as the Atkins diet appear to be associated with significant weight loss. However, this diet has not been systematically studied, nor has long-term maintenance of weight loss.
I assume the author of the Joslin's article would say, however, that low-carb diets haven't been shown to completely solve the problem of long-term weight loss being really hard. But would they be right about that?
To the best of my knowledge, there have been only two randomized, controlled trials of low-carb diets that have covered a period of two years (and none covering a longer period than that). Taubes has cited both in support of his claims. The first, an Israeli study published in 2008, also included a group assigned to a Mediterranean diet. Here are the results in terms of weight loss:
So on the one hand, subjects on the low-carb diet did initially lose more weight, about 6.5 kg (14 lbs.) compared to about 4.5 kg (10 lbs.) for the low-calorie diet. On the other hand, both groups started regaining the weight after six months. If, as Taubes claims, data like this shows that low-calorie diets "simply doesn't work for more than a few months," does this data justify saying the same thing about low-carb diets?
Furthermore, if you believe the rule about weight lost to dieting coming back in five years, it seems likely that would happen to both groups. Intriguingly, though, while participants on the Mediterranean diet didn't initially lose as much weight as those on the low-carb diet, the weight regain didn't seem to happen as much on the Mediterranean diet. That makes me wonder what a five-year study of the Mediterranean diet would find.
Note that the Israeli study also found that participants in all three groups significantly reduced their caloric intake, supporting the hypothesis that even diets that don't explicitly restrict calorie intake work by reducing calorie intake indirectly.
What about the other study, published in 2010, which Taubes has hailed as "the biggest study so far on low-carb diets"? Here are its results (note that the low-fat diet was also a calorie-restricted diet):
That's right, this study found no statistically significant difference between low-fat and low-carb diets in terms of weight loss, and again showed the typical pattern of people losing weight in the first six months and then slowly gaining it back. Together, these two studies support the picture painted by Joslin's: low-carb diets may work somewhat better for weight loss, but they don't appear to solve the problem of long-term weight loss being really hard.
One other relevant detail: the second study found that "A significantly greater percentage of participants who consumed the low-carbohydrate than the low-fat diet reported bad breath, hair loss, constipation, and dry mouth." As Taubes' fellow science writer John Horgan has noted, this reveals an apparent inconsistency in how Taubes judges different diets. He goes to great lengths to play up the unpleasantness of calorie-restricted diets, but tells his readers that if they just stick to their low-carb diet the unpleasant side-effects will go away eventually.
So given all this, what should you do if you want to lose weight? I think it depends a lot on who you are. I have ethical qualms about consuming animal products, including and in fact especially eggs, which is one strike against low-carb diets for me. Also, while there's some evidence low-carb diets may be better for hunger, my personal experience is that what foods I find filling is kind of random (lentils, black beans, and baguettes all rate highly on the filling-ness measure for me). So maybe just experiment and try to figure out which foods let you personally eat in moderation and not feel hungry. Keep Eliezer's advice in Beware of Other-Optimizing in mind, and if one thing doesn't work for you, try something else.
A final point: the truth about weight loss sucks. If your case isn't bad enough to justify something drastic like gastric bypass surgery, your main option is diets, which sometimes work but usually don't, regardless of the approach. Unfortunately, this is not an exciting message to put in a popular book on nutrition. This creates an excellent opportunity for someone like Taubes: imply that if the experts admit they don't have a great solution to the problem, then clearly they don't know what they're talking about, and therefore your solution is sure to work!
Long-time readers of LessWrong, however, will realize that the universe is allowed to throw us problems with no good solution. That's something that may be especially worth keeping in mind when evaluating claims in the vicinity of medicine and nutrition. In a way, Taubes' readers are lucky: following his advice won't kill you, and won't lead to you missing out on any wildly more effective solution. It might have some unpleasant side-effects you could've avoided with another approach, but also might have some advantages. However, I've read enough of the literature on medical quackery to know Taubes' rhetorical tactics can be used for much more dangerous ends.
Just imagine: "It's doctors and pharmaceutical companies that caused your cancer in the first place. That chemotherapy and radiation therapy stuff they're pushing on you is obviously harmful. Don't you know there are all-natural ways you can cure your cancer?" If someone says that to you, then knowing that the universe is unfair, and that sometimes the best solution it gives you to a problem will have serious downsides, well, knowing that just might save your life. Or not. Because the universe isn't fair.
Early on in the process of writing this series, I said when it was over with I'd do a post-mortem to look at how I could have broken it up better. However, Vaniver has given me what seems like good advice on that issue, which I plan to follow in the future. (Unless someone else comes along and persuades me otherwise. You're welcome to try that).
But there are other issues here, the big meta-issue being that downvotes don't help me distinguish between people thinking the posts were completely off-topic for LessWrong vs. not liking how finely they were broken up vs. me not realizing what a hot-button issue obesity is for some people vs. other things. So suggestions on how I could best solicit anonymous feedback would be especially appreciated.
The trick of saying "yes" instead of "no" is *not* simply to say "no" less often at the cost of allowing more things when you say "yes". That just trades the stress of saying "no" (standing firm despite a clash of wills) against the effort to fulfill, monitor, pay for, or clean up after the "yes".
Soft paternalism applied to parenting means saying "Yes, but" or "Yes, later" or "Yes, if". This signals to the child that you understand his/her wish but also supplies some context the child may not be aware of. It reduces your cost of saying "yes" at the expense of making the "yes" costlier for the child to cash in.
- There is a substantial flaw or missing element to my model that someone will point out.
- Many readers, who are bad at small talk because they don't see the point, will get better at it as a result of acquiring understanding.
I think most of us are familiar with the common semantic stopsigns like "God", "just because", and "it's a tradition." However, I've recently been noticing more interesting ones that I haven't really seen discussed on LW. (Or it's also likely that I missed those discussions.)
The first one is "humans are stupid." I notice this one very often, in particular in LW and other rationalist communities. The obvious problem here is that humans are not that stupid. Often what might seem like sheer stupidity was caused by a rather reasonable chain of actions and events. And even if a person or a group of people is being stupid, it's very interesting to chase down the cause. That's how you end up discovering biases from scratch or finding a great opportunity.
The second semantic stopsign is "should." Hat tip to Michael Vassar for bringing this one up. If you and I have a discussion about how I eat too much chocolate, and I say, "You are right, I should eat less chocolate," the conversation will basically end there. But 99 times out of 100 nothing will actually come of it. I try to taboo the word "should" from my vocabulary, so instead I will say something like, "You are right, I will not purchase any chocolate this month." That is a concrete, actionable statement.
What other semantic stopsigns have you noticed in yourself and others?