aaronsw comments on "Epiphany addiction" - Less Wrong
Effective at what? I agree with Yvain that:
Hard work, intelligence, social skill, attractiveness, risk-taking, need for sleep, height, and enormous amounts of noise go into life success as measured by something like income or occupational status. So unless there were a ludicrously large effect size of hanging around Less Wrong, differences in life success between readers and nonreaders would be overwhelmingly driven by selection effects. Now, in fact those selection effects put the LW population well above average (lots of college students, academics, software engineers, etc) but don't speak much to positive effects of their reading habits.
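A toy simulation can make this concrete (the specific numbers here — a 1 SD selection gap, a 0.05 SD true benefit, noise with SD 3 — are illustrative assumptions, not estimates): when noise and selection are large relative to the treatment effect, the observed reader/nonreader gap reflects almost entirely who selected in, not what reading did.

```python
import random

random.seed(0)
N = 100_000  # large N so sampling error is negligible

def life_success(traits):
    # income-like outcome: underlying traits plus a lot of noise
    return traits + random.gauss(0, 3)

# Assumed: readers are selected for +1 SD on relevant traits,
# and reading itself confers only a small +0.05 true benefit.
readers    = [life_success(random.gauss(1.0, 1)) + 0.05 for _ in range(N)]
nonreaders = [life_success(random.gauss(0.0, 1)) for _ in range(N)]

gap = sum(readers) / N - sum(nonreaders) / N
print(round(gap, 2))  # ≈ 1.05: dominated by the 1.0 selection gap, not the 0.05 benefit
```

Comparing readers to nonreaders recovers the full ~1.05 gap, but only 0.05 of it is attributable to reading — exactly why the raw comparison doesn't speak to the effect of reading habits.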
To get a good picture of that you would need a randomized experiment, or at least a 'natural experiment.' CFAR is going to track some outcomes for the attendees of its minicamps, after using randomized admission among applicants above a certain cutoff. Due to the limited sample size, I think this only has enough power to detect insanely massive intervention effects, i.e. a boost of a large fraction of a standard deviation from a few days at a workshop. So I think it won't show positive effects there. It does seem plausible to me, however, that there will be positive effects on narrow measures closer to the intervention, e.g. performance on some measures of cognitive bias from the psychology literature.
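The power arithmetic behind that claim can be sketched with the standard two-sample approximation (the sample sizes below are hypothetical; alpha = 0.05 and 80% power are conventional choices, not anything CFAR has announced):

```python
import math

def min_detectable_effect(n_per_group, z_alpha=1.96, z_power=0.84):
    """Smallest standardized effect (in SDs) a two-arm study can
    detect with ~80% power at alpha = 0.05, using the normal
    approximation: MDE = (z_alpha + z_power) * sqrt(2 / n)."""
    return (z_alpha + z_power) * math.sqrt(2.0 / n_per_group)

for n in (20, 50, 200):
    print(n, round(min_detectable_effect(n), 2))
# 20 per arm  -> ~0.89 SD
# 50 per arm  -> ~0.56 SD
# 200 per arm -> ~0.28 SD
```

With only a few dozen attendees per arm, anything under roughly half a standard deviation — already a huge effect for a few days of instruction — would be statistically invisible.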
In the same way, a scheduling system like Getting Things Done will probably not have visibly significant effects on career outcomes within a year on a small sample size, but would be more likely to do so on a measure like "projects delivered on time" or "average time-to-response for emails."
For someone interested in personal success, a more relevant standard would be whether n hours spent studying or practicing 'rationality exercises' would increase income or other success measures more than taking an extra programming class at Udacity, or working out at the gym, or reading up about financial planning and investment. Here, I'm less certain about the outcome, although my intuition is that rationality exercises would come out behind. The educational literature shows that transfer of learning is generally poor, so it is better to do focused work on the areas of interest, which may include domain-specific heuristics of rational behavior.
And that is for exercises selected to be relatively useful in everyday life. Looking at Eliezer Yudkowsky's sequences, much of the content is very far from that: meta-ethics, philosophy of mind, avoiding merely verbal disputes, an account of welfare for future utopias or dystopias, quantum mechanics (where the connection to cryonics at the end is dubious, and at best a small expected benefit that can't be pinned down today), determinism, and so forth. I wouldn't expect big improvements in everyday life from those any more than I would from reading pop-science articles or philosophy textbooks.
If there are big effects from exercises on epistemic rationality, I would expect to see them in areas that normally aren't the subject of much effort, or are the subject of active self-deception, like self-assessments of driving skill, or avoiding asymmetric ("myside") judgments of media bias, or noticing flaws in one's theology. That may help improve aggregate outcomes in areas like politics or charity where people more often indulge in epistemic irrationality for pleasure, laziness, or signalling, but won't be earthshaking on an individual level. But even here, most new lesson plans don't work well, students don't retain that much, and the interventions in the academic literature show mostly modest effect sizes. So I would expect these gains to be small-to-moderate.
Yvain's argument was that "x-rationality" (roughly the sort of thing that's taught in the Sequences) isn't practically helpful, not that nothing is. I certainly have read lots of things that have significantly helped me make better decisions and have a better map of the territory. None of them were x-rational. Claiming that x-rationality can't have big effects because the world is too noisy just seems like another excuse for avoiding reality.
What effect size, assessed how, against what counterfactuals? If it's just "I read book X, and thought about it when I made decision Y, and I estimate that decision Y was right," we're in testimonial land, and there are piles of those for both epistemic and practical benefits (although far more on epistemic than practical). Unfortunately, those aren't very reliable. I was specifically talking about non-testimonials, e.g. aggregate effects vs. control groups or reference populations, to focus on easily transmissible data.
Imagine that we try to take the best general epistemic heuristics we can find today, and send them back in book form to someone from 10 years ago. What effect size do you think they would have on income or academic productivity? What about 20 years? 50 years? Conditional on someone assembling, with some additions, a good set of heuristics, what's your distribution of effect sizes?