AnnaSalamon comments on Extreme Rationality: It's Not That Great - Less Wrong
I agree with almost everything here, with the following caveats:
I. The practical benefits we get from (3) are (I think I'm agreeing with you here) likely to be so small as to be difficult to measure informally; i.e., anyone who claims to have noticed a specific improvement is as likely to be imagining it as to be really improving. There are probably some effects that could be measured in a formal experiment with a very large sample size, but that is not what we have been doing.
II. (2) shows promise but is not something I see discussed very often on Overcoming Bias or Less Wrong. Using the Boyle metaphor, this would be the technology of rationality, as opposed to the science of it. I've seen a few suggestions for "techniques", but they seem sort of ad hoc (I will admit, in retrospect, that many of the times I proposed 'techniques', it was more an attempt to sound like I was thinking pragmatically than a proposal soundly based on good experimental evidence). I've tried to apply specific methods to specific decisions, but never gone so far as to set aside a half hour each day for "rationality practice", nor would I really know what to do with that half hour if I did. I'd like to know more about what you do and what you think has helped.
III. You list a greater appreciation of transhumanism as one of the benefits of x-rationality, but the causal linkage doesn't impress me. Many of the transhumanists here were transhumanists before they were rationalists, and only came to Overcoming Bias out of interest in reading what transhumanist leaders Eliezer and Robin had to say. I think my "conversion" to transhumanism came about mostly because I started meeting so many extremely intelligent transhumanists that it no longer seemed like a fringe crazy-person belief and my mind felt free to judge it with the algorithms it uses for normal scientific theories rather than the algorithms it uses for random Internet crackpottery. Many other OB readers came to transhumanism just because EY and RH explicitly argued for it and did a good job. Still others probably felt pressure to "convert" as an in-group identification thing. And finally, I think transhumanists and x-rationalists are part of that big atheist/libertarian/sci-fi/et cetera personspace cluster Eliezer's been talking about: we all had a natural vulnerability to that meme before ever arriving here. AFAIK Kahneman and Tversky are not transhumanists, Aumann certainly isn't, and I would be surprised if x-rationalists not associated with EY and RH and our group come to transhumanism in numbers greater than their personspace cluster membership predicts.
IV. Given fifty years to improve the Art, I also wouldn't be surprised by anything from "massive practical help" to "not much help at all". I don't know exactly what you mean by the "ridiculously stupid decision-making that most people do", but are you sure it's something that should be solved with x-rationality as opposed to normal rationality?
I'm sure it's something that could be helped with techniques like The Bottom Line, which most intelligent, science-literate, trying-to-be-"rational" people don't use nearly enough. It could also be helped by paying attention to which thinking techniques lead to what kinds of results, and learning the better ones. Dojos could totally teach these practices, and help their students actually incorporate them into their day-to-day, reflexive decision-making (at least more than most "intelligent, science-literate" people do now; most people hardly try at all). As to heuristics and biases, and probability theory... I do find those helpful: essential for thinking usefully about existential risk, and helpful but non-essential for day-to-day inference, according to my mental observations (I've been keeping a written record lately, but not for long enough, and not systematically enough). The probability theory in particular may be hard to teach to people who don't easily think about math, though not impossible. But I don't think building an art of rationality needs to be solely about the heuristics and biases literature. Certainly much of the rationality improvement I've gotten from OB/LW isn't that.