Related to: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality
We’ve had a lot of good criticism of Less Wrong lately (including Patri’s post above, which contains a number of useful points). But to prevent those posts from confusing newcomers, this may be a good time to review what Less Wrong is useful for.
In particular: I had a conversation last Sunday with a fellow, I’ll call him Jim, who was trying to choose a career that would let him “help shape the singularity (or simply the future of humanity) in a positive way”. He was trying to sort out what was efficient, and he aimed to be careful to have goals and not roles.
So far, excellent news, right? A thoughtful, capable person is trying to sort out how, exactly, to have the best impact on humanity’s future. Whatever your views on the existential risks landscape, it’s clear humanity could use more people like that.
The part that concerned me was that Jim had put a site-blocker on LW (as well as all of his blogs) after reading Patri’s post, which, he said, had “hit him like a load of bricks”. Jim wanted to get his act together and really help the world, not diddle around reading shiny-fun blog comments. But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be. And they were the sort of errors LW could have helped with. And there was no obvious force in his off-line, focused, productive life of a sort that could similarly help.
So, in case it’s useful to others, a review of what LW is useful for.
When you do (and don’t) need epistemic rationality
For some tasks, the world provides rich, inexpensive empirical feedback. In these tasks you hardly need reasoning. Just try the task many ways, steal from the best role-models you can find, and take care to notice what is and isn’t giving you results.
Thus, if you want to learn to sculpt, reading Less Wrong is a bad way to go about it. Better to find some clay and a hands-on sculpting course. The situation is similar for small talk, cooking, selling, programming, and many other useful skills.
Unfortunately, most of us also have goals for which we can obtain no such ready success/failure data. For example, if you want to know whether cryonics is a good buy, you can’t just try buying it and not-buying it and see which works better. If you miss your first bet, you’re out for good.
There is similarly no easy way to use the “try it and see” method to sort out which ethics and meta-ethics to endorse, what long-term human outcomes are likely, how you can have a positive impact on the distant poor, or which retirement investments *really will* be safe bets for the next forty years. For these goals we are forced to use reasoning, as failure-prone as human reasoning is. If the issue is tricky enough, we’re forced to additionally develop our skill at reasoning -- to develop “epistemic rationality”.
The traditional alternative is to deem subjects on which one cannot gather empirical data "unscientific" subjects on which respectable people should not speak, or else to focus one's discussion on the most similar-seeming subject for which it *is* easy to gather empirical data (and so to, for example, rate charities as "good" when they have a low percentage of overhead, instead of a high impact). Insofar as we are stuck caring about such goals and betting our actions on various routes for their achievement, this is not much help.[2]
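The overhead-vs-impact confusion above can be made concrete with a toy calculation. The figures below are invented purely for illustration; the point is only that the two rating criteria can pick opposite winners:

```python
# Hypothetical figures for two made-up charities: one rated by overhead
# percentage, the other by cost per unit of actual impact.
charities = {
    "Charity A": {"overhead_pct": 5,  "cost_per_outcome": 5000},  # low overhead, weak program
    "Charity B": {"overhead_pct": 20, "cost_per_outcome": 500},   # high overhead, strong program
}

# Ranking by overhead (lower looks "better") selects A...
by_overhead = min(charities, key=lambda c: charities[c]["overhead_pct"])

# ...but ranking by cost per outcome (lower is genuinely better) selects B,
# which delivers ten times the impact per dollar despite its higher overhead.
by_impact = min(charities, key=lambda c: charities[c]["cost_per_outcome"])

print(by_overhead)  # Charity A
print(by_impact)    # Charity B
```

A donor optimizing the easy-to-measure proxy (overhead) ends up funding an order of magnitude less of what they actually care about.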
How to develop epistemic rationality
If you want to develop epistemic rationality, it helps to spend time with the best epistemic rationalists you can find. For many, although not all, this will mean Less Wrong. Read the sequences. Read the top current conversations. Put your own thinking out there (in the discussion section, for starters) so that others can help you find mistakes in your thinking, and so that you can get used to holding your own thinking to high standards. Find or build an in-person community of aspiring rationalists if you can.
Is it useful to try to read every single comment? Probably not, on the margin; better to read textbooks or to do rationality exercises yourself. But reading the Sequences helped many of us quite a bit; and epistemic rationality is the sort of thing for which sitting around reading (even reading things that are shiny-fun) can actually help.
[1] To be specific: Jim was considering personally "raising awareness" about the virtues of the free market, in the hopes that this would (indirectly) boost economic growth in the third world, which would enable more people to be educated, which would enable more people to help aim for a positive human future and an eventual positive singularity.
There are several difficulties with this plan. For one thing, it's complicated; in order to work, his awareness raising would need to indeed boost free market enthusiasm AND US citizens' free market enthusiasm would need to indeed increase the use of free markets in the third world AND this result would need to indeed boost welfare and education in those countries AND a world in which more people could think about humanity's future would need to indeed result in a better future. Conjunctions are unlikely, and this route didn't sound like the most direct path to Jim's stated goal.
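The conjunction problem is easy to quantify. The per-step probabilities below are invented for illustration (nothing in the post assigns real numbers); the point is that a chain of individually plausible links can still be a long shot overall:

```python
# Made-up probabilities for each link in Jim's causal chain.
steps = {
    "awareness raising boosts free-market enthusiasm":          0.5,
    "US enthusiasm increases free markets in the third world":  0.3,
    "freer markets boost welfare and education there":          0.6,
    "more educated thinkers yield a better long-term future":   0.5,
}

# Treating the links as independent, the whole plan succeeds only if
# every step does, so the probabilities multiply.
p_all = 1.0
for p in steps.values():
    p_all *= p

print(round(p_all, 3))  # 0.045 -- under 5%, though no single step is below 30%
```

Real steps are not independent and real probabilities are unknown, but the multiplicative structure is why each added conjunct should make us markedly less confident in the plan as a whole.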
For another thing, there are good general arguments suggesting that it is often better to donate than to work directly in a given field, and that, given the many orders of magnitude differences in efficacy between different sorts of philanthropy, it's worth doing considerable research into how best to give. (Although to be fair, Jim's emailing me was such research, and he may well have appreciated that point.)
The biggest reason it seemed Jim would benefit from LW was simply one of manner: Jim seemed smart and well-meaning, but more verbally jumbled, and less good at factoring complex questions into distinct, analyzable pieces, than I would expect had he spent longer around LW.
There are some distinctions to be made here. Cryonics obviously offers a better chance of seeing the future than rotting six feet under. Regarding retirement investment, you can just ask your parents or grandparents. Yet this argument against the necessity of empirical data breaks down at some point: shaping the Singularity is not on a par with having a positive impact on the distant poor. If you claim that predictions and falsifiability are unrelated concepts, that's fine. But believing some predictions (e.g., a technological Singularity spawned by AGI seeds capable of superhuman recursive self-improvement) is not the same as believing others (e.g., that a retirement plan will still pay out in old age).
How should I interpret the above quote? If one needs an advanced education to follow the arguments on Less Wrong in the first place, yet those very arguments are supposed to tell you what to do, how does Less Wrong help in deciding what to do? This is just one example of my experience with Less Wrong: I'm unable to follow much of it, yet I'm told it can help me decide what to do.
The basic problem here is that the education necessary to follow Less Wrong will not only teach me to be wary of the arguments on Less Wrong but will also preclude me from acting on its suggestions. How so? The main consensus here seems to be cryonics and the dangers of AGI research. If it isn't, then at least the top rationalists on Less Wrong aren't as rational as suggested, which undermines the whole intention of the original post. So I'll assume for now that those two conclusions are the most important ones you can arrive at by learning from Less Wrong. Consequently, someone like me should aim to earn enough money to support friendly AI research and to buy a cryonics contract. But this is directly opposed to what I would have to do to arrive at those conclusions and be reasonably sure of their correctness. Among other things, I would have to study, which would not allow me to earn enough money for many years.