Related to: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality
We’ve had a lot of good criticism of Less Wrong lately (including Patri’s post above, which contains a number of useful points). But to prevent those posts from confusing newcomers, this may be a good time to review what Less Wrong is useful for.
In particular: I had a conversation last Sunday with a fellow -- I'll call him Jim -- who was trying to choose a career that would let him “help shape the singularity (or simply the future of humanity) in a positive way”. He was trying to sort out which options were most efficient, and he was taking care to have goals, not roles.
So far, excellent news, right? A thoughtful, capable person is trying to sort out how, exactly, to have the best impact on humanity’s future. Whatever your views on the existential risks landscape, it’s clear humanity could use more people like that.
The part that concerned me was that Jim had put a site-blocker on LW (as well as on all the blogs he read) after reading Patri’s post, which, he said, had “hit him like a load of bricks”. Jim wanted to get his act together and really help the world, not diddle around reading shiny-fun blog comments. But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be. They were the sort of errors LW could have helped with, and there was no obvious force in his off-line, focused, productive life that could similarly help.
So, in case it’s useful to others, a review of what LW is useful for.
When you do (and don’t) need epistemic rationality
For some tasks, the world provides rich, inexpensive empirical feedback. In these tasks you hardly need reasoning. Just try the task many ways, steal from the best role-models you can find, and take care to notice what is and isn’t giving you results.
Thus, if you want to learn to sculpt, reading Less Wrong is a bad way to go about it. Better to find some clay and a hands-on sculpting course. The situation is similar for small talk, cooking, selling, programming, and many other useful skills.
Unfortunately, most of us also have goals for which we can obtain no such ready success/failure data. For example, if you want to know whether cryonics is a good buy, you can’t just try buying it and not-buying it and see which works better. If you miss your first bet, you’re out for good.
There is similarly no easy way to use the “try it and see” method to sort out what ethics and meta-ethics to endorse, what long-term human outcomes are likely, how you can have a positive impact on the distant poor, or which retirement investments *really will* be safe bets for the next forty years. For these goals we are forced to use reasoning, failure-prone as human reasoning is. If the issue is tricky enough, we’re forced to additionally develop our skill at reasoning -- to develop “epistemic rationality”.
The traditional alternative is to deem subjects on which one cannot gather empirical data "unscientific" -- subjects on which respectable people should not speak -- or else to focus one's discussion on the most similar-seeming subject for which it *is* easy to gather empirical data (and so, for example, to rate charities as "good" when they have a low percentage of overhead, rather than a high impact per dollar). Insofar as we are stuck caring about such goals and betting our actions on various routes to their achievement, this is not much help.[2]
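To see concretely why the overhead heuristic misleads, here is a toy comparison; both charities and every figure in it are invented purely for illustration:

```python
# Toy illustration: overhead ratio is a poor proxy for impact per dollar.
# All figures are invented for the sake of the example.

charities = {
    # name: (overhead fraction, lives improved per programme dollar)
    "Charity A": (0.20, 1 / 50),    # high overhead, highly effective programme
    "Charity B": (0.05, 1 / 5000),  # low overhead, weak programme
}

donation = 1000  # dollars

for name, (overhead, impact_per_dollar) in charities.items():
    programme_dollars = donation * (1 - overhead)
    lives_improved = programme_dollars * impact_per_dollar
    print(f"{name}: {lives_improved:.2f} lives improved per ${donation}")

# Charity A: 16.00 lives improved per $1000
# Charity B: 0.19 lives improved per $1000
# Despite four times the overhead, Charity A does ~80x more good per dollar.
```

The overhead figure is the one that is easy to measure, which is exactly why it gets measured; the impact figure is the one that actually matters.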
How to develop epistemic rationality
If you want to develop epistemic rationality, it helps to spend time with the best epistemic rationalists you can find. For many, although not all, this will mean Less Wrong. Read the sequences. Read the top current conversations. Put your own thinking out there (in the discussion section, for starters) so that others can help you find mistakes in your thinking, and so that you can get used to holding your own thinking to high standards. Find or build an in-person community of aspiring rationalists if you can.
Is it useful to try to read every single comment? Probably not, on the margin; better to read textbooks or to do rationality exercises yourself. But reading the Sequences helped many of us quite a bit; and epistemic rationality is the sort of thing for which sitting around reading (even reading things that are shiny-fun) can actually help.
[1] To be specific: Jim was considering personally "raising awareness" about the virtues of the free market, in the hopes that this would (indirectly) boost economic growth in the third world, which would enable more people to be educated, which would enable more people to help aim for a positive human future and an eventual positive singularity.
There are several difficulties with this plan. For one thing, it's complicated; in order to work, his awareness-raising would need to indeed boost free-market enthusiasm AND US citizens' free-market enthusiasm would need to indeed increase the use of free markets in the third world AND this result would need to indeed boost welfare and education in those countries AND a world in which more people could think about humanity's future would need to indeed result in a better future. Conjunctions are unlikely, and this route didn't sound like the most direct path to Jim's stated goal.
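To make the conjunction problem concrete, multiply through some illustrative step probabilities (every number here is made up; the point is the multiplication, not the particular values):

```python
# Toy illustration of why conjunctive plans are fragile.
# Each probability is an invented estimate of one link in Jim's chain.
steps = {
    "awareness-raising boosts free-market enthusiasm": 0.5,
    "US enthusiasm increases free markets in the third world": 0.3,
    "freer markets boost welfare and education there": 0.6,
    "more educated people yield a better long-term future": 0.5,
}

p_success = 1.0
for step, p in steps.items():
    p_success *= p
    print(f"{p:.0%}  {step}")

print(f"\nP(all steps succeed) = {p_success:.1%}")
# Even with every link at 30-60%, the whole chain pays off only ~4.5% of the
# time (treating the links as roughly independent).
```

A more direct plan with fewer links can win even when each of its individual steps looks less impressive.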
For another thing, there are good general arguments suggesting that it is often better to donate than to work directly in a given field, and that, given the many orders of magnitude differences in efficacy between different sorts of philanthropy, it's worth doing considerable research into how best to give. (Although to be fair, Jim's emailing me was such research, and he may well have appreciated that point.)
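A sketch of the donate-versus-work-directly comparison, again with invented salary and cost figures:

```python
# Toy comparison of working directly in a field vs. earning more elsewhere
# and donating the difference. All figures below are invented.

salary_in_field = 40_000          # earned doing the direct work oneself
salary_elsewhere = 90_000         # earned in a higher-paying career
cost_to_fund_one_worker = 45_000  # what it costs a charity to hire one worker

donatable = salary_elsewhere - salary_in_field
workers_funded = donatable / cost_to_fund_one_worker

print("Working directly:  1 worker (yourself)")
print(f"Earning to donate: {workers_funded:.1f} workers funded")
# With these numbers the two routes are roughly comparable; shift any figure
# by an order of magnitude and one route clearly dominates -- which is why
# the research into efficacy is worth doing before committing.
```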
The biggest reason it seemed Jim would benefit from LW was simply his manner of thinking: Jim seemed smart and well-meaning, but more verbally jumbled, and less good at factoring complex questions into distinct, analyzable pieces, than I would expect if he had spent longer around LW.