Note/edit: I'm imagining explaining this to a friend or family member who is at least somewhat charitable and trusting of my judgement. I am not imagining simply putting this on the About page. I should have made this clear from the beginning - my bad. However, I do believe that some (but not all) of the design decisions would be effective on something like the About page as well.
There's this guy named Eliezer Yudkowsky. He's really, really smart. He founded MIRI, wrote a popular Harry Potter fanfic centered on rationality, and has a particularly strong background in AI, probability theory, and decision theory. There's another guy named Robin Hanson. Hanson is an economics professor at George Mason, and has a background in physics, AI, and statistics. He's also really, really smart.
Yudkowsky and Hanson started a blog called Overcoming Bias in November of 2006. They blogged about rationality. Later on, Yudkowsky left Overcoming Bias and started his own blog - LessWrong.
What is rationality? Well, for starters, it's incredibly interdisciplinary. It draws on probability theory, decision theory, logic, evolutionary psychology, the study of cognitive biases, lots of philosophy, and AI. The goal of rationality is to help you be right about the things you believe. In other words, the goal of rationality is to be wrong less often. To be LessWrong.
Weird? Useful?
LessWrong may seem fringe-y and cult-y, but the teachings are usually things that aren't controversial at all. Again, rationality teaches you things like probability theory and evolutionary psychology. Things that academics agree on and have studied pretty thoroughly. Sometimes the findings haven't made it to mainstream culture yet, but they're almost always things that the experts consider to be pretty obvious. These aren't weird nerds cooped up in their parents' basement preaching crazy ideas they came up with. These are early adopters who are taking things that have already been discovered, bringing them together, and showing us how the findings could help us be wrong less frequently.
Rationalists tend to be a little "weird" though. And they tend to believe a lot of "weird" things. A lot of science-fiction-y things. They believe we're going to blend with robots and become transhumans soon. They believe that we may be able to freeze ourselves before we die, and then be revived by future generations. They believe that we may be able to upload our consciousness to a computer and live as a simulation. They believe that computers are going to become super powerful and completely take over the world.
Personally, I don't understand these things well enough to really speak to their plausibility. My impression so far is that rationalists have very good reasons for believing what they believe, and that they're probably right. But perhaps you don't share this impression. Perhaps you think those conclusions are wacky and ridiculous. Even so, the techniques may still be useful to you, right? It's possible that rationalists have misapplied the techniques in some ways, but that if you learn the techniques and add them to your arsenal, they'll help you level up. Consider this before writing rationality off as wacky.
Overview
So, what does rationality teach you? Here's my overview:
- The difference between reality and our models of reality (see map vs. territory).
- That things just are their components. An airplane is made up of quarks; "airplane" is a concept we created to model reality.
- To think in gray. To say, "I sense that X is true" or "I'm pretty sure that X is true" instead of "X is true".
- To update your beliefs incrementally. To say, "I still don't think X is true, but now that you've shown me Y, I'm somewhat less confident." A black-and-white thinker, on the other hand, would say, "Eh, even though you showed me Y, I still just don't think X is true."
- How much we should actually update our beliefs when we come across a new observation. A little? A lot? Bayes' theorem has the answers, and it is a fundamental component of rationality (see the worked example after this list).
- That science, as an institution, prevents you from updating your beliefs quickly enough. Why? Because it requires a lot of good data before you're allowed to update your beliefs at all. Even just a little bit. Of course you shouldn't update too much with bad data, but you should still nudge your beliefs a bit in the direction that the data point toward.
- To make your beliefs be about things that are actually observable. Think: if a tree falls in a forest and no one hears it, does it make a sound? Adding this technique to your arsenal will help you make sense of a lot of philosophical dilemmas.
- To make decisions based on consequences. To distinguish between your end goal and the stepping stones you must pass on your way there. People often forget what it is that they're actually pursuing and get tricked into pursuing the stepping stones alone, e.g. getting too caught up in climbing the career ladder.
- How evolution really works, and how it helps explain why we are the way we are today. Hint: it's slow and stupid.
- How quantum physics really works.
- How words can be wrong.
- Utilitarian ethics.
- That you have A LOT of biases. And that by understanding them, you can sidestep the pain they would otherwise cause you.
- Similarly, that you have A LOT of "failure modes", and that by understanding them, you can sidestep a lot of the pain they would otherwise cause you.
- Lots of healthy mindsets to adopt. For example:
- Tsuyoku Naritai - "I want to become stronger!"
- Notice when you're confused.
- Recognize that being wrong is exciting, and something you should embrace - it means you are about to learn something new and level up!
- Don't just believe the opposite of what your stupid opponent believes out of frustration and spite. Sometimes they're right for the wrong reasons. Sometimes there's a third alternative you're not considering.
- To give something a fair chance, be sure to think about it for five minutes by the clock.
- When you're wrong, scream "OOPS!". That way, you can move on in the right direction immediately. Don't just make minor concessions and rationalize why you were only partially wrong.
- Don't be content with just trying. You'll give up too early if you do that.
- "Impossible" things are often not actually impossible. Consider how impossible wireless communication would seem to someone who lived 500 years ago. Try studying something for a year or five before you claim that it is impossible.
- Don't say things to sound cool, say them because they're true. Don't be overly humble. Don't try to sound wise by being overly neutral and cautious.
- "Mere reality" is actually pretty awesome. You could vibrate air molecules in an extremely, extremely precise way, such that you could take the contents of your mind and put them inside another person's mind? What???? Yeah. It's called talking.
- Shut up and calculate. Sometimes things aren't intuitive, and you just have to trust the math.
- It doesn't matter how good you are relative to others, it matters how good you are in an absolute sense. Reality doesn't grade you on a curve.
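To give a feel for the Bayes' theorem point above, here's a toy worked example. The numbers (a 1% base rate, a 90% true-positive rate, a 5% false-positive rate) are made up by me purely for illustration; they aren't from the Sequences.

```python
# Toy illustration of Bayes' theorem: how much should one positive test
# result move your belief that you have a rare condition?
# All numbers below are hypothetical, chosen only to illustrate the update.

prior = 0.01                  # P(condition): 1% of people have it
p_pos_given_condition = 0.90  # P(positive | condition): true-positive rate
p_pos_given_healthy = 0.05    # P(positive | no condition): false-positive rate

# Total probability of testing positive, with or without the condition.
p_pos = (p_pos_given_condition * prior
         + p_pos_given_healthy * (1 - prior))

# Bayes' theorem: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
posterior = p_pos_given_condition * prior / p_pos

print(f"Belief before the test: {prior:.1%}")            # 1.0%
print(f"Belief after a positive test: {posterior:.1%}")  # about 15.4%
```

The point isn't the particular numbers; it's that the size of the update is a precise quantity rather than a vibe. One positive test moves you from 1% to roughly 15%: much more than nothing, much less than certainty.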
Sound interesting? Good! It is!
Eliezer wrote about all of this stuff in bite-sized blog posts, about one per day. He claims this helps him write faster than working on one big book would. Originally, the collection of posts was referred to as The Sequences, and was organized into categories. More recently, the posts were refined and brought together into a book - Rationality: From AI to Zombies.
Personally, I find the writing dense and difficult to follow. Things like AI are often used as examples in places where a more accessible example could have been used instead. Eliezer himself confesses that he needs to "aim lower". Still, the content is awesome, insightful, and useful, so if you can make your way past some of the less clear explanations, I think you have a lot to gain. I find the Wiki and the article summaries to be incredibly useful. There's also HPMOR - a fanfic Eliezer wrote to convey the teachings of rationality in a more accessible way.
Gaps
So far, there hasn't been enough of a focus on applying rationality to help you win in everyday life. Instead, the focus has been on solving big, difficult, theoretical problems. Eliezer mentions this in the preface of Rationality: From AI to Zombies. Developing the more practical, applied part of The Art is definitely something that needs to be done.
Learning how to rationally work in groups is another gap. Unfortunately, rationalists aren't particularly good at working together. Yet.
Community
From 2009-2014 (excluding 2010), there were surveys of the LessWrong readership. There were usually about 1,500 respondents, which tells you something about the size of the community (note that there are people who read/lurk/comment but who didn't submit the survey). Readers live throughout the globe, and tend to come from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc. crowd. There are also a lot of effective altruists - people who try to do good for the world, and who try to do so as efficiently as possible. See the wiki's FAQ for the results of these surveys.
There are meet-ups in many cities, and in many countries. Berkeley is considered to be the "hub". See How to Run a Successful LessWrong Meetup for a sense of what these meet-ups are like. Additionally, there is a Slack group, and an online study hall. Both are pretty active.
Community members mostly agree with the material described in The Sequences. This common jumping off point makes communication smoother and more productive. And often more fulfilling.
The culture amongst LessWrongians is something that may take some getting used to. Community members tend to:
- Be polyamorous.
- Drink Soylent.
- Communicate explicitly. E.g. "I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."
- Be a bit socially awkward (about 1/4 are on the autism spectrum).
- Use lots of odd expressions.
In addition... they're totally awesome! In my experience, I've found them to be particularly caring, altruistic, empathetic, open-minded, good at communicating, humble, intelligent, interesting, reasonable, hard-working, respectful, and honest. Those are the kinds of people I'd like to spend my time amongst.
Diaspora
LessWrong isn't nearly as active as it used to be. In "the golden era", Eliezer and a group of other core contributors would post insightful things many times each week. Now, those core contributors have moved on to work on their own projects and do their own things. There is much less posting on lesswrong.com than there used to be, but there is still some. And there is still related activity elsewhere. See the wiki's FAQ for more.
Related Organizations
MIRI - Tries to make sure AI is nice to humans.
CFAR - Runs workshops that focus on making rationality useful to people in their everyday lives.
Meta:
Of course, I may have misunderstood certain things. E.g. I don't feel that I have a great grasp on Bayesianism vs. science. If so, please let me know.
Note: in some places, I exaggerated slightly for the sake of a smoother narrative. I don't feel that the exaggerations interfere with the spirit of the points made (DH6). If you disagree, please let me know by commenting.
Having spent years thinking about this, and having had the opportunity to talk with open-minded, intelligent, successful people in social groups, extended family, etc., I concluded that most explicit discussion of the value of inquiring into values and methods (scope sensitivity and epistemological rigor being two of the major threads of what applied rationality looks like) just works incredibly rarely, and only then if there is strong existing interest.
Taking ideas seriously and trusting your own reasoning methods as a filter is a dangerous, high-variance move that most people are correct to shy away from. My impression of the appeal of LW, retrospectively, is that it (on average) attracted people who were or are underperforming relative to g (this applies to myself). When you are losing, you increase variance. When you are winning, you decrease it.
I eventually realized that what I was really communicating to people's system 1 was something like "Hey, you know those methods of judgment like proxy measures of legitimacy and mimesis that have granted you a life you like and that you want to remain stable? Those are bullshit, throw them away and start using these new methods of judgment advocated by a bunch of people who aren't leading lives resembling the one you are optimizing for."
This has not resulted in many sales. It is unrealistic to expect to convert a significant fraction of the tribe to shamanism.
Maybe a side note, but it's not obvious to me that "when you are losing you increase variance; when you are winning you decrease it" is in general true, whether normatively or empirically.