Note/edit: I'm imagining explaining this to a friend or family member who is at least somewhat charitable and trusting of my judgement. I am not imagining simply putting this on the About page. I should have made this clear from the beginning - my bad. However, I do believe that some (but not all) of the design decisions would be effective on something like the About page as well.


There's this guy named Eliezer Yudkowsky. He's really, really smart. He founded MIRI, wrote a popular fanfic of Harry Potter that centers around rationality, and has a particularly strong background in AI, probability theory, and decision theory. There's another guy named Robin Hanson. Hanson is an economics professor at George Mason, and has a background in physics, AI and statistics. He's also really, really smart.

Yudkowsky and Hanson started a blog called Overcoming Bias in November of 2006. They blogged about rationality. Later on, Yudkowsky left Overcoming Bias and started his own blog - LessWrong.

What is rationality? Well, for starters, it's incredibly interdisciplinary. It involves academic fields like probability theory, decision theory, logic, evolutionary psychology, the study of cognitive biases, lots of philosophy, and AI. The goal of rationality is to help you be right about the things you believe. In other words, the goal of rationality is to be wrong less often. To be LessWrong.

Weird? Useful?

LessWrong may seem fringe-y and cult-y, but the teachings are usually things that aren't controversial at all. Again, rationality teaches you things like probability theory and evolutionary psychology. Things that academics all agree on. Things that academics have studied pretty thoroughly. Sometimes the findings haven't made it to mainstream culture yet, but they're almost always things that the experts all agree on and consider to be pretty obvious. These aren't some weird nerds cooped up in their parents' basement preaching crazy ideas they came up with. These are early adopters who are taking things that have already been discovered, bringing them together, and showing us how the findings could help us be wrong less frequently.

Rationalists tend to be a little "weird" though. And they tend to believe a lot of "weird" things. A lot of science-fiction-y things. They believe we're going to blend with robots and become transhumans soon. They believe that we may be able to freeze ourselves before we die, and then be revived by future generations. They believe that we may be able to upload our consciousness to a computer and live as a simulation. They believe that computers are going to become super powerful and completely take over the world.

Personally, I don't understand these things well enough to really speak to their plausibility. My impression so far is that rationalists have very good reasons for believing what they believe, and that they're probably right. But perhaps you don't share this impression. Perhaps you think those conclusions are wacky and ridiculous. Even if you think this, it's still possible that the techniques may be useful to you, right? It's possible that rationalists have misapplied the techniques in some ways, but that if you learn the techniques and add them to your arsenal, they'll help you level up. Consider this before writing rationality off as wacky.

Overview

So, what does rationality teach you? Here's my overview:

  • The difference between reality, and our models of reality (see map vs. territory).
  • That things are their components. Airplanes are made up of quarks. "Airplane" is a concept we created to model reality.
  • To think in gray. To say, "I sense that X is true" or "I'm pretty sure that X is true" instead of "X is true".
  • To update your beliefs incrementally. To say, "I still don't think X is true, but now that you've shown me Y, I'm somewhat less confident." On the other hand, a Black And White Thinker would say, "Eh, even though you showed me Y, I still just don't think X is true."
  • How much we should actually update our beliefs when we come across a new observation. A little? A lot? Bayes' theorem has the answers. It is a fundamental component of rationality.
  • That science, as an institution, prevents you from updating your beliefs quickly enough. Why? Because it requires a lot of good data before you're allowed to update your beliefs at all. Even just a little bit. Of course you shouldn't update too much with bad data, but you should still nudge your beliefs a bit in the direction that the data point toward.
  • To make your beliefs about things that are actually observable. Think: if a tree falls in a forest and no one hears it, does it make a sound? Adding this technique to your arsenal will help you make sense of a lot of philosophical dilemmas.
  • To make decisions based on consequences. To distinguish between your end goal, and the stepping stones you must pass on your way there. People often forget what it is that they are actually pursuing, and get tricked into pursuing the stepping stones alone. Ex. getting too caught up moving up the career ladder.
  • How evolution really works, and how it helps explain why we are the way we are today. Hint: it's slow and stupid.
  • How quantum physics really works.
  • How words can be wrong.
  • Utilitarian ethics.
  • That you have A LOT of biases. And that by understanding them, you could side-step the pain that they would otherwise have caused you.
  • Similarly, that you have A LOT of "failure modes", and that by understanding them, you could side-step a lot of the pain that they would otherwise have caused you.
  • Lots of healthy mindsets you should take. For example:
    • Tsuyoku Naritai - "I want to become stronger!"
    • Notice when you're confused.
    • Recognize that being wrong is exciting, and something you should embrace - it means you are about to learn something new and level up!
    • Don't just believe the opposite of what your stupid opponent believes out of frustration and spite. Sometimes they're right for the wrong reasons. Sometimes there's a third alternative you're not considering.
    • To give something a fair chance, be sure to think about it for five minutes by the clock.
    • When you're wrong, scream "OOPS!". That way, you can just move on in the right direction immediately. Don't just make minor concessions and rationalize why you were only partially wrong.
    • Don't be content with just trying. You'll give up too early if you do that.
    • "Impossible" things are often not actually impossible. Consider how impossible wireless communication would seem to someone who lived 500 years ago. Try studying something for a year or five before you claim that it is impossible.
    • Don't say things to sound cool, say them because they're true. Don't be overly humble. Don't try to sound wise by being overly neutral and cautious.
    • "Mere reality" is actually pretty awesome. You could vibrate air molecules in an extremely, extremely precise way, such that you could take the contents of your mind and put them inside another persons mind? What???? Yeah. It's called talking.
    • Shut up and calculate. Sometimes things aren't intuitive, and you just have to trust the math.
    • It doesn't matter how good you are relative to others, it matters how good you are in an absolute sense. Reality doesn't grade you on a curve.
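To make the Bayes' theorem point concrete, here's a tiny sketch with made-up numbers (the 1% base rate and the test accuracies below are hypothetical, just for illustration):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical setup: a condition with a 1% base rate, and a test that
# catches it 90% of the time but false-positives 5% of the time.
prior = 0.01            # P(H): belief before seeing the evidence
p_e_given_h = 0.90      # P(E|H): chance of a positive test if H is true
p_e_given_not_h = 0.05  # P(E|~H): chance of a positive test if H is false

# P(E): total probability of seeing a positive test at all
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # 0.154
```

The punchline is exactly the "update incrementally" bullet: a positive test should move you from 1% to roughly 15%, not to "definitely true". The evidence nudges the belief; it doesn't flip it.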

Sound interesting? Good! It is!

Eliezer wrote about all of this stuff in bite-sized blog posts. About one per day. He claims it helps him write faster as opposed to writing one big book. Originally, the collection of posts was referred to as The Sequences, and was organized into categories. More recently, the posts were refined and brought together into a book - Rationality: From AI to Zombies.

Personally, I believe the writing is dense and difficult to follow. Things like AI are often used as examples in places where a more accessible example could have been used instead. Eliezer himself confesses that he needs to "aim lower". Still, the content is awesome, insightful, and useful, so if you can make your way past some of the less clear explanations, I think you have a lot to gain. I also find the Wiki and the article summaries to be incredibly useful. There's also HPMOR - a fanfic Eliezer wrote to describe the teachings of rationality in a more accessible way.

Gaps

So far, there hasn't been enough of a focus on applying rationality to help you win in everyday life. Instead, the focus has been on solving big, difficult, theoretical problems. Eliezer mentions this in the preface of Rationality: From AI to Zombies. Developing the more practical, applied part of The Art is definitely something that needs to be done.

Learning how to rationally work in groups is another thing that really needs to be done. Unfortunately, rationalists aren't particularly good at working together. Yet.

Community

From 2009-2014 (excluding 2010), there were surveys of the LessWrong readership. There were usually about 1,500 responders, which tells you something about the size of the community (note that there are people who read/lurk/comment, but who didn't submit the survey). Readers live throughout the globe, and tend to come from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc. crowd. There are also a lot of effective altruists - people who try to do good for the world, and who try to do so as efficiently as possible. See the wiki's FAQ for results of these surveys.

There are meet-ups in many cities, and in many countries. Berkeley is considered to be the "hub". See How to Run a Successful LessWrong Meetup for a sense of what these meet-ups are like. Additionally, there is a Slack group, and an online study hall. Both are pretty active.

Community members mostly agree with the material described in The Sequences. This common jumping off point makes communication smoother and more productive. And often more fulfilling.

The culture amongst LessWrongians is something that may take some getting used to. Community members tend to:

  • Be polyamorous.
  • Drink Soylent.
  • Communicate explicitly. Eg. "I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."
  • Be a bit socially awkward (about 1/4 are on the autism spectrum).
  • Use lots of odd expressions.

In addition... they're totally awesome! In my experience, I've found them to be particularly caring, altruistic, empathetic, open-minded, good at communicating, humble, intelligent, interesting, reasonable, hardworking, respectful and honest. Those are the kinds of people I'd like to spend my time amongst.

Diaspora

LessWrong isn't nearly as active as it used to be. In "the golden era", Eliezer along with a group of other core contributors would post insightful things many times each week. Now, these core contributors have fled to work on their own projects and do their own things. There is much less posting on lesswrong.com than there used to be, but there is still some. And there is still related activity elsewhere. See the wiki's FAQ for more.

Related Organizations

MIRI - Tries to make sure AI is nice to humans.

CFAR - Runs workshops that focuses on being useful to people in their everyday lives.


Meta:

Of course, I may have misunderstood certain things. Ex. I don't feel that I have a great grasp on bayesianism vs. science. If so, please let me know.

Note: in some places, I exaggerated slightly for the sake of a smoother narrative. I don't feel that the exaggerations interfere with the spirit of the points made (DH6). If you disagree, please let me know by commenting.


Having spent years thinking about this, and having had the opportunity to talk with open-minded, intelligent, successful people in social groups, extended family, etc., I concluded that most explicit discussion of the value of inquiring into values and methods (scope sensitivity and epistemological rigor being two of the major threads of what applied rationality looks like) just works incredibly rarely, and only then if there is strong existing interest.

Taking ideas seriously and trusting your own reasoning methods as a filter is a dangerous, high-variance move that most people are correct to shy away from. My impression of the appeal of LW retrospectively is that it (on average) attracted people who were or are underperforming relative to g (this applies to myself). When you are losing you increase variance. When you are winning you decrease it.

I eventually realized that what I was really communicating to people's system 1 was something like "Hey, you know those methods of judgment like proxy measures of legitimacy and mimesis that have granted you a life you like and that you want to remain stable? Those are bullshit, throw them away and start using these new methods of judgment advocated by a bunch of people who aren't leading lives resembling the one you are optimizing for."

This has not resulted in many sales. It is unrealistic to expect to convert a significant fraction of the tribe to shamanism.

Maybe a side note, but it's not obvious to me that

When you are losing you increase variance. When you are winning you decrease it.

is in general true, whether normatively or empirically.

Earlier today, it occurred to me that the rationalist community might be accurately characterized as "a support group for high IQ people". This seems concordant with your observations.

I'd like to emphasise that in this context, "high IQ" means higher than Mensa level (which is what most people would probably imagine when you say "high IQ").

I used to regularly attend Mensa meetups, and now I regularly attend LW meetups, and seems to me that the difference between LW and Mensa is about the same as the difference between Mensa and the normies. This doesn't mean the whole difference is about IQ, but there seems to be a significant intelligence component anyway.

As for the comment that it's difficult to get people to be interested, that seems very true to me, and it's good to have the data from your vast experience with this.

A separate question is how we can best attempt to get people to be interested. You commented on the failure you experienced with the "throw your techniques away, these ones are better" approach. That seems like a good point. I sense that my message takes that approach too strongly and could be improved.

I'm interested in hearing about anything you've found to be particularly effective.

My impression of the appeal of LW retrospectively is that it (on average) attracted people who were or are underperforming relative to g (this applies to myself). When you are losing you increase variance. When you are winning you decrease it.

There's also the issue of having plenty of spare time.

My impression of the appeal of LW retrospectively is that it (on average) attracted people who were or are underperforming relative to g (this applies to myself). When you are losing you increase variance. When you are winning you decrease it.

This also applies to me.

He's really, really smart.

This is the kind of phrasing that usually costs more to say than you can purchase with it. Anyone who is themselves really, really smart is going to raise hackles at this kind of talk; and is going to want strong evidence moreover ( and since a smart person would independently form the same judgement about Yudkowsky, if it is correct, you can safely just supply the evidence without the attached value judgment ).

Fiction authors have a fairly robust rule of thumb: show, don't tell. Especially don't tell me what judgement to form. I'd tack on this: don't negotiate. Haggling with a person over their impressions of a group of other people with suggestions like it's still possible that the techniques may be useful to you, right? immediately inspires suspicion in anyone with any sort of disposition to scepticism. Bartering _may_s simultaneously creates the impression of personal uncertainty and inability to demonstrate while coupling it to the obvious fact that this person wants me to form a certain judgement.

If I were to introduce a stranger to LessWrong I'd straightforwardly tell them what it is: it's where people attracted to STEM go to debate and discuss mostly STEM-related (and generally academic) topics; with a heavy bias towards topics that are in the twilight zone between sci-fi and feasible scientific reality, also with a marked tendency for employing a set of tools and techniques of thought derived from studying cognitive science and an associated tendency to frame discussions in the language associated with those tools.

Thanks for calling this out. I was imagining explaining it to a friend or family member who is at least somewhat charitable and trusting of my judgement. In that case, I expect them to not raise hackles, and I think it's useful to communicate that I think the authors are particularly smart.

However, if this were something that were posted on Less Wrong's About page, for example, I could definitely see how this would turn newcomers away, and I agree with you. Self-promoting as "really, really smart" definitely does seem like something that turns people off and makes them skeptical.

Thank you for being gracious about accepting the criticism.

He never finished high school, but taught himself a bunch of stuff.

Is this really the best second sentence to have? This, plus a few pieces later (like saying LW is fringe-y and cult-y before calling it mostly about noncontroversial things) seems like you're optimizing around an objection you're imagining the listener has ("isn't that place Yudkowsky's cult?"), which causes them to think that even if they weren't already.

That is, the basic structure here is something like:

  1. Founders

  2. Broad description of beliefs

  3. Detailed description of beliefs

  4. Problems

  5. Community

I suspect you're better off with a structure like:

  1. We know a lot more about thinking now than we did in the past, and it seems like thinking about thinking has multiplicative effects. This is especially important today, given how much work is knowledge work.

  2. There's a cluster of people interested in that who gathered around a clear explanation of the sort of worldview you'd build today as a cognitive psychologist and a computer programmer, that you couldn't have built in the past but is built on the past. That is, the fruit of lots of different intellectual traditions have fertilized the roots of this one.

  3. As an example of this, a core concept, "the map is not the territory," comes from General Semantics through Hayakawa. What it means is that we have mental models of external reality that, from the inside, seem to be reality, but are different, just like Google Maps might look a lot like the surface of the Earth but it isn't. This sort of mental separation between beliefs and reality allows for a grounded understanding of the relationships between one's beliefs and reality, which has lots of useful downstream effects.

  4. But that's just one out of many concepts; the really cool thing about the rationality community is that when everyone has the same language (and underlying concepts), they can talk much faster about much more interesting things, cutting quickly to the heart of matters and expanding the frontiers of understanding. Lots of dumb arguments just don't happen, because everyone knows how to avoid them.

I get the impression that a lot of people start off with a feeling that it's weird and cult-y. For that reason, I feel it's important to address it and communicate that "actually, rationality is normal". If you didn't already find it to be weird (and wouldn't have come to find it weird after some initial investigation), my intuition is that such a forewarning wouldn't lead you to consider it weird, and thus has a minimal downside. I feel somewhat confident about that intuition, but not too confident.

This would be an interesting thing to test though. And I look forward to updating my beliefs based on what the experiences and intuitions of others are regarding this.

"actually, X" is never a good way to sell anything. Scientists are quite prone to this kind of speech which from their perspective is fully justified ( because they've exhaustively studied a certain topic ) - but what the average person hears is the "you don't know what you're talking about" half of the implication which makes them deaf to the "I do know what I'm talking about" half. If you just place the fruits of rationality on display; anyone with a brain will be able to recognize them for what they are and they'll adjust their judgements accordingly.

Here's an interesting exercise - find anyone in the business of persuasion ( a lawyer, a salesman, a con artist ) and see how often you hear them say things like "no, actually..." ( or how often you hear them not saying these things ).

My impression: a major issue is that other people get the idea that LessWrong comes from a few people preaching their ideas, when in reality, it's people who mostly preach ideas that have been discovered by and are widely agreed upon by academic experts. Just saying, "it comes from academics" seems to not address this major issue directly enough.

That said, I see what you mean about "actually, X" being a pattern that may lead people to instinctively argue the other way. So I see that there is a cost, but my impression is that the cost doesn't outweigh the benefit that comes with directly addressing a major concern that others have. For most audiences; there are certainly some less charitable audiences who need to be approached more gently.

I'd consider my confidence in this to be moderate. Getting your data point has led me to shift downwards a bit.

Hate to have to say this but directly addressing a concern is social confirmation of a form that the concern deserves to be addressed, and thus that it's based in something real. Imagine a Scientologist offering to explain to you why Scientology isn't a cult.

Of the people I know of who are outright hostile to LW, it's mostly because of basilisks and polyamory and other things that make LW both an easy and a fun target for derision. And we can't exactly say that those things don't exist.

Hate to have to say this but directly addressing a concern is social confirmation of a form that the concern deserves to be addressed, and thus that it's based in something real.

I could see some people responding that way. But I could see others responding with, "oh, ok - that makes sense". Or maybe, "hm, I can't tell whether this is legit - let me look into it further". There are lots of citations and references in the LessWrong writings, so it's hard to argue with the fact that it's heavily based off of existing science.

Still, there is the risk of some people just responding with, "Jeez, this guy is getting defensive already. I'm skeptical. This LessWrong stuff is not for me." I see that directly addressing a concern can signal bad things and cause this reaction, but for whatever reason, my brain is producing a feeling that this sort of reaction will be the minority in this context (in other contexts, I could see the pattern being more harmful). I'm starting to feel less confident in that, though. I have to be careful not to Typical Mind here. I have an issue with Typical Minding too much, and know I need to look out for it.

The good thing is that user research could totally answer this question. Maybe that'd be a good activity for a meet-up group or something. Maybe I'll give it a go.

If you just place the fruits of rationality on display; anyone with a brain will be able to recognize them for what they are and they'll adjust their judgements accordingly.

Behold LW!

:-)

Is this really the best second sentence to have?

Hm, probably not. Seems unnecessarily to risk giving an even "cult-ier" impression. Also seems worthwhile to be more specific about why I claim that he's smart. Changed, thanks.

That's... pretty bad.

If this were my introduction to LW, I'd snort and go away. Or maybe stop to troll for a bit -- this intro is soooo easy to make fun of.

I'd recommend to nuke this text from orbit and start anew.

If this were my introduction to LW, I'd snort and go away. Or maybe stop to troll for a bit -- this intro is soooo easy to make fun of.

Well, glad you didn't choose the first option, then.

I seldom agree with Lumifer but this comment is right on track. Sorry OP, I am not sure what kind of Outsider you are thinking of, but I am having trouble of thinking of anyone outside LW for whom this way of framing it would be at all appealing.


Something that seems relevant is this attempt I made a while back at a friendly intro to rationality.

I think that you might be trying to get across a lot of information here. I think this might be fine in certain cases for conversation, but I definitely wouldn't recommend trying to send this as-is to people. Also, a lot of the mention of community norms / etc. seem like potential turn-offs.

What may be of interest is strategies that have worked for me in piquing people's interest:

  • Starting with cognitive psychology. People seem naturally interested in this area of study, and if you can present your group as one that has cool info worth delving into, they get interested. If you then follow up with this idea of "mental strategies" that can boost your thinking, you can move into basic rationality from there.

  • For AI risk, acknowledging the media straw men from pop culture and focusing on how poor specification can cause problems (EX: pointing to how code does what it says and not what you mean).
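That "does what it says, not what you mean" point can be shown in two lines. A toy illustration (the string-vs-number mixup here is just a made-up example of a mis-specified request):

```python
# We "meant" the biggest number, but the scores got stored as strings,
# so the computer obediently compares them character by character.
scores = ["9", "10", "2"]
print(max(scores))           # "9"  -- what we said
print(max(scores, key=int))  # "10" -- what we meant
```

Nothing malfunctioned here; the specification was simply wrong, which is the intuition pump for why precisely specifying goals is hard.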

From 2009-2014 (excluding 2010), there were surveys of the LessWrong readership. There were usually about 1,500 responders, which tells you something about the size of the community (note that there are people who read/lurk/comment, but who didn't submit the survey).

We also have a survey for 2016: http://lesswrong.com/lw/nkw/2016_lesswrong_diaspora_survey_results/

There's this guy named Eliezer Yudkowsky. He's really, really smart.

Who told you that?