I told an intelligent, well-educated friend about Less Wrong, so she googled, and got "Less Wrong is an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking." and gave up immediately because she'd never heard of the biases.

While hers might not be the best possible attitude, I can't see that we win anything by driving people away with obscure language.

Possible improved introduction: "Less Wrong is a community for people who would like to think more clearly in order to improve their own and other people's lives, and to make major disasters less likely."


I like your proposal. I note, in addition, that the best introduction is probably linking people to particular articles that you think they may find interesting, instead of the main page (which is what someone will find if you just tell them the name).

Gram_Stone
I strongly agree with this. They didn't become a regular user, but I have a friend who views LW positively because they found "Pain and gain motivation" very helpful. (Funnily enough, that's PJ Eby's work.) My friend wanted to be a comedian and would always punish himself for not working harder at it; he stopped punishing himself, and now he has notebooks full of comedy bits written down. I find that typical LW users are all about epistemic rationality, but most other people don't really care until they see the benefits of some instrumental rationality. Hopefully I can indoctrinate him further at a later date. >:D

I'm not sure LW is a good entry point for people who are turned away by a few technical terms. Responding to unfamiliar scientific concepts with an immediate surge of curiosity is probably a trait I share with the majority of LWers. While it's not strictly a prerequisite for learning rationality, it certainly is for starting in medias res.

The current approach is a good selector for dividing the chaff (well educated because that's what was expected, but no true intellectual curiosity) from the wheat (whom Deleuze would call thinkers-qua-thinkers).

HPMOR instead, maybe?

John_Maxwell
Agree. NancyLebovitz's post points at something true: there should be more resources like HPMOR for "regular people" to increase their level of rationality. That's part of the reason I'm excited about groups like Intentional Insights that are working on this. But I think "dumbing down" Less Wrong is a mistake. You want to segment your audience. Less Wrong is for the segment that gets curious when they see unfamiliar technical terms.

I think that if I didn't know anything about LessWrong, the first version would be the one more likely to attract my attention. There are a lot of websites that make vague promises that reading them will improve your life. The first version at least gives examples of what exactly this website is about and how exactly it is supposed to be helpful. While the second sentence repels no one, it seems unlikely that it could pique anyone's curiosity. That might not matter when you personally recommend LW to someone, since your recommendation would presumably be enough to make them think that there is some content behind the vague introduction.

It seems to me that we might even have an interesting situation where the vague introduction works better on people who were recommended LessWrong by others (since it wouldn't repel them), but I hypothesize that the concrete introduction would be better for those who encountered LW accidentally: while most people wouldn't read past the introduction either way, the first version seems more likely to catch at least someone's attention.

By the way, I am not sure that having what is basically an About page as the frontpage is the best way to introduce new readers to LW. Few blogs do that; most put the most recent posts on the frontpage instead. That makes them look less static.

On a related topic, someone just got really confused by the fact that the infobox at the top of Discussion says

You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

And then the person naturally assumed that an article that had actually been posted in Main could be found by browsing Discussion.

NancyLebovitz
I brought up the problem with the description of the discussion board and got a bunch of karma, but nothing happened. I've contacted tech about my being able to change site text-- I'll try again without the assumption that I'd be making the changes.

"Really? You can think hurricanes out of existence"? Because that's what people will think when you say you make major disasters less likely. "Rogue AI" is a very non-central example of "disaster".

Disagreed.

The old introduction may be obscure, but at least it is informative. A visitor can follow the links, spend five minutes reading about biases (if required), and then get some understanding of what this site is actually about. The new version is much more vague. Who would like to think more clearly, improve their lives, and make major disasters less likely? Well, pretty much everybody.

I don't think that minor changes, like rephrasing the introduction or adding disclaimers about criticisms to the FAQ, will have any noticeable effect. To attract significantly mo...

roryokane
I can’t tell from Nancy’s anecdote, but it is possible that her friend couldn’t follow the links on the home page because she was actually on the Google search page. The sentence’s wording without links is important because Google quotes it in plaintext.
Pancho_Iba
Well, the sentence's wording without links is important, but maybe if your friend suggests a site, you can try not being so lazy as to not click the link to the page.

The way I see it, LessWrong is a community of people who try to avoid stupidity, including those kinds of stupidity that are very popular among highly intelligent people.

Now the problem is how to write that in a less offensive way. Also, how to write it so that outsiders will not interpret "avoiding stupidity" as "avoiding the people and opinions of the opposite tribe", because that is what most other websites would mean if they chose such words.

Maybe the problem is that all catchy descriptions are already politicized.

This said, I a...

27chaos
Something like: "intelligent people who've noticed they sometimes make very stupid mistakes, and are working to develop skills that prevent such mistakes from happening" seems to be approaching a good answer.

Fixing the description to not require a lot of background information won't help if you don't fix the content to also not require a lot of background information.

And honestly I'm not sure the site is right for somebody without a lot of background information. Some of the material here is way too persuasive without giving any implication of the existence of counterarguments or criticisms - for example, anytime Eliezer mentions the Stanford Prison Experiment anywhere. And the reinforcement system here encourages groupthink of the ugliest sort. (I am manip...

Lumifer
People below that level are unlikely to find LW appealing. They might poke their nose in, but they'll leave quickly.
tim

This is not directly related to the wording of the introduction, but to the accessibility of the homepage to new users.

I have been an avid lurker/reader of LW since the beginning. Over the past year or two, I have almost exclusively read the discussion forum due to its high turnover rate and greater density of "bite-size" ideas that tend to require less time to process and understand than promoted posts.

Only recently have I realized that a noticeable part of the reason I immediately click "Discussion" after navigating to my http://lesswr...

If someone is going to turn away at the first sight of an unknown term, then they have no chance of lasting here (I mean, imagine what'll happen when they see the Sequences).

I wonder what other negative responses we have had from friends we (tried to) introduce to LessWrong. And what can be learned from that.

I will start with one experience: a friend whom I tried to introduce was sceptical about the (hidden) agenda of LW. He pried into what the purpose was and how well-founded the content was. His impression was one of superficiality. He found some physics and philosophy posts to be off the mark (being a well-read physicist). He didn't say cult, but I guess he suspected manipulation, and tried to locate the ends of that. He was interested in the topics themselves, but the content just didn't match up.

query
Several people I've tried to introduce the site to have been turned off by the title: they interpreted "Less Wrong" to mean something akin to "We are a community of arrogant internet atheists who are Less Wrong than all of those stupid people". I believe the potshots at religion in major articles also fed into this view for at least one person; he was interested in some of the articles, but used those potshots and selected comments to argue that the community was terrible. He is now interested in many of the ideas, but is semi-actively opposed to the community (e.g. he'll say derogatory things about it if it's brought up).

In its current form, I think the site is dangerous to share with anyone who is (a) religious or sensitive to poking fun at religion, or (b) cynical with regard to "atheist communities", which is what LessWrong is at risk of pattern matching as. I'd love to have a good resource for sharing rationality with those friends, although for some of them the well of "rationality" is poisoned by their negative associations with Less Wrong.
HungryHobo
The site can be just a wee bit culty sometimes, by which I mean it pattern matches in a non-trivial way to some other cult-like organizations, not that it's literally a cult. I could list off some of them, but then we'd get into the game where people pick each element in isolation and argue that it is not 100% correlated with actual cults. It's not that Less Wrong ticks all the cult checkboxes or that every cult checkbox is conclusive, but it does check more than most organizations, and enough to give it a vibe that makes people wary.

"Less Wrong is a community for people who would like to think more clearly in order to improve their own and other people's lives, and to make major disasters less likely."

I wouldn't reference the major disasters, but I would reference some particular means by which we're trying to think more clearly.

Less Wrong is a community for people who try to find and apply means to think more clearly (i.e., Less Wrong, get it?), with major areas of interest including cognitive biases, Bayesian probability theory, causal inference, decision theory, mor...
NancyLebovitz
Preventing existential risk is part of what this site is about. Do you think it shouldn't be mentioned at all, or do you think it should be described some other way?
OrphanWilde
Have you prevented any existential risks? Has anyone here? Has the site? Is that something users actually interact with here? In what way is preventing existential risk relevant to what a user coming here is looking at, seeing, interacting with, or doing? And what introduction would be necessary to explain why Less Wrong, as a community, is congratulating itself for something it hasn't done yet, and which no user will interact with or contribute to in a meaningful way as a part of interacting with Less Wrong?
NancyLebovitz
There are people here who are working on preventing UAI-- I'm not sure they're right (I have my doubts about provable Friendliness), but it's definitely part of the history of and purpose for the site. While Yudkowsky is hardly the only person to work on practical self-improvement, it amazes me that it took a long-range threat to get people to work seriously on the sunk-cost fallacy and such-- and to work seriously on teaching how to notice biases and give them up. Most people aren't interested in existential risk, but some of the people who are interested in the site obviously are.
OrphanWilde
Granted, but is it a core aspect of the site? Is it something your users need to know, to know what Less Wrong is about? Beyond that, does it signal the right things about Less Wrong? (What kinds of groups are worried about existential threats? Would you consider worrying about existential threats, in the general case rather than this specific case, to be a sign of a healthy or unhealthy community?)
Viliam
When it comes to existential threats, humanity has already cried wolf too many times. Most of the time the explanation of the threat was completely stupid, but nonetheless, most people are already inoculated against this kind of message in general.
John_Maxwell
I think mentioning it early on sends a bad signal. Most groups that talk about the end of the world don't have very much credibility among skeptical people, so if telling people we talk about the end of the world is one of the first pieces of evidence we give them about us, they're liable to update on that evidence and decide their time is better spent reading other sites. I'd be OK with an offhand link to "existential risks" somewhere in the second half of the homepage text, but putting it in the second sentence is a mistake, in my view.
Jiro
"What they emphasize about themselves doesn't match their priorities" also sends a bad signal leading to loss of credibility. This may fall in the "you can't polish a turd" category. Talking about the end of the world is inherently Bayseian evidence for crackpotness. Thinking of the problem as "we need to change how we present talking about the end of the world" can't help. Assuming you present it at all, anything you can do to change how you present can also be done by genuine crackpots, so changing how you present it should not affect what a rational listener thinks at all.
John_Maxwell
Disagree. If LW gives lots of evidence of non-crackpottishness before discussing the end of the world (lots of intelligent discussion of biases, etc.), then by the time someone sees discussion of the end of the world, their prior on LW being an intelligent community may be strong enough that they're not driven away.
Jiro
Well, there's a whole range of crackpots, ranging from the flat-earthers who are obviously not using good reasoning to anyone who reads a few paragraphs, to groups who sound logical and erudite as long as you don't have expertise in the subject they're talking about. Insofar as LW is confused with (or actually is) some kind of crackpots, it's crackpots more towards the latter end of the scale.
John_Maxwell
Sure. And insofar as it's easy for us, we should do our best to avoid being classified as crackpots of the first type :) Avoiding classification as crackpots of the second type seems harder. The main thing seems to be having lots of high status, respectable people agree with the things you say. Nick Bostrom (Oxford professor) and Elon Musk (billionaire tech entrepreneur) seem to have done more for the credibility of AI risk than any object-level argument could, for instance.
buybuydandavis
Others have to decide on branding and identity, but I consider MIRI and the Future of Humanity Institute as having very different core missions than LessWrong, so adding those missions to a description of LW muddies the presentation, particularly for outreach material. To the point: I think "we're saving the world from Unfriendly AI" is not effective general outreach for LessWrong's core mission, and beating around the bush with "existential threats" would elicit a "Huh? What?" in most readers. And is there really much going on about other existential threats on LW? Nuclear proliferation? Biological warfare? Asteroid collisions? I don't think that existential risk is really a part of the LessWrong Blog/Forum/Wiki site mission; it's just one of the particular areas of interest of many here, like effective altruism, or Reactionary politics. CFAR makes a good institutional match to the Blog/Forum/Wiki of LessWrong, with the mission of developing, delivering, and testing training to the end of becoming LessWrong, focusing on the same subject areas and influences as LessWrong itself.

Data point: I started reading the Sequences because a blog I was reading (whose name I do not remember) linked to a post in the Fun Theory sequence, which I felt compelled to promptly read the entirety of.

I told an intelligent, well-educated friend about Less Wrong, so she googled, and got "Less Wrong is an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking." and gave up immediately because she'd never heard of the biases.

Note that that's not the first sentence on the homepage. The first sentence on the homepage is

In the past four decades, behavioral economists and cognitive psychologists have discovered many cognitive biases human brains fall prey to when thinking and deciding.

...
roryokane
I’m not sure if reformatting the home page would have made any difference for Nancy’s friend. Was she on the home page, or the Google search page for “less wrong”? Google quotes that sentence out of context, so its wording is especially important.

Possible improved introduction: "Less Wrong is a community for people who would like to think more clearly in order to improve their own and other people's lives, and to make major disasters less likely."

I like this.

Who has the power to edit the site?

I think changing "fix" to "improve" is a good idea. I also think changing the first sentence to:

"Over the last four decades, cognitive psychologists and behavioral economists have discovered many cognitive biases human brains fall prey to when thinking and deciding."

improves the flow by spacing out the word "cognitive". Footnoting "behavioral economists" is likely a good idea also, as a lot of people are unfamiliar with that concept.

[anonymous]

I must confess I don't like the term "rationalism", as it has Vulcan-Hollywood-Rationalism connotations. In the past, this term was often used to describe attitudes that are highly impractical and ideological. More in the PDF. On the "Oakeshottian scale", LW-Rationalism is far closer to the pragmatic attitude Oakeshott endorses than to the type of quasi-rationalism he criticizes.

If I had a time machine I would probably try to talk Eliezer into choosing another name. What name that would be I am not so sure, perhaps Pragmatism - after all Pe...

SanguineEmpiricist
I think if we just added a table of synonyms and a few more things like that, we would be good.
ZacHirschman
I think of it as "improvematism." Maybe "improvementism" would sound more serious.

Below is the current text (without links). I agree your sentence is helpful. Do you want to add it to the current page or replace the bias sentence?

In the past four decades, behavioral economists and cognitive psychologists have discovered many cognitive biases human brains fall prey to when thinking and deciding.

Less Wrong is an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking.

Bayesian reasoning offers a way to improve on

...
Lumifer
This is written with highly self-reflective math geeks as the target audience. The first inclination of normal people would be to run away. Lemme demonstrate -- where is my Normal Neurotypical Person hat? -- aha, here it is: "in order to fix their own thinking" -- this site is for broken people? I'm not broken! "evidence disconfirming those models" -- people don't talk like that. Is "disconfirming" even a word? "prior probabilities in Bayes' theorem" -- it's all about math?? Run away!! Is there a marketing major in the house? X-)
[anonymous]
A mass-market written message would almost immediately turn off this site's core audience... I have two sites that have decently written, high-converting copy. Both of them have been described by LWers as having great content, but too "salesy". I think a better approach might be to just go with less copy, but more design that conveys the message. Think Apple, but selling "smart and winning" instead of "hip and cutting edge". I might try to put together a cool-looking landing page that took this approach if there was enough interest.
NancyLebovitz
What are the other two sites?
[anonymous]
Newgradblueprint.com selfmaderenegade.net
Lumifer
Maybe, but we don't have to stick to extremes and go from "math geek" directly to "used car salesman" :-)
[anonymous]
Then who exactly is your target audience? One of the core ideas of effective marketing is that you craft a message that excites your target - and as a result, that that message will necessarily turn away others who are not in that target audience. If we wanted to go mass market, then we should craft a landing page for that market. If we wanted to go for math geeks, we should keep the site as it is. If we wanted to attract neither, we could go to a middle ground.
Lumifer
I like JMIV's suggestions here.
SilentCal
I'm afraid design has the same problems as copy--I, at least, find the design of your sites below as off-puttingly 'salesy' as the copy. I think we might be dealing with a hipsterish phenomenon of acquired aversion to anything with mass-market effectiveness, which I'm not sure how to deal with.
[anonymous]
I wouldn't use a similar design to those two sites - it doesn't fit the brand or message of LessWrong. Like I said, I envision something more like the large white spaces and slick graphics of Apple.
[anonymous]
It feels too salesy to me too, and now I would like to understand this feeling better, namely why some people like and some dislike "salesy" things.

I think the idea is that I am used to judging the value of things for myself based on the technical facts. So a message like "THIS IS A VERY GOOD PRODUCT FOR YOU" is a turn-off. It looks like the advertisement is making a value judgement instead of letting me make it. Generally I respond best to advertisements that are strictly factual, even when the facts are obviously trying to be impressive. A "this car can go faster than 400 km/h!!!" would be effective on me, because it is a fact, not a value, and it leaves me to decide if I value it.

So it seems we who dislike "salesy" things generally dislike it when others seem to be pushing value judgements on us. We prefer facts. Of course, facts can be grouped and displayed just as manipulatively, but in this kind of manipulation you are left more room, more freedom to decide whether you care and thus how you value it.

Since almost everybody I know would dislike your sites and salesy things in general, and for me this is the norm, I now need to form a hypothesis about how "normal" people, who like them, think. I think "normal people" are somehow okay with others pushing value judgements on them. For example, unlike me, they do not get annoyed if people say "Metallica is really great!" Unlike me, they do not reply "you should just say you really like Metallica". They are not bothered by others communicating value judgements, not even in a pushy way. Which suggests they are really secure in their own value judgements :)
[anonymous]
I think you yourself are pushing value judgments on these normal people (which isn't bad, BTW; pushing value judgements on others is a cornerstone of democracy) - but it is important to recognize that this is what's happening (I think).

One way I like to think about people is on a sliding scale from an aesthetic preference for System 1 (commonly called right-brained or intuitive) to an aesthetic preference for System 2 (commonly called left-brained or logical). Most people on LW fall quite a few standard deviations towards the System 2 side. Despite all the talk about "straw Vulcans", it seems to me there's a huge blind spot on LW for believing that System 2 thinking is good and System 1 thinking is bad - instead of realizing that each has its own strengths and weaknesses, and its own beauty.

I tend to fall more towards the center, and tend to see congruency as beautiful - when your intuitions and logic line up. Sometimes I'll let my System 2 drive the congruency, such as when I decide on the best habit, and then read lots of stories and brainwash myself to be excited for that habit. Sometimes I'll let my System 1 drive, such as realizing that I would hate living for other people, so consciously choosing not to adopt all the facets of effective altruism.

I tend to be a BIT more towards an aesthetic preference for System 2 - so I do get that inherent revulsion to, for instance, using your peer group as a heuristic for what you like - but I also understand the opposite aesthetic preference to, e.g., not overthinking things.
[anonymous]
Yes, but I intuitively dislike "salesy" stuff and then turn on System 2, put my revulsion aside, and investigate whether it still may be an objectively good product. To my System 1 it feels cheezy, dishonest and so on.

Let me propose another theory. It is all about being a spergy (Asperger) autistic misanthropic (so not only low social skills but more like disliking people) nerd geek who has fuck for social life and dislikes it in general. Social life is simply not honest. You are supposed to smile at people even though you are not always glad to see them, yet you must say "glad to see ya", and you must ask "how are you" even if you don't care at all, and when they ask the same you cannot give a true answer but must put up a smile and say "great", and so on. This disgusts people like me on a System 1 level; our System 1 is not wired for socialization and thus dislikes the smell of untruth.

It was very, very hard for me to learn that people do not always mean everything they say literally; often they are just saying things because it is expected. And I did not like it one bit. I valued truth (not LW-type super-Bayesian truth, just expressing how you actually feel) over kindness or conformity on a System 1 level. And "salesy" stuff feels like all this socialization - but on steroids. For example, salespeople in person act super-extroverted and act as if they like people very much. And advertisements feel like that too.
[anonymous]
Note that "cheezy" is an entirely aesthetic term... this is what I'm talking about in terms of blind spots. Dishonest is not necessarily an aesthetic term... but in the sense you're using it, it does seem to be more a value judgement than an evaluation that means "advertisements are lies because they tell you things that aren't true."

I agree with the rest of your analysis - one of the things I almost wrote about in my original answer is the relation between your aesthetic preferences and your social skills and desire for social interaction. What you call "spergy" I call "an aesthetic preference for System 2."
[anonymous]
Wait, I don't understand the relationship between a System 2 preference and social interaction desire/skill. Are you saying social interaction is almost fully System 1? Come to think of it, I too see a correlation between them, but I have not seen any sort of theory that connects them causally.
[anonymous]
I don't think social interaction is fully System 1 - there's a lot of political games and navigating relationships that are under the domain of System 2. But the act of socializing, in the moment, I see as largely System 1 driven. A lot of this comes from a place where I didn't understand social interactions at all... As I came to be able to interact more normally, one of the key things I learned is that in most cases of social interaction, the interaction is not about a logical exchange of information or ideas - it's taking place on an emotional level.

System 1 is built for social interaction. It helps us with the tiny subcommunications that reveal things about our emotions and status, and it's built to read others' subcommunications that communicate the same things, and give us feelings about other people based on those subcommunications. One of the reasons that people have "aspy" behavior is either that this part of their System 1 is not very powerful - they have trouble empathizing, reading those social cues, etc. - AND/OR they simply dislike that form of communication - they have an aesthetic preference for logical, factual conversations, instead of the empathy-based emotional exchange that most "normal" interactions hinge on.
Lumifer
Easy. Forming judgements is hard. Evaluating facts and converting them to your preferences takes time, energy, and some qualifications. Taking the ready-made value judgment someone is offering you with a ribbon on top is a low-effort path.
[anonymous]
No, this is too self-serving. The most realistic interpretation is usually the one that does not make you feel better than others. I would propose this: http://lesswrong.com/lw/m7l/we_should_introduce_ourselves_differently/ce51
Lumifer
Why is it self-serving? I do the same thing myself if, say, I need to buy something I don't care much about. Let's say I need to buy X, I run a quick Google search, so... people say they like brand Y, second review? they still like brand Y, OK, whatever, done. Quick and easy.
NancyLebovitz
"in order to fix their own thinking" is worse than that-- "fix" is just plain wrong, since it implies a permanent repair. "Improve" would be better as well as less hostile. I'm not sure that most people would recognize "prior probabilities in Bayes' theorem" as math-- it might just come through as way too technical to be worth the trouble.
VoiceOfRa
Is that a bug or a feature?
Luke_A_Somers
Bug, definitely. Why would you even ask that? Even if normal people can't contribute, we would definitely like everyone to think more clearly.
Lumifer
Eternal September is a thing...
Luke_A_Somers
Sure, but what was the saying? "The most likely scaling problem is that you aren't going to have any scaling problems."
Lumifer
So why would you ask any questions about scaling, right? X-/
John_Maxwell
It sounds like you're trying to reapply a principle from software development to online community building? LW has scaled. Lots of people read this site. Our last survey got over 1000 respondents. (This may not sound like much compared to e.g. reddit, but there are lots of dead little online communities that no one sees because they're dead; being dead is the default state for an online community. The relevant norm is /r/LessWrongLounge, not reddit as a whole.)
Luke_A_Somers
Well, the scaling problem in software development was specifically about getting a lot of users, so it seems relevant. Anyway, if we have scaled, and we're not having Eternal September, that suggests we're doing something right. Is that thing we're doing 'right'... keeping out the normal people? If so, that's not so great. It's important to get through to normal people. My (inadequately stated) point was, we're much more likely to have problems attracting enough normal people than having trouble dealing with a flood of them. A little change to the header isn't going to make that big a difference.
Lumifer
Eternal September is normal people. (It's a bit more selective than Soylent Green).
Luke_A_Somers
No, Eternal September is being overwhelmed by clueless noobs. If the pace of growth is moderate enough that newbies acculturate before their population dominates, then you don't enter that condition.
Lumifer
I think we're in broad agreement :-D but let me stress the Eternal part...
John_Maxwell
I think the value of attracting users to LW has a power law distribution. Both Luke Muehlhauser and Nate Soares were "discovered" through their writing for LW, and both went on to be MIRI's executive director. I think the core target audience for LW should be extremely intelligent people who, for one reason or another, haven't managed to find an IRL cluster of other extremely intelligent people to be friends with. (Obviously I welcome math grad students at Caltech who want to contribute, but I think they'll be substantially less motivated to find a community than someone of equivalent intellectual caliber who decided school was bullshit and dropped out at 17.) Given that, I think LW's marketing should be optimized for very smart people with finely tuned bullshit detectors who may be relatively uneducated but are probably budding autodidacts. (That's part of the reason I added prominent links to the best textbooks thread/Anki when I wrote the about page.)
Lumifer
That sounds like an excellent approach. However it will make unhappy a bunch of people here who want to carry the torch of rationality into the masses and start raising waterlines :-/

Is anyone analysing the web logs of LessWrong to study how people are accessing the site?

I agree with what you said about how we introduce ourselves.

As for your possible improvement, I don't know if everyone here cares about the latter two points. But it seems that a lot do, and I'm not sure whether the number of people is over the "threshold" where it makes sense to generalize.

Anyway, I've always felt pretty strongly that at its core, the goals of rationality are really simple and straightforward, and that it's something everyone should be interested in. At its core, rationality is just about:

1) Getting what you want.

2) Being righ...

Lumifer
If someone did that, I would get very, very sceptical. Credibility is the problem here -- self-help sites are a dime a dozen.
Adam Zerner
True :/ My first thoughts on how it could be mitigated:

* If you're referred to the site by someone you trust.
* Signaling of quality. Ex. mentions of decision theory may signal quality to technically minded people. But there are other things that signal quality to "normal people". I'd have to think harder about it to come up with good examples.
* Design and activity. I'm into startups, and after failing at my first one, I've realized how important these things are. Design is important in and of itself (as far as user experience goes), but it's also important because it signals quality. People often won't give things with poor design a chance, because they notice a correlation between design and quality. A similar point could be made about activity. Seeing lots of articles and comments serves as social proof of quality.
* Proving quality. The "chicken-egg" problem of trustworthiness is encountered everywhere. But quality does seem to win (sort of). I sense that enough people do give stuff a shot such that quality does win out to some extent.

If my thinking is on track here, then I think it'd follow that quick wins are important. It's important to have some "start here" articles that new readers could read and think, "Woah, this is really cool and useful! This definitely isn't one of those sketchy self-help websites. I'm not sure what the concentration of quality is on this site, but after reading these first two articles I think it's worth reading a few more to find out."

Honestly, my impression is that the obstacles of Lost Purposes and not wanting to identify as a rationalist are notably bigger than the obstacle of credibility.
ChristianKl
In general I don't think it makes sense to tell people about LW. It makes much more sense to link someone to an article on LW that's likely worth reading for that person. If they like what they find, maybe they read more.
[anonymous]

Hmm. If you want people you know to get into LessWrong, don't undermine the value of your own enthusiasm. When I told my family about this site, I was all excited, like "I found this amazing new site, and I learned X, and they talk all about Y, which is so relevant to my life, and don't you hate when people do Z? Well they talk about that too!" Now my dad and little sister are hooked on the rationality ebook, even though they both generally read nothing more than fiction/fantasy. My little sister is fascinated by it despite still being a strong C...

NancyLebovitz
I was thinking about summaries-- that would help a lot. It might also be possible to choose biases with more intuitive names, like the sunk cost fallacy.
[anonymous]
Yeah... scope insensitivity is probably the best from the list, since it sounds intuitive but isn't commonly known like the sunk cost fallacy. Then again, completely new ones would make people more curious, and as long as there were summaries, they probably wouldn't be turned off by the unfamiliar names.