How do people feel LW 2.0 is going? I'm impressed with the number and quality of posts that are being made, especially relative to the baseline of what LW 1.0 was like right before the relaunch. But I miss the lively discussions in the comments from the "Golden Age" of LW 1.0. Consider the Craft and Community sequence, written right when Eliezer transitioned from writing for Overcoming Bias to writing for Less Wrong. Here are six posts from that sequence which were especially memorable. On average, they have 179 comments. Looking at the "Curated Content" on the homepage right now, the 3 curated posts average only 19 comments, even though they've all been up for at least 2 months.

A big audience lets your posts have a greater impact. (Have any posts from LW 2.0 generated new conceptual handles for the community like "the sanity waterline"? If not, maybe it's because they just aren't reaching a big enough audience.) Sometimes your audience generates interesting new ideas you hadn't thought of. And there's a virtuous cycle: people will write more comments if they have a justified expectation of comment readership and replies.

I think the biggest risk to LW 2.0 at this point might be that authors who invest in making posts find that they don't seem to be getting significant readership, making a significant impact, or generating useful feedback. There are a lot of people making posts right now, but there's a risk those people will drift away. If we can get the virtuous cycle going, that risk is lessened.

There's always the fear of Eternal September, but I think the rest of the internet has gotten more addictive since LW 1.0, so just being a website where longform essays are posted already gets you an audience that's selected for having a long attention span. And of course, countering Eternal September is a huge part of the motivation for the new voting system.

Some promotional ideas to consider:

  • Make use of the Less Wrong Facebook and Twitter feeds to highlight new content. (Is there still an RSS feed going? If anyone is still using RSS, it's probably the rationalist crowd.) The Twitter account hasn't posted since 2016; it could probably benefit from a relaunch tweet.
  • Send a one-time email to old LW 1.0 users with high karma who haven't logged in since the relaunch, announcing LW 2.0's launch and reminding them that their high karma puts them at the top of the heap due to eigenkarma.
  • Get Scott Aaronson to mention the fact that LW 2.0 is a real-life instance of eigendemocracy in one of his "announcements" posts. Credit goes to him for inspiring the new voting system.
  • Back in the old days of LW, Louie Helm made use of a program where Google offered free Adwords credits to nonprofits in order to drive traffic to LW. I don't remember which keywords he used, but I can ask if you like.
  • Advertise on the sidebar of Scott Alexander's blog. I'm not sure whether he charges money to nonprofits which advertise or not.

Mods, let us know if you're looking for more promotional ideas and we can spend more time brainstorming.

58 comments

I'm not sure LW understands why it succeeded in the early days. There's a lot of focus on important ideas like AI risk, but they don't seem to be the main reason. Consider this: two of the most popular LW authors, Eliezer and Scott, also write fiction and blog posts on random topics, and always get tons of comments no matter what they write about. It's almost as if people are attracted to writing style, and then rationalize that as newfound interest in the topic! To be fair, our community has made some attempts at developing writing style (/r/rational), but it seems half-hearted and mostly driven by attention. Hardly anyone seems interested in writing for its own sake.

As it happens, your writing style is pretty enjoyable: I find myself reading your posts even if I don't care much about the topic - in this case, popularity hacking. So maybe you don't have to think so much about popularity hacking? Just keep writing well, and the problem will solve itself.

(I could try to guess what makes a writing style enjoyable, but since my own style is flawed in many ways, that wouldn't be right. I've been meaning to work on my writing since forever, but there always seems to be something more important...)

As it happens, your writing style is pretty enjoyable

Thanks, I'm very flattered!

Have any posts from LW 2.0 generated new conceptual handles for the community like "the sanity waterline"?

As a datapoint, here's a few I've used a bunch of times in real life due to discussing them on LW (2.0). I've used most of these more than 20 times, and a few of them more like 2000 times.

Embedded Agency, Demon Threads, Slack, Combat vs Nurture Culture, Rationality Realism, Local Validity, Common Knowledge, Free Energy, Out to Get You, Fire Alarm, Robustness to Scale, Unrolling Social Metacognition, The Steering Problem, Goodhart's Law.

I was going to say 2000 times sounded like way too much, but making the guesstimates, that means on average using "common knowledge" once every other day since it was published, and "out to get you" once every third day, and that does seem consistent with my experience hanging out with you (though of course with a fat tail of the distribution, using some concepts like 10 times in a single long hangout).

Actually in my head I was more counting the tail conversations (e.g. where I use a term 20-30 times), but you're right that the regular conversations will count for most of the area under the curve. Slack, Goodharting, and Common Knowledge are all ones I use quite frequently.

Goodhart's law isn't a new concept, and the term "goodharting" doesn't get used in the post about Goodhart's law that you linked, so the post likely isn't responsible for it either.

I haven't seen the terms that the post actually introduces (Regressional Goodhart, Causal Goodhart, Extremal Goodhart, or Adversarial Goodhart) being used.

Yeah, I was not saying the posts invented the terms; I was saying they were responsible for my usage of them. I remember at the time reading the post Goodhart Taxonomy and not thinking it was very useful, but then I found myself referring back to it a great deal in my conversations. I also ended up writing a post based on the four subtypes.

Added: Local Validity and Free Energy are two other examples that obviously weren't coined here, but the discussion here caused me to use quite a lot.

rk:

Not Ben, but I have used X Goodhart more than 20 times (summing over all the Xs)

If I put on my startup hat, I hear this proposal as "Have you considered scaling your product by 10x?" A startup is essentially a product that can (and does) scale by multiple orders of magnitude to be useful to massive numbers of people, producing significant value for the consumer population, and if you share attributes of a startup, it's a good question to ask yourself.

That said, many startups scale before their product is ready. I have had people boast to me about how much funding they've gotten for their startup, without giving me a story for how they think they can actually turn that funding into people using their product. Remember that time Medium fired 1/3rd of its staff. There are many stories of startups getting massive amounts of funding and then crashing. So you don't want to scale prematurely.

To pick something very concrete, one question you could ask is "If I told you that LW had gotten 10x comments this quarter, do you update that we'd made 10x or even 3x progress on the art of human rationality and/or AI alignment (relative to the amount of progress we made on LW the quarter before)?" I think that isn't implausible, but I think that it's not obvious, and I think there are other things to focus on. To give a very concrete example that's closer to work we've done lately, if you heard that "LessWrong had gotten 10x answers to questions of >50 karma this quarter", I think I'd be marginally more confident that core intellectual progress had been made, but still that metric is obviously very goodhart-able.

A second and related reason to be skeptical of focusing on moving comments from 19 to 179 at the current stage (especially if I put on my 'community manager hat'), is a worry about wasting people's time. In general, LessWrong is a website where we don't want many core members of the community to be using it 10 hours per day. Becoming addictive and causing all researchers to be on it all day could easily be a net negative contribution to the world. While none of your recommendations were about addictiveness, there are related ways of increasing the number of comments, such as showing a user's karma score on every page, like LW 1.0 did.

Anyway, those are some arguments against. I overall feel like we're in the 'figuring out the initial idea and product' stage rather than the 'execute' stage, and that's where my thoughts are presently spent. I'm interested in more things like creating basic intro texts in AI alignment, creating new ways of surfacing what ideas are needed on the site, and focusing generally on the end of the pipeline of intellectual progress right now, before focusing on getting more people spending their time on the site. I do think I'd quickly change my mind if net engagement of the site were decreasing, but my current sense is that it is slowly increasing.

Thoughts?

I agree with this worry, though I have a vague feeling that LW is capturing and retaining less of the rationalist core than is ideal — (EDIT: for example,) I feel like I see LW posts linked/discussed on social media less than is ideal. Not for the purpose of bringing in new readers, but just for the purpose of serving as a common-knowledge hub for rationalists. That's just a feeling, though, and might reflect the bubbles I'm in. E.g., maybe LW is more of a thing on various Discords, since I don't use Discord much.

If we're getting fewer comments than we'd expect and desire given the number of posts or page visits, then that might also suggest that something's wrong with the incentives for commenting.

An opt-in way to give non-anonymous upvotes (either publicly visible, or visible to the upvoted poster, or both) feels to me like it would help with issues in this space, since it's a very low-effort way to give much more contentful/meaningful feedback than an anonymous upvote ("ah, Wei Dai liked my post" is way more information and reinforcement than "ah, my post has 4 more karma", while being a lot less effort than Wei Dai writing more comments). Separating out "I like this / I want to see more stuff like this" votes from "I agree with this" votes would also help (I think "I agree with this" votes should only publicly display when they're non-anonymous). I feel like this would make posting more rewarding, and also just make the site as a whole feel more hedonic and less impersonal.

You made me think of a feature I think could be great: when someone gives a downvote to a post, the site automatically prompts them to comment. Another idea is that you'd only be able to make a strong downvote if you comment, but I'm not too sure about that.

I like this idea. I can't find it now, but I remember a recent comment suggesting that any post/comment which ends up with negative karma should have someone comment on why they downvoted it, so that the feedback is practical.

To encourage commenters (and posters) without cluttering up the comments thread:

Non-substantive comments, collapsed by default, where voters can leave a couple of words as to why they voted the way they did.

Yeah, I do think having a simple non-anonymous upvoting option is promising. I wonder whether we can make it a simple extension of the strong-upvoting system (maybe have some kind of additional button show up when you strong-upvoted something, that allows you to add your name to it, though I can imagine that getting too complicated UI-wise).

Idea: if someone hovers over the karma number, a tooltip shows number of voters plus who non-anonymously upvoted; and if you click the karma number, it gives you an option to make your vote non-anonymous (which results in a private notification, plus a public notification if it's an upvote).

This seems better to me than giving the "<" or ">" more functionality, since those are already pretty interactive and complex; whereas the number itself isn't really doing much.

It seems to me that there are straightforward interventions:

(1) Provide share buttons. Most websites use share buttons to encourage readers to share content and there's no reason why it wouldn't work for us.

Share buttons also provide a way to recognize which users share articles.

To build on your existing example, having the information "25 people came to this article because Wei Dai shared it on Facebook" would be motivating. It would also provide a way for people to follow the backlink to Facebook and see the comments that happened there.

For spam-fighting reasons, you might set a minimum karma threshold for users' shares to be credited in this fashion.

(2) Automatically push newly curated posts to Twitter and a Facebook page.

gjm:

I personally would prefer everything to do with Facebook, Twitter, etc., to stay as far away from LW as possible. Also, adding social-media sharing buttons seems to be asking to have more of the discussion take place away from LW, which is the exact opposite of what I thought was being discussed here.

If I write an article I care about it getting read as widely as possible. I care about engagement happening.

If an article I write on LessWrong gets shared on Facebook or Twitter I would enjoy knowing that it's shared.

I give less weight to linkposts, because the discussion/comments are split in an annoying way. It would be worse with facebook.

gjm:

Sure: the author of a particular article may just want it to be read and shared as widely as possible. But what's locally best for them is not necessarily the same as what's globally best for the LW community.

Put yourself in a different role: you're reading something of the sort that might be on LW. Would you prefer to read and discuss it here or on Facebook? For me, the answer is "definitely here". If your answer is generally "Facebook" then it seems to me that you want your writings discussed on Facebook, you want to discuss things on Facebook, and what would suit you best is for Less Wrong to go away and for people to just post things on Facebook. Which is certainly a preference you're entitled to have, but I don't think Less Wrong should be optimizing for people who feel that way.

I do prefer to read and discuss on LW over discussing on Facebook. As a reader of a post on LW I don't think it harms me much when a post gets linked on Facebook.

I don't think this will result on average in fewer comments on LW.com. If people click on the link to LW within Facebook they can both comment on LW and on Facebook. Many of the people who see the post on Facebook would have never read the post otherwise or engaged with it.

External links also increase PageRank, which means that posts show up more prominently on Google and additional people will discover LessWrong.

As far as optimization goes, I would prefer LW to optimize to motivate people to write great posts over organizing it in a way that optimizes the reading experience.

I do like the idea of karma-limited share buttons.

I think most of the incentives for commenting are due to network effects, i.e. not everyone is here, or I don't have evidence that they're here, so I still feel like more people will see discussion on FB.

I think social proof is going to turn out to be pretty important. I'm slightly wary of it because it pushes against the "LW is a place you can talk about ideas, as much as possible without having social status play into it", but like it or not "High Profile User liked my comment", or "My Friend liked my comment" is way more motivating.

I'm currently thinking about how to balance those concerns.

I’m slightly wary of it because it pushes against the “LW is a place you can talk about ideas, as much as possible without having social status play into it”, but like it or not “High Profile User liked my comment”, or “My Friend liked my comment” is way more motivating.

As a contrary data point, I prefer LW to Facebook because the identified voting makes the social part of my brain nervous. I'm much more hesitant both to "like" things (for fear of signaling the wrong thing) and also to post/comment (if a post/comment lacks identified likes, that seems to hurt more than lack of anonymous upvotes, while the presence of identified likes don't seem to be much more rewarding than anonymous upvotes for me).

ETA: If LW implemented optional identified voting (which I'll call "like"), I'd probably use it very sparingly, because 1) I'm afraid I might "like" something that turns out to be wrong and 2) I feel like if I did use it regularly, then when I don't "like" something that people can reasonably predict me to endorse they would wonder why I didn't "like" it. So I'll probably end up "liking" something only when it seems really important to put my name behind something, but at that point I might as well just write a comment.

The above updates me toward being more uncertain about whether it's a good idea to add an 'optional non-anonymized upvoting' feature. I'll note that separating out 'I agree with this' from 'I want to see more comments like this' is potentially extra valuable (maybe even necessary) for a healthy non-anonymized upvoting system, because it's more important to distinguish those things if your name's on the line. Also, non-anonymized 'I factually disagree with this' is a lot more useful than non-anonymized 'I want to see fewer comments/posts like this'.

Can you expand on what exactly you mean with "without having social status come into play"?

Social status is a prime way human beings are motivated to do things. The prospect that I might get social status by writing a great article that people find valuable sets good incentives for me to provide quality content.

I meant in the other direction, where people judge ideas as better because higher status people said them.

This seems like the thing that happens by default and we can't really stop it, but I'm wary of UX paradigms that might reinforce it even harder.

Thanks for the reply! I see what you're saying, but here are some considerations on the other side.

Part of what I was trying to point out here is that 179 comments would not be "extraordinary" growth, it would be an "ordinary" return to what used to be the status quo. If you want to talk about startups, Paul Graham says 5-7% a week is a good growth rate during Y Combinator. 5% weekly growth corresponds to 12x annual growth, and I don't get the sense LW has grown 12x in the past year. Maybe 12x/year is more explosive than ideal, but I think there's room for more growth even if it's not explosive. IMO, growth is good partially because it helps you discover product-market fit. You don't want to overfit to your initial users, or, in the case of an online community, over-adapt to the needs of a small initial userbase. And you don't want to be one of those people who never ships. Some entrepreneurs say if you're not embarrassed by your initial product launch, you waited too long.
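A quick sanity check of that compounding arithmetic, for anyone who wants to verify the 12x figure:

```python
# 5% weekly growth compounded over 52 weeks:
print(1.05 ** 52)  # ~12.64, i.e. roughly 12x annual growth
```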

that metric is obviously very goodhart-able

One could easily goodhart the metric by leaving lots of useless one-line comments, but that's a little beside the point. The question for me is whether additional audience members are useful on the current margin. I think the answer is yes, if they're high-quality. The only promo method I suggested which doesn't filter heavily is the Adwords thing. Honestly I brought it up mostly to point out that we used to do that and it wasn't terrible, so it's a data point about how far it's safe to go.

A second and related reason to be skeptical of focusing on moving comments from 19 to 179 at the current stage (especially if I put on my 'community manager hat'), is a worry about wasting people's time. In general, LessWrong is a website where we don't want many core members of the community to be using it 10 hours per day. Becoming addictive and causing all researchers to be on it all day could easily be a net negative contribution to the world. While none of your recommendations were about addictiveness, there are related ways of increasing the number of comments, such as showing a user's karma score on every page, like LW 1.0 did.

What if we could make AI alignment research addictive? If you can make work feel like play, that's a huge win, right?

See also Giving Your All. You could argue that I should either be spending 0% of my time on LW or 100% of my time on LW. I don't think the argument fully works, because time spent on LW is probably a complementary good with time spent reading textbooks and so on, but it doesn't seem totally unreasonable for me to see the number of upvotes I get as a proxy for the amount of progress I'm making.

I want LW to be more addictive on the current margin. I want to feel motivated to read someone's post about AI alignment and write some clever comment on it that will get me karma. But my System 1 doesn't have a sufficient expectation of upvotes & replies for me to experience a lot of intrinsic motivation to do this.

I'd suggest thinking in terms of focus destruction rather than addictiveness. Ideally, I find LW enjoyable to use without it hurting my ability to focus.

I think instead of restricting the audience, a better idea is making discussion dynamics a little less time-driven.

  • If I leave a comment on LW in the morning, and I'm deep in some equations during the afternoon, I don't want my brain nagging me to go check if I need to defend my claims on LW while the discussion is still on the frontpage.

  • Spreading discussions out over time also serves as spaced repetition to reinforce concepts.

  • I think I heard about research which found that brainstorming 5 minutes on 5 different days, instead of 25 minutes on a single day, is a better way to generate divergent creative insights. This makes sense to me because the effect of being anchored on ideas you've already had is lessened.

  • See also the CNN effect.

Re: intro texts, I'd argue having Rohin's value learning sequence go by without much of an audience to read & comment on it was a big missed opportunity. Paul Christiano's ideas seem important, and it could've been really valuable to have lively discussions of those ideas to see if we could make progress on them, or at least share our objections as they were rerun here on LW.

Ultimately, it's the idea that matters, not whether it comes in the form of a blog post, journal article, or comment. You mods have talked about the value of people throwing ideas around even when they're not 100% sure about them. I think comments are a really good format for that. [Say, random idea: what if we had a "you should turn this into a post" button for comments?]

Just wanted to say I agree regarding the problems with conversation being "time driven" (I've previously suggested a similar problem with Q&A)

One idea that occurs to me is to personalise Recent Discussion on the homepage. If I've read a post and even more if I've upvoted it then I'm likely to be interested in comments on that thread. If I've upvoted a comment then I'm likely to be interested in replies to that comment.

If Recent Discussion worked more like a personal recommendation section than a rough summary section then I think I'd get more out of it and probably be more motivated to post comments, knowing that people may well read them even if I'm replying to an old post.

More articles, fewer comments per article -- perhaps these two are connected. ;)

In general, I agree that I would also prefer deeper debates below the articles, and more smart people to participate in them. However, I am afraid that the number of smart people on the internet is quite limited (perhaps more limited than even the most pessimistic of us would imagine), and they usually have other things to do with higher priority than commenting on LW.

Also, LW is no longer new and exciting -- the people who wanted to say something, often already said it; the people who would be attracted to LW probably already found it; the people able and willing to write high-quality content typically already have their personal blogs. Of course this does not stop the discussion here completely; it just slows it down.

There was a time before LW2 when people complained that LW wasn't getting enough posts, but no one complained that posts weren't getting enough comments. I checked my own post history and there's a noticeable decline in average number of comments per post between now and 4 years ago (there's a gap in my posting history, so I'm not sure exactly when the decline in comments happened). Maybe someone could plot the average number of comments for all posts over time and check if it correlates with any changes that have been made to LW?
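To make that concrete, here's a minimal sketch of the analysis I have in mind, assuming a hypothetical export posts.csv with posted_at and comment_count columns (both names are made up):

```python
# Minimal sketch: average comments per post, bucketed by month.
# Assumes a hypothetical export "posts.csv" with columns
# "posted_at" (ISO date) and "comment_count"; both names are made up.
import pandas as pd
import matplotlib.pyplot as plt

posts = pd.read_csv("posts.csv", parse_dates=["posted_at"])
monthly = posts.set_index("posted_at")["comment_count"].resample("M").mean()
monthly.plot(title="Average comments per post by month")
plt.ylabel("Mean comments per post")
plt.show()
```

If the decline lines up in time with a particular site change, that would be some evidence about the cause.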

My guess is maybe it has to do with the old 10x karma multiplier for posts, which paradoxically discouraged people from making posts and redirected their energy into commenting. If that's the case maybe we can institute a smaller karma multiplier to better balance posts vs comments.

Maybe it has something to do with this question you asked? Maybe letting people leave anonymous comments if they're approved by the post author or something like that could help?

I’ve been reading basically every post for the past few months. I don’t usually leave comments though, unless it’s to support and thank the author. (Thanks for writing this! Funny enough I also noticed recently how few comments there are, and it seemed worth bringing up.) I guess I feel like I just don’t have much to add to most posts.

Regarding RSS: yes, it is still running. In fact, that’s how I found this post!

Have any posts from LW 2.0 generated new conceptual handles for the community like "the sanity waterline"? If not, maybe it's because they just aren't reaching a big enough audience.

Doesn't this get the causality backwards? I'm confused about the model that would generate this hypothesis.

One way I can imagine good concepts not taking root in "the community" is if not enough of the community is reading the posts. But then why would (most of) the prescriptions seem to be about advertising to the outside world?

As someone new to the community, I can testify that I probably would have had more motivation to write if I had gotten more comments and discussion, especially since, at least for me, the things I think to publish aren't necessarily very polished ideas, but things I would like to get input on.

Get Scott Aaronson to mention the fact that LW 2.0 is a real-life instance of eigendemocracy in one of his "announcements" posts. Credit goes to him for inspiring the new voting system.

Have you talked about what LW2's system actually is, in detail, anywhere?

I consider these sorts of things (collaborative filtering) to be incredibly important; it's become obvious that, say, reddit's "one account, one vote, in any context" system is inadequate.

It seems to me that eigentrust, or something like it, probably models rank aggregation correctly. That is, I'm getting a sense that you could probably sort content very efficiently by asking users for comparison judgements between candidates, building a graph where each comparison is an edge, then running eigentrust to figure out what's at the top.
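To make that concrete, here's a toy sketch of the kind of rank aggregation I mean: build a win/loss matrix from the comparison judgements, then run a PageRank/eigentrust-style power iteration to find the dominant eigenvector. This is my own minimal version, not LW's actual system, and all the names in it are made up:

```python
import numpy as np

def rank_by_comparisons(n_items, comparisons, n_iter=100):
    """Score items from pairwise judgements via a PageRank-style walk.

    comparisons is a list of (winner, loser) index pairs. The random
    walk steps from each item toward the items that beat it, so the
    stationary distribution concentrates on consistent winners.
    """
    wins = np.zeros((n_items, n_items))
    for winner, loser in comparisons:
        wins[winner, loser] += 1.0
    # Column j holds the outgoing probabilities from item j: move to
    # item i in proportion to how often i beat j. The +0.01 smoothing
    # keeps the chain ergodic (like PageRank's teleportation term).
    trans = wins + 0.01
    trans /= trans.sum(axis=0, keepdims=True)
    scores = np.ones(n_items) / n_items
    for _ in range(n_iter):
        scores = trans @ scores  # power iteration toward the top eigenvector
    return scores

# Toy usage: item 0 beats 1 twice and 2 once; item 1 beats 2 once.
print(rank_by_comparisons(3, [(0, 1), (0, 2), (1, 2), (0, 1)]))  # item 0 scores highest
```

The smoothing term also means items with no comparisons yet still get a (low) score rather than breaking the normalization.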

So I've been thinking about eigentrust. Gradually working my way through this Eigentrust++ paper (though I have no idea whether this is a good place to start digging into the literature, and I probably won't make it very far).

My current position is that sometime around now, or maybe in the next two or three months, is the time to experiment with getting more core engagement and generally reaching a broader audience. Q&A in particular is something that I can imagine productively scaling to a larger audience, in a way that actually causes the contributions from the larger audience to result in real intellectual progress.

Ben summarized a lot of my general thoughts on this, but overall I think I agree with a good chunk of your sentiment. I think all four of the things you list seem like good things to try, though I am kind of holding off on doing them until I feel more comfortable with the scalability of our intellectual progress machine.

Q&A in particular is something that I can imagine productively scaling to a larger audience, in a way that actually causes the contributions from the larger audience to result in real intellectual progress.

Do you mean scaling it as is, or in the future?

I think there's a lot of potential to innovate on the Q&A system, and I think it'd be valuable to make progress on that before trying to scale. In particular, I'd like to see some method of tracking (or taking advantage of) the structure behind questions -- something to do with how they're related to each other.

Maybe this is as simple as marking two questions as "related" (as I think you and I have discussed offline). Maybe you'd want more fine-grained relationships.

It'd also be cool to have some way of quickly figuring out what the major open questions are in some area (e.g. IDA, or value learning), or maybe what specific people consider to be important open questions.

I think it's already a bit more scalable than what we had before, but I was mostly referring to a future version of the Q&A system, after we've added some more related-question functionality, made it so that people notice questions for longer than they stick around on the frontpage, and added some way to keep track of major open questions.

I would think related questions is something to put off until you have lots of question data on which to tune your relatedness metric.

FWIW, I was thinking of the related relationship as a human-defined one. That is, the author (or someone else?) manually links another question as related.

I think having it be automated will help posts avoid getting forgotten in the sands of time.

Credit goes to him for inspiring the new voting system.

Are you sure? I don't think I have read the post you linked but I do remember having discussed changing the voting system in such a direction on the old LessWrong.

This post cites Scott Aaronson, but maybe there were other discussions too.

I think people are just writing about less accessible things on average. I wouldn't want to have more comments just by not talking about abstruse topics, at the moment. I love you all, but I also love AI safety :P

I briefly looked at the comments within 1 year on some old LW posts, and the numbers from then seem to match too - only ~25 comments on the more rarefied meta-ethics posts, well below average.

[anonymous]:

Hi all,

I run growth for a decently large startup. If you wanted to increase the audience for LW, there are numerous things you could do differently.

For better or for worse, individual blogs/content websites are fading on the web, with a few notable exceptions that do growth really well. If you want to increase your audience, engaging with the large networks is generally a good first step.

Speaking as an outsider, the site seems geared to explicitly not wanting a large audience. That's a completely reasonable decision. You should be explicit about what the goals are for this platform.

You're right that this site is geared to not wanting a large audience in absolute terms; this post is implicitly about having a relatively larger share of the small pool of people who are intellectually engaged with LW-relevant topics.

So this post seemed to get a pretty healthy number of comments. I think others have suggested the idea of discussion prompts or discussion questions to engage commenters at the end of the post; maybe just having this post say "I want more comments" played a similar role. I was implicitly assuming that readership and comment numbers track each other closely, but maybe that's not true. It might be interesting to analyze the LW article database and see how ratios between viewership, comments, and votes have changed over time.

See Writing that Provokes Comments. I think this post was something more people felt qualified to have opinions on, dealt with (an aspect of) social reality/coordination (which people are primed to care about), and left concrete things to do in the comments section. (either in the form of disagreement or proposing solutions)

I think it was mostly due to the topic, which was important and something that lots of people felt a desire to follow and contribute to. (Though I agree discussion prompts are good.)

Elo:

I was coining more terms and bringing them to LessWrong. I still do that, but it depends on my available time to write, which is variable.

Well, what's the purpose of a bigger audience?

What is the purpose of LW in the first place?

Folks! What you have here is the explanation of all man-made issues and thus a

MANUAL TO SAVING HUMANITY!

THIS is what makes LW (or at least its content) not only WORTHY but OBLIGED to get a bigger audience!!!

I am passionate for and working on this - who is with me?

Applause lights? I don't think that's a good idea, in most cases.

And it's far from obvious that a greater audience is the thing to optimize for, in order to maximize LW's "saving humanity" effect. Goodhart's Law, Eternal September, etc.

Sure. LW will remain a rather theoretical/academic community.

What I am looking to find or to create is a new platform outside LW with more practical use and educational character.

In case you're not aware, you should probably avoid applause lights like that even in the wider world - applause lights for unusual beliefs just make you look like a kook/quack. (Which is instrumentally harmful, if you don't want people to immediately dismiss you.)

I would hardly ever use such tactics.

I rather wrote that because the LW community seems

not to be aware of the possible impact rationality could have on our world

and/or

not ready to share/apply it, i.e. DO SOMETHING with it.

In fact, I see a PATTERN in LW's behavior (sic!) towards my contributions.

[The] LW community seems not to be aware of the possible impact rationality could have on our world.

I'm not sure how you've gotten that impression. I have the exact opposite impression - the LW community is highly aware of the importance and impact of rationality. That's kind of our thing. Anyway, in the counterfactual case where LW didn't think rationality could change the world, throwing applause lights at it would not change its mind. (Except to the extent that such a LW would probably be less rational and therefore more susceptible to applause lights.)

not ready to share/apply it, i.e. DO SOMETHING with it.

What do you have in mind?

I think LW is already doing many things.

1. The Machine Intelligence Research Institute. If I recall correctly, Yudkowsky created Less Wrong because he noticed people generally weren't rational enough to think well about AGI. It seems to have paid off. I don't know how many people working at MIRI found it through LW, though.

2. The Center for Applied Rationality. Its purpose is to spread rationality. I think this is what you were arguing we should do. We're doing it.

3. Effective altruism. LW and EA are two distinct but highly overlapping communities. This is applied altruistic rationality.

I'm not saying that there's no room for more projects, but rather that I don't think your criticisms of LW are accurate.

In fact, I see a PATTERN in LW's behavior (sic!) towards my contributions.

What pattern is that? Is your criticism just that we react similarly on different occasions to similar comments? I think that's a human universal.

Did you realize THIS POST is actually bearing BIGGER AUDIENCE in its title?

I was referring to this - I wanted to know FOR WHAT PURPOSE the threadstarter even considered a bigger audience.

You twisted my ideas.

"LW explains everything and will save the world. Therefore we are obligated to expand it as much as possible." (my understanding of the great-grandparent comment)

Are you saying that was an implied question? It seemed more like a statement to me.

Anyway, I agree that many people here think that we should expand. I'm not criticizing you for saying that we should expand. I'm criticizing you for just saying that we should expand, when that's already been said!

The original post said "I think we should try expanding. Here's some ideas on how to expand." Your comment said "I think we ought to expand because LW can save the world." It didn't fit well in context - it wasn't really continuing the conversation. The only thing it added was "LW can save the world," with no explanation or justification. I don't think that's useful to say.

Maybe if many people were saying "why should Less Wrong even get bigger?", then you could have responded to them with this. That would have made more sense.