There is a TLDR at the bottom.

Lots of people really value the lesswrong community but aren't sure how to contribute. The rationalist community can be intimidating: we have a lot of very smart people and the standards can be high. Nonetheless there are lots of concrete ways a normal rationalist can help improve the community. I will focus on two areas: engaging with content, and a list of shovel-ready projects you can get involved in. I will also briefly mention some more speculative ideas at the end of the post.

1) Engaging with Content:

I have spoken to many people I consider great content creators (ex: Zvi, Putanumonit, tristanm). It’s very common for them to wish their articles got more comments and engagement. The easiest thing you can do is make a lesswrong account and use the upvote button. Seeing upvotes really does motivate good writers. This only works for lesswrong/reddit but it makes a difference. I can think of several lw articles with fewer upvotes than the number of people who have personally told me the article was great (ex: the norm-one-principle post by tristanm [1]).

Good comments tend to be even more appreciated than upvotes, and comments can be left on blog posts as well. If a post has few comments, then almost any decent-quality comment is likely to be appreciated by the author. If you have a question or concern, just ask. Many great authors read all their comments, at least those left in the first few days, and often respond to them. Lots of readers comment very rarely, if at all: 95.1% of people who took the SSC survey comment less than once a month and 73.6% never comment at all [2]. The survey also showed that respondents were a highly engaged group who had read lots of posts. If a blog has very few comments, I think you should update heavily towards “it’s a good idea for me to post my comment”.

However, what is most lacking in the rational-sphere is positive engagement with non-controversial content you enjoyed. Recently the SSC sub-reddit found that about 95% of recent content was either in the culture-war thread or contained in a few threads the community considered low quality (based on vote counts) [3]. You can see a similar effect on lesswrong by considering the Dragon Army post [4]. Most good articles posted recently to lesswrong get around 10 comments or fewer. The Dragon Army post got over 550. I am explicitly not asking people to avoid posting in controversial threads; doing so would be asking a lot of people. But “engagement” is an important reward mechanism for content creators, and I do think we should reward more of the writers we find valuable by responding to them with positive engagement.

It’s often difficult to write a comment on a post that you agree with that isn't just “+1 nice post.” Here are some strategies I have found useful:

- If the post is somewhat theoretical try to apply it in a concrete case. Talk about what difficulties you run into and what seems to work well.

- Talk about how the ideas in the post have helped you personally. For example, you can say that you never understood concept X until you read the post.

- Connect the post to other articles or essays. It’s usually not optimal to just post a link. Either summarize the other article or include a relevant, possibly extended, quote. Reading articles takes time.

- Speculate a little on how the ideas in the article could be extended further.

It’s not just article writers who enjoy people engaging with their work. People who write comments also appreciate getting good responses. Posting high quality comments, including responses to other comments, encourages other people to engage more. You can personally help get a virtuous cycle going. As a side note I am unsure about the relative values of posting a comment directly on a blog vs reposting the blogpost to lesswrong and commenting there. Currently lesswrong is not that inundated with reposts but it could get more crowded in the future. In addition, I think article authors are less likely to read lesswrong comments about their post, but I am not confident in the effect size.

2) Shovel Ready Projects:

-- Set up an online Lesswrong gaming group/server, ideally for a popular game. I have talked to people and Overwatch seems to have a lot of interest. People seemed to think it would really be a blast to play Overwatch with four other rationalists. Another popular idea is Dungeons and Dragons. I am not a gaming expert and lots of games could probably work but I wanted to share the feedback I got. Notably there is already a factorio server [5].

-- Help aggregate a "best of rationalist_tumblr" collection of effort posts. Rat_Tumblr is very big and hard to follow, and effort posts are mixed in with lots of random things. One could also include the best responses. There is no need to do this on a daily basis; you could just have a blog that only reblogs high-quality effort posts. I would personally follow this blog and would be willing to cooperate in whatever ways I could. I also think this blog would bring some "equality" to rat_tumblr. The structure of tumblr means that it’s very hard to get readers unless a popular blog interacts with you. People report getting a "year’s worth of activity in a day" when someone like Eliezer or Ozy signal-boosts them. An aggregator would be a useful way for less well known blogs to get attention.

-- Help the lesswrong wiki. Currently a decent fraction of lw-wiki posts are fairly out of date. In general the wiki could be doing some exciting things, such as: a distillation of Lesswrong, fully indexing the diaspora, a list of communities, spreading rationalist ideas, rationalist research. There is currently a project to modernize the wiki [6]. Even if you don't get involved in the more ambitious parts of the wiki, you could re-write an article. Re-writing an article doesn't require much commitment and would provide a concrete benefit to the community. The wiki is prominently linked, and the community would get a lot of good PR from a polished wiki.

-- Get involved with effective altruism. The Center for Effective Altruism recently posted a very high quality involvement guide [7]. It’s a huge list of concrete actions you can take to get involved. Every action has a brief description and a link to an article, and each article rates the action on time commitment, duration, familiarity and occupation. Very well put together.

-- Get more involved in your local irl rationalist group. Many group leaders (ex: Vaniver) have suggested that it can be very hard to get members to lead things. If you are interested in leadership and have a decent reputation, your local community might need your help.

I would be very interested in comments suggesting other projects/activities rationalists can get involved with.

3) Conclusion 

As a brief aside I want to mention that I considered writing about outreach. But I don't have tons of experience at outreach and I couldn't really process the data on effective outreach. The subject seems quite complicated. Perhaps someone else has already worked through the evidence. I will however recommend this old article by Paul Christiano (now at OpenAI) [8]. Notably, the camp discussed in that post did eventually come into being. It’s not a comprehensive article but it has some good ideas. This guide to “How to Run a Successful Less Wrong Meetup” [9] is extremely polished and has some interesting material related to outreach and attracting new members.

It’s easy to think your actions can't make a difference in the community, but they can. A surprisingly large number of people see comments on lesswrong or r/SSC. Good comments are highly appreciated. The person you befriend and convince to stick around on lesswrong might be the next Scott Alexander. Unfortunately, a lot of the time gratitude and appreciation never get expressed; I am personally quite guilty of this. But we are all in this together, and this article only covers a small sample of the ways you can help make the community better.

If you have feedback or want any advice/help and don't want to post in public I would be super happy to get your private messages.

4) TLDR

- Write more comments on blog posts and non-controversial posts on lw and r/SSC

- Especially consider commenting on posts you agree with

- People are more likely to comment if other people are posting high quality comments.

- Projects: gaming server, aggregate tumblr effort-posts, improve the lesswrong wiki, help lead your local rationalist group

5) References: 

[1] http://lesswrong.com/r/discussion/lw/p3f/mode_collapse_and_the_norm_one_principle/

[2] http://slatestarcodex.com/2017/03/17/ssc-survey-2017-results/

[3] https://www.reddit.com/r/slatestarcodex/comments/6gc7k8/what_can_be_done_to_make_the_culture_war_thread/

[4] http://lesswrong.com/lw/p23/dragon_army_theory_charter_30min_read/

[5] factorio.cypren.net:34197 . Modpack: http://factorio.cypren.net/files/current-modpack.zip

[6] http://lesswrong.com/r/discussion/lw/p4y/the_rationalistsphere_and_the_less_wrong_wiki/

[7] https://www.effectivealtruism.org/get-involved/

[8] http://lesswrong.com/lw/4v5/effective_rationality_outreach/

[9] http://lesswrong.com/lw/crs/how_to_run_a_successful_less_wrong_meetup/


There's some gnashing of teeth in the comments about how hard it is to create positive social norms of collaboration, and any characteristic of the rationality community can be brought up to make it seem even harder. I think this is missing the overarching point of this post, which to me is: just do it.

Be the social norm you want to see in the world. Comment, compliment, help someone out.

This post is good not just for the advice it gives to others, but because it comes from a person who is mostly leading by example: deluks posts the super-useful bi-weekly rationality feed, is an organizer in our local meetup group, and is contributing to community projects.

Good social norms can seem hard in the abstract, but once you actually start living them they seem very natural.


Thanks for writing this sort of good-to-state-explicitly thing! (A thing I've been trying to do lately is to explicitly type out obvious things like thanks to make appreciation more visible for things that have been valuable for me.)

I have a few thoughts about this.

First, I believe that there is always likely to be a much higher ratio of critique than content creation going on. This is not a problem in and of itself. But as has been mentioned, and as motivated my post on the norm one principle, heavy amounts of negative feedback are likely to discourage content creation. If the incentives to produce content are outweighed by the likelihood that there will be punishments for bad contributions, then there will be very little productive activity going on, and we will be filtering out not just noise but also potentially useful stuff as well. So I am still heavily for establishing norms that regulate this kind of thing.

Secondly it seems that the very best content creators spend some time writing and making information freely available, detailing their goals and so on, and then eventually go off to pursue those goals more concretely, and the content creation on the site goes down. This is sort of what happened with the original creators of this site. This is not something to prevent, simply something we should expect to happen periodically. Ideally we would like people to still engage with each other even if primary content producers leave.

It's hard to figure out what the "consensus" is on specific ideas, or whether or not they should be pursued or discussed further, or whether people even care about them still. Currently the way content is produced is more like a stream of consciousness of the community as a whole. It goes in somewhat random directions, and it's hard to predict where people will want to go with their ideas or when engagement will suddenly stop. I would like some way of knowing what the top most important issues are and who is currently thinking about them, so I know who to talk to if I have ideas.

This is related to my earlier point about content creators leaving. We only occasionally get filtered down information about what they are working on. If I wanted to help them, I don't know who to contact about that, or what the proper protocols are about trying to become involved in those projects. I think the standard way these projects happen is a handful of people who are really interested simply start working on it, but they are essentially radio silent until they get to a point where they are either finished or feel they can't proceed further. This seems less than ideal to me.

A lot of these problems seem difficult to me, and so far my suggestions have mostly been around discourse norms. But again this is why we need more engagement. Speak up, and even if your ideas suck, I'll try to be nice and help you improve on them.

By the way, I think it's important to mention that even asking questions is actually really helpful. I can't count the number of times someone has asked me to clarify a point I made about something, and in the process of clarifying, I actually discovered some new issues or important details that I had previously missed, and it caused me to update because of that. So even if you don't think you can offer much insight, even just asking about things can be helpful, and you shouldn't feel discouraged about doing this.

I would like some way of knowing what the top most important issues are

LW was founded because Eliezer decided that making people think more rationally would help prevent AI disaster. That defines a scale of usefulness:

1) Math ideas (decision theory, game theory, logical induction, etc) and philosophy ideas (orthogonality thesis, complexity of value, torture vs dust specks, etc) that are directly related to preventing AI disaster. There's surprisingly many such ideas, because the problem is so sprawling.

2) Meta ideas that improve your thinking about (1), like avoiding rationalization, changing your mind, noticing confusion, mysterious answers, etc.

3) Practice problems for (1) and (2). This can be anything from quantum physics to religion, as long as there's a lesson that feeds back into the main goal.

At some point the community took another step toward meta, and latched onto everyday rationality which amounts to unreliable self-help with rationalist words sprinkled on top. That was mostly a failure, with the exception of some brilliant ideas like "politics is the mind-killer" that spilled over from (2) and were promptly forgotten as people slipped back into irrationality. (Another sign of slipping back is the newly positive attitude toward religion.) It seems like the only way to focus your mind on rationality is trying to solve some hard intellectual problem, like preventing AI disaster, and self-help isn't such a problem.

Another sign of slipping back is the newly positive attitude toward religion.

Is it really that bad? I haven't noticed, but perhaps I was not paying enough attention, or my unconscious was trying to protect me by filtering out the most horrible things.

In case you only meant websites other than LW, I guess the definition of "rationalist community" has grown too far, and now means more or less "anyone who seems smart and either pays lip service to reason or is a friend with the right people".

Not sure what conclusion I should draw from this. I always felt wrong about censoring dissenters, and I still kinda do, but sometimes tolerating one smart religious person or one smart politically mindkilled person is all it takes to move the Overton window towards tolerating bullshit per se (as opposed to merely tolerating that this one specific smart person also believes some bullshit).

I'd like to see LessWrong 2.0 adopting zero-tolerance policy against politics and religion. I guess I can dream.

everyday rationality which amounts to unreliable self-help with rationalist words sprinkled on top.

Equations like "productivity equals intelligence plus joy minus square root of area under the hyperbola of your procrastination" feel like self-help with rationality as attire.

But there is also some boring advice like: "pomodoros seem to help most people".

I'd like to see LessWrong 2.0 adopting zero-tolerance policy against politics and religion.

In good old fashioned tradition, we might start with tabooing religion. I don't think cousin_it has a problem with having smart religious people on LessWrong. He would likely prefer it if Ilya would still participate on LessWrong. I think his concern is rather about a project like Dragon Army copying structures from religious organizations and the LessWrong community having solstice celebrations filled with ritual.

You're right on both counts. Ilya is awesome, and rationalist versions of religious activities feel creepy to me.

I agree that there are different things one can possibly dislike about religion, and it would be better to be more precise.

For me, the annoying aspects are applying double standards of evidence (it would be wrong to blindly believe what random Joe says about the theory of relativity, but it is perfectly okay and actually desirable to blindly believe what random Joe said a few millennia ago about the beginning of the universe), speaking incoherent sentences (e.g. "god is love"), twisting one's logic and morality to fit the predetermined bottom line (a smart and powerful being who decides that billions of people need to suffer and die because someone stole a fucking apple from his garden is still somehow praised as loving and sane), etc. If LW is an attempt to increase sanity, this is among the lower hanging fruit. It's like someone participating on a website about advanced math, while insisting that 2+2=5, and people saying "well, I don't agree, but it would be rude to publicly call them wrong".

But I can't talk for cousin_it, and maybe we are concerned with completely different things.

I personally can't remember anybody saying "God is love" on LessWrong. On the other hand, I read recently of people updating in the direction that kabbalistic wisdom might not be completely bogus after reading Unsong.

Scott has this creepy mental skill where he could steelman a long string of random ones and zeroes, and some people would believe it contains the deepest secret to the universe.

I'd like to imagine that Scott is doing this to create a control group for his usual articles. By comparing how many people got convinced by his serious articles and how many people got convinced by his attempts to steelman nonsense, he can evaluate whether people agree with him because of his ideas or because of his hypnotic writing. :D

I guess the definition of "rationalist community" has grown too far, and now means more or less "anyone who seems smart and either pays lip service to reason or is a friend with the right people".

If you really think that you should add the definition here: https://wiki.lesswrong.com/wiki/Rationalist_movement

It seems like the only way to focus your mind on rationality is trying to solve some hard intellectual problem, like preventing AI disaster, and self-help isn't such a problem.

I don't think the problem is that self-help isn't a hard intellectual problem. It's rather that it's a problem that has direct application to the daily life and as such people feel the need to strong opinions about it, even when those aren't warranted. It's similar to politics in that regard.

Good point, agreed 100%.

"Secondly it seems that the very best content creators spend some time writing and making information freely available, detailing their goals and so on, and then eventually go off to pursue those goals more concretely, and the content creation on the site goes down."

That is a rather good point. It suggests that if we want to keep lesswrong a healthy community, we need to maintain a strong pipeline.

I see both sides of the "radio silence" thing. On one hand it's good to let other people know about your project in case they want to get involved. On the other hand, making a project "public" creates a lot of stuff to deal with. We both agree public criticism can be quite harsh. Organizing a group effort is difficult. Maintaining a cohesive vision becomes more difficult the more people are involved. Finally, a decent number of hyped rationalist projects seem to have had fundamental problems (Arbital comes to mind*).

My personal intuition is that in many cases it's better to take the middle ground about when to take ideas public. Put together something like a "minimum viable project" or at least a true "proof of concept". Once you have that, it's easier to keep a coherent vision and it's more likely the project is a good idea. It is suboptimal to spend lots of time organizing people and dealing with feedback before you have determined your project is a fundamentally sound idea. In this post I tried to mention projects which were already underway or that could be done on a small scale. I should note I am not very confident in my preceding intuition and would welcome your feedback.

*I am aware of the personal problems that hurt Arbital. I am also aware that there are/were plans for it to pivot to a micro-blogging platform. But the original vision of Arbital seems flawed. The Arbital leadership basically confirmed this some time ago.

Agree about the creation:critique ratio. Generativity/creativity training is the rationalist community's current bottleneck IMO.

And I think we're mostly still trapped in a false implicit dogma that creativity is an innate talent that is possessed by some rare individuals and can't be duplicated in anyone who isn't already creative. What I'm hoping to be true is that you can train people to come up with good ideas, and that more importantly, if we can harness the ability of this community to look for errors in reasoning, even bad ideas can slowly be transformed into good ones, as long as we can come up with a decent framework for making that process robust.

I stopped commenting on slatestarcodex because they disabled anonymous accounts and I didn't feel like signing up because the comments weren't that important for me anyway, plus there's enough comments down there already that there's too much noise to communicate anything.

I also don't think that Scott gets much motivation from additional comments. The value of commenting is higher on other blogs or on LW.

Hmm, true.

I'm not sure you understood my other point, though - the statistics from the SSC survey might be biased, for the reasons above.

Unfortunately Scott doesn't seem to have a question asking how many other blogs besides SSC his readers read, but I would estimate that number to be high.

This.

There is a certain optimal size range for an online community: too few people and it's stagnant, too many people and it's a cacophony of noise. Successful communities solve this problem by subdividing (see e.g. Reddit).

I think SSC is already too big and the subdivision process is starting, e.g. there is an SSC subreddit which syphons off some comments, plus there's Slack, etc.

I think you are spot on with your points about engagement. One of the thing that strikes me when reading old sequence posts is the amount of positive feedback Eliezer gets in the comments (way more than any LW posts ever get nowadays). I imagine this feedback played a role in motivating him to blog daily for such an extended run.

I wonder how many comments non-Eliezer/Scott(Yvain) threads got on the old lesswrong. Though even if other posters got a lot of comments, it's possible the presence of Eliezer/Scott drove a lot of people to check the site (and subsequently comment on many threads). But it would still be a good sign if the engagement level was higher on general non-sequence threads. Perhaps we could eventually re-start the old dynamics.

I suspect that at the beginning LW was simply too small to provide enough meta-contrarian points to people attacking Eliezer.

When it's just a personal blog of two people (Yudkowsky, Hanson), if you come and attack them, you are an asshole. When it becomes a popular website, coming and attacking the owner feels like a heroic fight against "the establishment".

Overcoming Bias was technically a group blog, with a long list of contributors on the side. But in practice Yudkowsky/Hanson wrote the vast majority of the posts. I don't know about traffic volume, but commenting volume did seem lower than LW in its heyday (though possibly more than LW nowadays). There was this commenter Caledonian that Eliezer was always complaining about.

These are just my memories, you could easily look this stuff up in the Internet Archive or whatever.

One problem here is that we are trying to optimize a thing that is broken on an extremely fundamental level.

Rationality, transhumanism, hardcore nerdery in general attracts a lot of extremely socially dysfunctional human beings. They also tend to skew towards a ridiculously biologically-male-heavy gender distribution.

Sometimes life throws unfair challenges at you; the challenge here is that ability and interest in rationality correlates negatively with being a well-rounded human.

We should search very hard for extreme out-of-the-box solutions to this problem.

One positive lead I have been given is that the anti-aging/life-extension community is a lot more gender balanced. Maybe LW should try to embrace that. It's not a solution, but that's the kind of thing I'm thinking of.

When I read "extreme out-of-the-box solutions to this problem ... the anti-aging/life-extension community", I almost expected the solution to be something like: "Given literally unlimited time, even nerds can learn social skills." :D

I am not familiar with the anti-aging/life-extension community, so I can just make a guess: it is also about professions. Talking about artificial intelligence (and physics) will attract programmers (and physicists), but talking about curing aging will also attract doctors, biologists, and chemists.

Part of the problem at the moment is that the community doesn't have a clear direction like it did when Eliezer was in charge. There was talk about starting an organisation in charge of spreading rationality before, but this never actually seems to have happened. I am optimistic about the new site that is being worked on, though. Even though content is king and I don't know how much any of the new features will help us increase the amount of content, I think that the psychological effect of having a new site will be massive.

It's unclear whether the community can or should have a "leader" again. A lot of the community no longer sufficiently agrees with Eliezer. The only person enough people would consent to follow is Scott 'The Rightful Caliph' Alexander. And Scott doesn't want the job.

I think the community can flourish despite remaining de-centralized. But it's admittedly trickier.

Regarding setting up a gaming server, that seems like something feasible to spin off the Slack or Discord chats. If you're a gamer, Discord is pretty cool and I recommend it. (I was once a gamer like you, but I decided I was wasting my life, so I waste my life writing wikis instead ;) )

…that you agree with that isn't just “+1 nice post.” Here are some strategies...

How about the strategy of writing "+1 nice post"? Maybe we're failing to see the really blatantly obvious solution here....

+1 nice post btw

+1 nice comment; funny and insightful

Things like this are nice when made rarely, and horrible when they become the norm. How to prevent that? Have a limited number of +1's per user per week? (Or per total karma?) Making them a scarce resource could make them even more valuable...

Yeah, I mean maybe just make them float to the bottom?


Thank you for putting this together! This was useful to read, and I think I'll try to have a general policy of commenting more on posts.

In regards specifically to outreach, as Paul mentions in his post on effective rationality outreach, he is one of the people behind SPARC, a Bay Area-based program for high schoolers with a math/rationality curriculum.

Also related, I'm currently helping with ESPR, which is the European offshoot of SPARC. Last year, we were EuroSPARC (and there was a short post on us here on LW). If you're interested in helping and/or talking, we're a volunteer-run effort, and you can reach us at staff@espr-camp.org.

Help the lesswrong wiki.

Is any effort to improve the wiki now in danger of disappearing once LW 2.0 comes around?

Is any effort to improve the wiki now in danger of disappearing once LW 2.0 comes around?

I don't think we plan to remove the wiki, but I also don't think we plan to improve it. (As I understand it, different code is powering the two, and we can switch over one without interfering with the other.)

If it becomes especially good, then we might send dev resources that way, but I'm currently pessimistic about that.

Worst-case scenario, I'd take it over entirely.