I'm super impressed by all the work and the good intentions. Thank you for this! Please take my subsequent text in the spirit of trying to help bring about good long-term outcomes.
Fundamentally, I believe that a major component of LW's decline isn't in the primary article and isn't being addressed. Basically, many of the people who drifted away over time were (1) lazy, (2) insightful, (3) unusual, and (4) willing to argue with each other in ways that probably felt to them like fun rather than work.
These people were a locus of much value, and their absence is extremely painful from the perspective of having interesting arguments happening here on a regular basis. Their loss seems to have paralleled a general decrease in public acceptance of agonism in the English-speaking political world, and a widespread cultural retreat from substantive longform internet debate, which is specifically relevant to LW 2.0.
My impression is that part of why people drifted away was that ideologically committed people swarmed into the space and tried to pull it in various directions that had little to do with what I see as the unifying theme of almost all of Eliezer's writing.
The fun...
Thank you all so much for doing this!
Eigenkarma should be rooted in the trust of a few accounts that are named in the LW configuration. If this seems unfair, then I strongly encourage you not to pursue fairness as a goal at all - I'm all in favour of a useful diversity of opinion, but I think Sybil attacks make fairness inherently synonymous with trivial vulnerability.
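For concreteness, here is a minimal sketch of what trust-rooted eigenkarma could look like, assuming a personalized-PageRank formulation; the function name, damping value, and iteration count are all illustrative, not anything the LW team has committed to:

```python
import numpy as np

def seeded_eigenkarma(upvotes, seed_indices, damping=0.85, iters=100):
    """Personalized PageRank over the upvote graph. upvotes[i, j] is the
    number of upvotes user j gave user i. Trust 'teleports' only to the
    seed accounts named in the site config, so a ring of sockpuppets
    upvoting each other earns nothing unless someone on a trusted path
    upvotes one of them first."""
    n = upvotes.shape[0]
    col_sums = upvotes.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    M = upvotes / col_sums  # each user distributes one unit of trust
    seed = np.zeros(n)
    seed[seed_indices] = 1.0 / len(seed_indices)
    rank = seed.copy()
    for _ in range(iters):
        rank = damping * (M @ rank) + (1 - damping) * seed
    return rank
```

The Sybil resistance comes entirely from the teleport vector: an attacker's cluster only gains trust in proportion to the votes flowing into it from the seeded region of the graph.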
I am not sure whether votes on comments should be treated as votes on people. I think that some people might make good comments who would be bad moderators, while I'd vote up the weight of Carl Shulman's votes even if he never commented.
The feature map link seems to be absent.
Have you done user interviews and testing with people who it would be valuable to have contribute, but who are not currently in the rationalist community? I'm thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned.
You should just test this empirically, but here are some vague ideas for how you could increase the credibility of the site to these people:
I feel that this comment deserves a whole post in response, but I probably won't get around to that for a while, so here is a short summary:
I generally think people have confused models about what forms of weirdness are actually costly. The much more common failure mode for online communities is being boring and uninteresting. The vast majority of the most popular online forums are really weird and have a really strong, distinct culture. The same is true for religions. There are forms of weirdness that prevent you from growing, but I feel that implementing the suggestions in this comment in a straightforward way would mostly result in the forum becoming boring and actually stunting its meaningful growth.
LessWrong is more than just weird in a general sense. A lot of the things that make LessWrong weird are actually the result of people having thought about how to have discourse, and then actually implementing those norms. That doesn't mean that they got it right, but if you want to build a successful intellectual community you have to experiment with norms around discourse, and avoiding weirdness puts a halt to that.
I actually think that one of the biggest problems with Effecti
You're mainly arguing against my point about weirdness, which I think was less important than my point about user testing with people outside of the community. Perhaps I could have argued more clearly: the thing I'm most concerned about is that you're building LessWrong 2.0 for the current rationality community, rather than thinking about what kinds of people you want to be contributing to it and learning from it, and building it for them. So it seems important to do some user interviews with people outside of the community who you'd like to join it.
On the weirdness point: maybe it's useful to distinguish between two meanings of 'rationality community'. One meaning is the intellectual community of people who further the art of rationality. Another meaning is more of a cultural community: a set of people who know each other as friends, have similar lifestyles and hobbies, like the same kinds of fiction, share in-jokes, etc. I'm concerned that LessWrong 2.0 will select for people who want to join the cultural community, rather than people who want to join the intellectual community. But the intellectual community seems much more important. This then gives us two types of weirdness: weir...
I've heard that in some cases, humans regard money as an incentive.
Integrating Patreon, Paypal or some existing micropayments system could allow users to not only upvote but financially reward high-value community members.
If Less Wrong had a little "support this user on Patreon" icon next to every poster's username, I would certainly have thrown some dollars at more than a handful of Less Wrong posters. Put more explicitly - maybe Yvain and Eliezer would be encouraged to post certain content on LW2.0 rather than SSC/Facebook if they reliably got a little cash from the community at large every time they did it.
Speaking of the uses of money, I'm fond of communities that are free to read but require a small registration fee in order to post. Such fees are a practically insurmountable barrier to trolls. Eugine Nier could not have done what he did if registering an account cost $10, or even $1.
Does anyone know the literature on intrinsic motivation well enough to comment on whether paying users to post is liable to undermine other sources of motivation?
The registration fee idea is interesting, but exacerbates the chicken and egg problem inherent in online communities. I also have a hunch that registration fees tend to make people excessively concerned with preserving their account's reputation (so they can avoid getting banned and losing something they paid money for), in a way that's cumulatively harmful to discourse, but I can't prove this.
Yep!
As one might expect, money often works against lasting habit formation.
EDIT: Additional clarification:
The first link shows that monetary payment is only effective as a short-term motivator.
The second link is a massive study involving almost 2,000 people which tried to pay people to go to the gym. It found that after the payment period ended, gym attendance fell back to roughly pre-payment levels.
if you’ve read all of a sequence you get a small badge that you can choose to display right next to your username, which helps people gauge how much of the content of the page you are familiar with.
Idea: give sequence-writers the option to include quizzes, because this (1) demonstrates that a badgeholder actually understands what the badge indicates they understand (or at least makes that more likely) and (2) leverages the testing effect.
I await the open beta eagerly.
Also I have already read them all more than once and don't plan to do so again just to get the badge :)
In any case, I like the idea, although it may be in the backlog for awhile.
Although it occurs to me that the benefit of an open-source codebase that is actually reasonable to learn is that anyone who wants something like this to happen can just make it happen.
Thank you for making this website! It looks really good and like someplace I might want to crosspost to.
If I may make two suggestions:
(1) It doesn't seem clear whether Less Wrong 2.0 will also have a "no politics" norm, but if it doesn't I would really appreciate a "no culture war" tag which alerts the moderators to nuke discussion of race, gender, free speech on college campuses, the latest outrageous thing [insert politician here] did, etc. I think that culture war stuff is salacious enough that people love discussing it in spite of its obvious unimportance, and it would be good to have a way to dissuade that. Personally, I've tended to avoid online rationalist spaces where I can't block people who annoy me, because culture war stuff keeps coming up and when interacting with certain people I get defensive and upset and not in a good frame for discussion at all.
(2) Some inconspicuous way of putting in assorted metadata (content warnings, epistemic statuses, that sort of thing) so that interested people can look at them but they are not taking up the first 500 words of the post.
I would strongly support just banning culture war stuff from LW 2.0. Those conversations can be fun, but they require disproportionately large amounts of work to keep the light / heat ratio decent (or indeed > 0), and they tend to dominate any larger conversation they enter. Besides, there's enough places for discussion of those topics already.
(For context: I moderate /r/SlateStarCodex, which gets several thousand posts in its weekly culture war thread every single week. Those discussions are a lot less bad than culture war discussions on the greater internet, I think, and we do a pretty good job keeping discussion to that thread only, but maintaining both of these requires a lot of active moderation, and the thread absolutely affects the tone of the rest of the subreddit even so.)
I expect the norm to be "no culture war" and "no politics" but there to be some flexibility. I don't want to end up with a LW where, say, this SSC post would be banned, and banning discussions of the rationality community that might get uncomfortable seems bad, and so on, but also I don't want to end up with a LW that puts other epistemic standards in front of rationality ones. (One policy we joked about was "no politics, unless you're Scott," and something like allowing people to put it on their personal page but basically never promoting it accomplishes roughly the same thing.)
Sorry, this might not be clear from the comment, but as a prospective writer I was primarily thinking about the comments on my posts. Even if I avoid culture war stuff in my posts, the comment section might go off on a tangent. (This is particularly a concern for me because of course my social-justice writing is the most well-known, so people might be primed to bring it up.) On my own blog, I tend to ban people who make me feel scared and defensive; if I don't have this capability and people insist on talking about culture-war stuff in the comments of my posts anyway, being on LW 2.0 will probably be unpleasant and aversive enough that I won't want to do it. Of course, I'm just one person and it doesn't make sense to set policy based on luring me in specific; however, I suspect this preference is common enough across political ideologies that having a way to accommodate it would attract more writers.
I would really appreciate a "no culture war" tag which alerts the moderators to nuke discussion of race, gender, free speech on college campuses, the latest outrageous thing [insert politician here] did, etc.
To clarify: you want people to be able to apply this tag to their own posts, and in posts with it applied, culture war discussion is forbidden?
I approve of this.
I also wonder if it would be worth exploring a more general approach, where submitters have some limited mod powers on their own posts.
I feel more optimistic about this project after reading this! I like the idea of curation being a separate action and user-created sequence collections that can be voted on. I'm... surprised to learn that we had view tracking that can figure out how much of the Sequences I have read? I didn't know about that at all. The thing that pushed me from "I hope this works out for them" to "I will bother with this myself" is the Medium-style individual blog page; that strikes a balance between desiderata in a good place for me, and I occasionally idly wish for a place for thoughts of the kind I would tweet and the size I would tumbl but wrongly themed for my tumblr.
I don't like the font. Serifs on a screen are bad. I can probably fix this client side or get used to it but it stood out to me a surprising amount. But I'm excited overall.
I don't like the font. … I can probably fix this client side or get used to it but it stood out to me a surprising amount.
My other comment aside, this is (apart from the general claim) a reasonable user concern. I would recommend (to the LW 2.0 folks) the following simple solution:
This should satisfy most people, and would still preserve the site's aesthetics.
As with many such things, there are standard, canonical solutions to your concerns.
In this case, the answer is "select pairs/sets of fonts that are specifically designed to have the same width in both the serif and the sans variants". There are many such "font superfamilies". If you'd like, I can draw up a list of recommendations. (It would be helpful if you could let me know your constraints w.r.t. licensing and budget.)
Theme variants do not have to be comprehensive redesigns. It is eminently possible to design a set of themes that will not lead to the content being perceived very differently depending on the active theme.
P.S.:
Overall, my hypothesis is that Alicorn might not dislike serif fonts in general, but might be unhappy about our specific choice of serif fonts, which is indeed very serify.
I suspect the distinction you're looking for, here, is between transitional serifs (of which Charter, the Medium font, is one, although it's also got slab-serif elements) and the quite different old-style serifs (of which ET Book, the current LW 2.0 font, is one). (There are also other differences, orthogonal to that distinction—such as ET Book's considerably smaller x...
Here's what I think is the conventional wisdom about serif/sans-serif; I don't think it is in any way contradicted by the material you've linked to.
Text that is small when measured in display pixels is generally harder to read fluently when set in a typeface with serifs.
Only interested in readers with lovely high-DPI screens? Go ahead, use serifs everywhere; it'll probably be fine. Writing a headline, or a splash screen with like 20 words on it? Use serifs if they create the effect you want; the text won't be small enough, nor will there be enough of it in a block, for there to be a problem.
But if you are choosing a typeface for substantial chunks of text that might be read on a not-so-great screen, you will likely get better results with a sans-serif typeface.
So, what about those domain experts? Jakob Nielsen is only addressing how things look on "decent computer screens with pixel densities of 220 PPI or more". Design Shack article 1 says that a blanket prohibition on serifed typefaces on screens is silly, which it is. But look at the two screenshots offered as counterexamples to "Only use serifs in print". One has a total of seven words in it. The other has a...
What will happen with existing LW posts and comments? I feel strongly that they should all stay accessible at their old URLs (though perhaps with new design).
All old links will continue working. I've put quite a bit of effort into that, and this was one of the basic design requirements we built the site around.
"Basic design requirements" seems like it's underselling it a bit; this was Rule 0 that would instantly torpedo any plan where it wasn't possible.
It's also worth pointing out that we've already done one DB import (lesserwrong.com has all the old posts/comments/etc. as of May of this year) and will do another DB import of everything that's happened on LW since then, so that LW moving forward will have everything from the main site and the beta branch.
Sounds great!
Is there anything important I missed?
This analysis found that LW's most important issue is lack of content. I think there are two models that are most important here.
There's the incentives model: making it so that good writers have a positive hedonic expectation for creating content. There's a sense in which an intellectual community online is much more fragile than an intellectual community in academia: academic communities can offer funding, PhDs, etc., whereas internet discussion is largely motivated by pleasure that's intrinsic to the activity. As a concrete example, the way Facebook lets you see the name of each person who liked your post is good, because then you can take pleasure in each specific person who liked it, instead of just knowing that X strangers somewhere on the internet liked it. Contrast with academia, which plods on despite frequently being hellish.
And then there's the chicken-and-egg model. Writers go where the readers are and readers go where the writers are. Interestingly, sometimes just 1 great writer can solve this problem and bootstrap a community: both Eliezer and Yvain managed to create communities around their writing single-handedly.
T...
Thank you for developing this.
I'm reminded of an annoying feature of LW 1.0. The search function was pretty awful. The results weren't even in reverse chronological order.
I'm not sure how important better search is, but considering your very reasonable emphasis on continuity of discussion, it might matter a lot.
Requiring tags while offering a list of standard tags might also help.
I'm hoping there will be something like the feature at SSC to choose the time when the site considers comments to be new. It's frustrating not to be able to recover the pink borders on new comments on posts at LW.
Firstly, well done on all your hard work! I'm very excited to see how this will work out.
Secondly, I know that this might be best after the vote, but don't forget to take advantage of community support.
I'm sure that if you set up a Kickstarter or similar, that people would donate to it, now that you've proven your ability to deliver.
I also believe that, given how many programmers we have here, many people will want to make contributions to the codebase. My understanding was that this wasn't really happening before: a) because the old codebase was messy and extremely difficult to get up and running, and b) because it wasn't clear who to talk to if you wanted to know whether your changes were likely to be approved if you made them.
It looks like a) has been solved, if you also improve b), then I expect a bunch of people will want to contribute.
I also agree that HPMOR might need to go somewhere other than the front page. From a strategic perspective, I somehow want to get the benefits of HPMOR existing (publicity, new people finding the community) without the drawbacks (it being too convenient to judge our ideas by association).
Thank you, very much for making this effort! I love the new look of the site — it reminds me of http://practicaltypography.com/ which is (IMO) the nicest looking site on the internet. I also like the new font.
Some feedback, especially regarding the importing of old posts.
Firstly, I'm impressed by the fact that the old links (with s/lesswrong.com/lesserwrong.com/) seem to consistently redirect to the correct new locations of the posts and comments. The old anchor tag links (like http://lesswrong.com/lw/qx/timeless_identity/#kl2 ) do not work, but with the
I think adding a collection of the best Overcoming Bias posts, including posts like "you are never entitled to your own opinion" to the front page would be a great idea, and it might be better than putting a link to HPMOR (some users seem to believe that linking HPMOR on the front page may come across as puerile).
On StackExchange, you lose reputation whenever you downvote an answer; this makes downvoting a costly signal of displeasure. I like the notion, and hope it is included in the new site. If you have to spend your hard-earned karma to cause someone to lose karma, then it may discourage karma assassination, and ensure that downvotes are only used on content people have strong negative feelings towards.
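As a toy illustration of the rule being described (the specific costs here mirror StackExchange's values for answer downvotes, and are not a proposal for LW's exact numbers):

```python
from dataclasses import dataclass

@dataclass
class Account:
    karma: int

def apply_downvote(voter: Account, target: Account,
                   voter_cost: int = 1, target_penalty: int = 2) -> None:
    """StackExchange-style costly downvote: the voter pays a small
    reputation fee and the target loses more, so downvoting is
    rationed by the voter's own karma."""
    if voter.karma < voter_cost:
        raise ValueError("not enough karma to downvote")
    voter.karma -= voter_cost
    target.karma -= target_penalty
```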
## Pros
Thank you for doing this!
Not a comment on the overview, but on LW2.0 itself: are you intentionally de-emphasizing comment authorship by making the author names show up in a smaller font than the text of the comment? Reading the comments under the roadmap page, it feels slightly annoying that the author names are small enough that my brain ignores them instead of registering them automatically, and then I have to consciously re-focus my attention to see who wrote a comment, each time that I read a new comment.
I'm not really sure how shortform stuff could be implemented either, but I have a suggestion on how it can be used: jokes!
Seriously. If you look at Scott's writing, for example, one of the things which makes it so gripping is the liberal use of amusing phrasing, and mildly comedic exaggerations. Not the sort of thing that makes you actually laugh, but just the sort of thing that is mildly amusing. And, I believe he specifically recommended it in his blog post on writing advice. He didn't phrase his reasoning quite like this, but I think of it as little bit...
My concern around the writing portion of your idea is this: from my point of view, the biggest problem with LessWrong is that the sheer quantity of new content is extremely low. In order for a LessWrong 2.0 to succeed, you absolutely have to get more people spending the time and effort to create great content. Anything you do to make it harder for people to contribute new content will make that problem worse. Especially anything that creates a barrier for new people who want to post something in discussion. People will not want to write content tha...
(a) Thanks for making the effort!
(b)
"I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation."
This won't work, for the same reason PageRank did not work: you can game it by collusion. Communities are excellent at collusion. I think the important thing to do is to make toxic people (defined in a socially constructed way as people you don't want around) go away. I don't think ranking posts from best to worst among the folks who remain is that helpful. People will know quality without numbers.
"This won't work, for the same reason PageRank did not work"
I am very confused by this. Google's search vastly outperformed its competitors with PageRank and is still using a heavily tweaked version of PageRank to this day, delivering by far the best search on the market. It seems to me that PageRank should widely be considered to be the most successful reputation algorithm that has ever been invented, having demonstrated extraordinary real-world success. In what way does it make sense to say "PageRank did not work"?
Given that, it seems equally valid to say "this will work, for the same reason that PageRank worked", i.e., we can also tweak the reputation algorithm as people try to attack it. We don't have as many resources as Google, but then we also don't face as many attackers (with as strong incentives) as Google does.
I personally do prefer a forum with karma numbers, to help me find quality posts/comments/posters that I would likely miss or have to devote a lot of time and effort to sift through.
I think votes have served several useful purposes.
Downvotes have been a very good way of enforcing the low-politics norm.
When there's lots of something, you often want to sort by votes, or some ranking that mixes votes and age. Right now there aren't many comments per thread, but if there were 100 top-level comments, I'd want votes. Similarly, as a new reader, it was very helpful to me to look for old posts that people had rated highly.
Curious as to why you think that LW2.0 will have a problem with gaming karma when LW1.0 hasn't had such a problem (unless you count Eugine, and even if you do, we've been promised the tools for dealing with Eugines now).
I don't think ranking posts from best to worst among the folks who remain is that helpful. People will know quality without numbers.
Ranking helps me know what to read.
The SlateStarCodex comments are unusable for me because nothing is sorted by quality, so what's at the top is just whoever had the fastest fingers and least filter.
Maybe this isn't a problem for fast readers (I am a slow reader), but I find automatic sorting mechanisms to be super useful.
This. SSC comments I basically only read if there are very few of them, because of the lack of karma; on LW even large discussions are actually readable, thanks to karma sorting.
As long as it's not anti-correlated with quality, it helps.
It doesn't matter if the top comment isn't actually the very best comment. So long as the system does better than random, I as a reader benefit.
Oli and I disagree somewhat on voting systems. I think you get a huge benefit from doing voting at all, a small benefit from doing simple weighted voting (including not allowing people below ~10 karma to vote), and then there's not much left from complicated vote weighting schemes (like eigenkarma or so on). Part of this is because more complicated systems don't necessarily have more complicated gaming mechanics.
There are empirical questions involved; we haven't looked at, for example, the graph of what karma converges to if you use my simplistic vote weighting scheme vs. an eigenkarma scheme, but my expectation is a very high correlation. (I'd be very surprised if it were less than .8, and pretty surprised if it were less than .95.)
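That empirical question is cheap to probe on synthetic data. A hypothetical toy comparison (random vote graph, no attempt to model real user behavior, so the printed correlation says nothing about what LW's actual graph would show):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# votes[i, j] = 1 if user j upvoted user i's content
votes = (rng.random((n, n)) < 0.02).astype(float)
np.fill_diagonal(votes, 0)

# Scheme 1: simple aggregation -- karma is the total upvotes received.
simple = votes.sum(axis=1)

# Scheme 2: eigenkarma -- stationary vector of the damped,
# column-normalized vote matrix, found by power iteration.
col = votes.sum(axis=0)
col[col == 0] = 1.0
M = 0.85 * votes / col + 0.15 / n
eig = np.ones(n) / n
for _ in range(100):
    eig = M @ eig
    eig /= eig.sum()

print(np.corrcoef(simple, eig)[0, 1])
```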
I expect the counterfactual questions--"how would Manfred have voted if we were using eigenkarma instead of simple aggregation?"--to not make a huge difference in practice, although they may make a difference for problem users.
Main benefits to karma are feedback for writers (both informative and hedonic) and sorting for attention conservation. Main costs are supporting the underlying tech, transparency / explaining the system, and dealing with efforts to game it.
(For example, if we just clicked a radio button and we had eigenkarma, I would be much more optimistic about it. As is, there are other features I would much rather have.)
If you write a post, it first shows up nowhere else but your personal user page, which you can basically think of as a Medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages
Some questions about this (okay if you don't have answers now):
I'd love to see achieved the goal of an active rationalist-hub and I think this might be a method that can lead to it.
Ironically, after looking at the post you made on lesserwrong that combines various Facebook posts, I see that Eliezer unknowingly demonstrates the exact issue: "because of that thing I wrote on FB somewhere". Had it been one of his old LW posts, he would have linked to it. Instead, the explanation is missing for those who aren't up to date on his entire FB feed.
Thanks for the work that you've put into this.
We've actually talked a bit with Eliezer about importing his past and future facebook and tumblr essays to LW 2.0, and I think this is a plausible thing we'll do after launch. I think it will be good to have his essays be more linkable and searchable (and the people I've said this to tend to excitedly agree with me on this point).
(I'm Ben Pace, the other guy working full time on LW 2.0)
Please do this. This alone would be enough to get me to use and link LW 2.0, at least to read stuff on it.
UPDATE (Fri Sep 15 14:56:28 PDT 2017): I'll put my money where my mouth is. If the LW 2.0 team uploads at least 15 pieces of content authored by EY of a length at least one paragraph each from Facebook, I'll donate 20 dollars to the project.
Preferably in a way where I can individually link them, but just dumping them on a public web page would also be acceptable in strict terms of this pledge.
(As it happens, that particular post ("why you absolutely need 4 layers of conversation in order to have real progress") was un-blackholed by Alyssa Vance: https://rationalconspiracy.com/2017/01/03/four-layers-of-intellectual-conversation/)
To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong:
I notice that this picture doesn't seem to include link posts. Will those still exist?
We have link post functionality, but I think we're trying to shift away from it, and instead more directly solve the problem of people-posting-to-other-blogs (both by making it a better experience to post things here in your personal section, and by making it possible to post things to your blog that are auto-imported into LW).
I will say that lesserwrong is already useful to me, and I'm poking around reading a few things. I haven't been on LessWrong (this site) in a long time before just now, and only got here because I was wondering where this "LesserWrong" site came from. So, at the very least, your efforts are reaching people like me who often read and sometimes change their behavior based on posts, but rarely post themselves. Thanks for all the work you did - the UX end of the new site is much, much better.
I've always been half-way interested in LessWrong. SlateStar, Robin Hanson, and Bryan Caplan have been favorite reading for a very long time. But every once in a while I'd have a look at LessWrong, read something, and forget about it for months at a time.
After the rework I find this place much more appealing. I created a profile and I'm even commenting. I hope one day I can contribute. But honestly, I feel 200% better about just browsing and reading.
Great job.
What would make you personally use the new LessWrong?
Quality content. Quality content. And quality content.
Is there any specific feature that would make you want to use it?
The features which I would most like to see:
Wiki containing all or at least most of the jargon.
Rationality quotations all in one file alphabetically ordered by author of the quote.
Book reviews and topical reading lists.
Pie in the sky: the Yudkowsky sequences edited, condensed, and put into an Aristotelian/Thomistic/Scholastic order. (Not that Aristotle or Thomas Aquinas eve...
I've often faced frustration (I access LW from mobile) from accidentally clicking the "close" button, which is often not visible when typing in portrait mode (my phone can't show the comment while typing in landscape, and I'm used to the portrait keyboard), resulting in me losing the entire comment. This is very demotivating, and quite frustrating. I hope that this is not a problem in LessWrong 2.0, and that functionality for saving drafts of comments is added.
Two things I'd like to see:
1) Some sort of "example-pedia" where, in addition to some sort of glossary, we're able to crowd-source examples of the concepts to build upon understanding. I think examples continue to be in short supply, and that's a large understanding gap, especially when we deal with concepts unfamiliar to most people.
2) Something similar to Arbital's hover-definitions, or a real-time searchable glossary that's easily available.
I think the above two things could be very useful features, given the large swath of topics we like to discuss, from cognitive psych to decision theory, to help people more invested in one area more easily swap to reading stuff in another area.
This sounds very promising. The UI looks like a site from 2017 as well (as opposed to the previous 2008 feel). The design is very aesthetically pleasing.
I'm very excited about the personal blog feature (posting our articles to our page is basically like a blog).
How long would the open beta last?
I think one big problem with using the Reddit codebase was that, while there was a lot of ongoing development of that code, we couldn't simply copy it over, since adapting it for LW required editing the source code directly.
Given that you now published the code under an MIT license, I ask myself whether it would be good to have a separate open source project for the basic engine behind the website that can be used by different communities.
The effective altruism forum also used a Reddit fork and might benefit from using the basic engine behind the webs...
I would favor the option to hide comments' scores while retaining their resultant organization (best/popular/controversial/etc). I have the sense that I'm biased toward comments with higher scores even before I've read them, which is counterproductive to my ability to evaluate arguments on their own merit.
LW2.0 doesn't seem to be live yet, but when it is, will I be able to use my 1.0 username and password?
On StackExchange, upvotes and downvotes from accounts with less than 15 rep are recorded but don't count (presumably until the account gains more than 15 rep). LW may decide to set the bar lower (10 rep?) or higher (>= 20 rep?), but I think the core insight is very good and would be a significant improvement if applied to LW.
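A sketch of that rule, with the threshold as a config value (whether recorded sub-threshold votes ever retroactively count is the commenter's presumption, so this just re-tallies from the recorded votes each time):

```python
from dataclasses import dataclass

MIN_KARMA_TO_COUNT = 10  # illustrative; StackExchange uses 15

@dataclass
class Vote:
    voter_karma: int
    value: int  # +1 or -1

def tally(votes: list[Vote]) -> int:
    """Every vote is recorded, but only votes from accounts at or above
    the threshold count toward the displayed score."""
    return sum(v.value for v in votes if v.voter_karma >= MIN_KARMA_TO_COUNT)

print(tally([Vote(50, +1), Vote(3, +1), Vote(12, -1)]))  # -> 0
```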
Update: We're in open beta! At this point you will be able to sign up / login with your LW 1.0 accounts (if the latter, we did not copy over your passwords, so hit "forgot password" to receive a password-reset email).
Hey Everyone!
This is the post for discussing the vision that I and the rest of the LessWrong 2.0 team have for the new version of LessWrong, and to just generally bring all of you up to speed with the plans for the site. This post has been overdue for a while, but I was busy coding on LessWrong 2.0, and I am myself not that great of a writer, which means writing things like this takes quite a long time for me, and so this ended up being delayed a few times. I apologize for that.
With Vaniver’s support, I’ve been the primary person working on LessWrong 2.0 for the last 4 months, spending most of my time coding while also talking to various authors in the community, doing dozens of user-interviews and generally trying to figure out how to make LessWrong 2.0 a success. Along the way I’ve had support from many people, including Vaniver himself who is providing part-time support from MIRI, Eric Rogstad who helped me get off the ground with the architecture and infrastructure for the website, Harmanas Chopra who helped build our Karma system and did a lot of user-interviews with me, Raemon who is doing part-time web-development work for the project, and Ben Pace who helped me write this post and is basically co-running the project with me (and will continue to do so for the foreseeable future).
We are running on charitable donations, with $80k in funding from CEA in the form of an EA grant and $10k in donations from Eric Rogstad, which will go to salaries and various maintenance costs. We are planning to continue running this whole project on donations for the foreseeable future, and legally this is a project of CFAR, which helps us a bunch with accounting and allows people to get tax benefits from giving us money.
Now that the logistics are out of the way, let’s get to the meat of this post. What is our plan for LessWrong 2.0, what were our key assumptions in designing the site, what does this mean for the current LessWrong site, and what should we as a community discuss more to make sure the new site is a success?
Here’s the rough structure of this post:
Why bother with LessWrong 2.0?
I feel that independently of how many things were and are wrong with the site and its culture, overall, over the course of its history, it has been one of the few places in the world that I know of where a spark of real discussion has happened, and where some real intellectual progress on actually important problems was made. So let me begin with a summary of things that I think the old LessWrong got right, that are essential to preserve in any new version of the site:
On LessWrong…
When making changes to LessWrong, I think it is very important to preserve all of the above features. I don’t think all of them are universally present on LessWrong, but all of them are there at least some of the time, and no other place that I know of comes even remotely close to having all of them as often as LessWrong has. Those features are what motivated me to make LessWrong 2.0 happen, and set the frame for thinking about the models and perspectives I will outline in the rest of the post.
I also think Anna, in her post about the importance of a single conversational locus, says another, somewhat broader thing, that is very important to me, so I’ve copied it in here:
The Existing Discussion Around LessWrong 2.0
Now that I’ve given a bit of context on why I think LessWrong 2.0 is an important project, it seems sensible to look at what has been said so far, so we don’t have to repeat the same discussions over and over again. There has already been a lot of discussion about the decline of LessWrong, the need for a new platform and the design of LessWrong 2.0, and I won’t be able to summarize it all here, but I can try my best to summarize the most important points, and give a bit of my own perspective on them.
Here is a comment by Alexandros, on Anna’s post I quoted above:
I think Alexandros hits a lot of good points here, and luckily these are actually some of the problems I am most confident we have solved. The biggest bottleneck – the thing that I think caused most other problems with LessWrong – is simply that there was nobody with the motivation, the mandate and the resources to fight against the inevitable decline into entropy. I feel that the correct response to the question of “why did LessWrong decline?” is to ask “why should it have succeeded?”.
In the absence of anyone with the mandate trying to fix all the problems that naturally arise, we should expect any online platform to decline. Most of the problems that will be covered in the rest of this post are things that could have been fixed many years ago, but simply weren’t, because nobody with the mandate put significant resources into fixing them. I think the cause for this was a diffusion of responsibility, and a lot of vague promises of problems getting solved by vague projects in the future. I myself put off working on LessWrong for a few months because I had some vague sense that Arbital would solve the problems that I was hoping to solve, even though Arbital never really promised to solve them. Then Arbital’s plan ended up not working out, and I had wasted months of precious time.
Since this comment was written, Vaniver has been somewhat unanimously declared benevolent dictator for life of LessWrong. He and I have gotten various stakeholders on board, received funding, have a vision, and have free time – and so we have the mandate, the resources and the motivation to not make the same mistakes. With our new codebase, link posts are now something I can build in an afternoon, rather than something that requires three weeks of getting permissions from various stakeholders, performing complicated open-source and confidentiality rituals, and hiring a new contractor who has to first understand the mysterious Reddit fork from 2008 that LessWrong is based on. This means at least the problem of diffusion of responsibility is solved.
Scott Alexander also made a recent comment on Reddit on why he thinks LessWrong declined, and why he is somewhat skeptical of attempts to revive the website:
At least judging from where my efforts went, I would agree that I have spent a pretty significant amount of resources on fixing the problems that Scott described in points 6 and 7, but I also spent about equal time thinking about how to fix 1-5. The broader perspective that I have on those latter points is, I think, best illustrated in an analogy:
When I read Scott’s comments about how there was just a lot of embarrassing and weird writing on LessWrong, I remember my experiences as a Computer Science undergraduate. When the median undergrad makes claims about the direction of research in their field, or some other big claim about their field that isn't explicitly taught in class, or when you ask an undergraduate physics student what they think about how to do physics research, or what ideas they have for improving society, you will often get quite naive-sounding answers (I have heard everything from “I am going to build a webapp to permanently solve political corruption” to “here’s my idea of how we can transmit large amounts of energy wirelessly by using low-frequency tesla-coils”.) I don’t think we should expect anything different on LessWrong. I actually think we should expect it to be worse here, since we are actively encouraging people to have opinions, as opposed to the more standard practice of academia, which seems to consist of treating undergraduates as slightly more intelligent dogs that need to be conditioned with the right mixture of calculus homework problems and mandatory class attendance, so that they might be given the right to have any opinion at all if they spend 6 more years getting their PhD.
So while I do think that Eliezer’s writing encouraged topics that were slightly more likely to attract crackpots, I think a large chunk of the weird writing is just a natural consequence of being an intellectual community that has a somewhat constant influx of new members.
And having undergraduates go through the phase where they have bad ideas, and then have it explained to them why their ideas are bad, is important. I actually think it’s key to learning any topic more complicated than high-school mathematics. It takes a long time until someone can productively contribute to the intellectual progress of an intellectual community (in academia it’s at least 4 years, though usually more like 8), and during all that period they will say very naive and silly sounding things (though less and less so as time progresses). I think LessWrong can do significantly better than 4 years, but we should still expect that it will take new members time to acclimate and get used to how things work (based on user-interviews with a lot of top commenters, it usually took something like 3-6 months until someone felt comfortable commenting frequently, and about 6-8 months until someone felt comfortable posting frequently. This strikes me as a fairly reasonable expectation for the future).
And I do think that we have many graduate students and tenured professors of the rationality community who are not Eliezer, and who do not sound like crackpots, that can speak reasonably about the same topics Eliezer talked about, and who I feel are acting with a very similar focus to what Eliezer tried to achieve. Luke Muehlhauser, Carl Shulman, Anna Salamon, Sarah Constantin, Ben Hoffman, Scott himself and many more, most of whose writing would fit very well on LessWrong (and often still ends up there).
But all of this doesn’t mean what Scott describes isn’t a problem. It’s still a bad experience for everyone to constantly have to read through bad first year undergrad essays, but I think the solution can’t involve those essays not getting written at all. Instead it has to involve some kind of way of not forcing everyone to see those essays, while still allowing them to get promoted if someone shows up who does write something insightful from day one. I am currently planning to tackle this mostly with improvements to the karma system, as well as changes to the layout of the site, where users primarily post to their own profiles and can get content promoted to the frontpage by moderators and high-karma members. A feed consisting solely of content of the quality of the average Scott, Anna, Ben or Luke post would be an amazing read, and is exactly the kind of feed I am hoping to create with LessWrong, while still allowing users to engage with the rest of the content on the site (more on that later).
I would very very roughly summarize what Scott says in the first 5 points as two major failures: first, a failure of separating the signal from the noise, and second, a failure of enforcing moderation norms when people did turn out to be crackpots or just unable to productively engage with the material on the site. Both are natural consequences of the abandonment of promoting things to main, the fact that discussion is by default ordered by recency rather than by some kind of scoring system, and the fact that the moderation tools were completely insufficient (more on the details of that in the next section).
My models of LessWrong 2.0
I think there are three major bottlenecks that LessWrong is facing (after the zeroth bottleneck, which is just that no single group had the mandate, resources and motivation to fix any of the problems):
I.
The first bottleneck for our community, and the biggest I think, is the ability to build common knowledge. On Facebook, I can read an excellent and insightful discussion, yet one week later I’ve forgotten it. Even if I remember it, I don’t link to the Facebook post (because linking to Facebook posts/comments is hard) and it doesn’t have a title, so I don’t casually refer to it in discussion with friends. On Facebook, ideas don’t get archived and built upon, they get discussed and forgotten. To put this another way, the reason we cannot build on the best ideas this community has had over the last five years is that we don’t know what they are. There are only fragments of memories of Facebook discussions, which maybe some other people remember. We have the Sequences, but there’s no way to build on them together as a community, and thus there is stagnation.
Contrast this with science. Modern science is plagued by many severe problems, but of humanity’s institutions it has perhaps the strongest record of being able to build successfully on its previous ideas. The physics community has this system where the new ideas get put into journals, and then eventually if they’re new, important, and true, they get turned into textbooks, which are then read by the upcoming generation of physicists, who then write new papers based on the findings in the textbooks. All good scientific fields have good textbooks, and your undergrad years are largely spent reading them. I think the rationality community has some textbooks, written by Eliezer (and we also compiled a collection of Scott’s best posts that I hope will become another textbook of the community), but there is no expectation that if you write a good enough post/paper that your content will be included in the next generation of those textbooks, and the existing books we have rarely get updated. This makes the current state of the rationality community analogous to a hypothetical state of physics, had physics no journals, no textbook publishers, and only one textbook that is about a decade old.
This seems to me to be what Anna is talking about - the purpose of the single locus of conversation is the ability to have common knowledge and build on it. The goal is to have every interaction with the new LessWrong feel like it is either helping you grow as a rationalist or has you contribute to lasting intellectual progress of the community. If you write something good enough, it should enter the canon of the community. If you make a strong enough case against some existing piece of canon, you should be able to replace or alter that canon. I want writing to the new LessWrong to feel timeless.
To achieve this, we’ve built the following things:
And there are some more features the team is hoping to build in this direction, such as:
II.
The second bottleneck is improving the signal-to-noise ratio. It needs to be possible for someone to subscribe to only the best posts on LessWrong, and only the most important content needs to be turned into common knowledge.
I think this is a lot of what Scott was pointing at in his summary about the decline of LessWrong. We need a way for people to learn from their mistakes, while also not flooding the inboxes of everyone else, and while giving people active feedback on how to improve in their writing.
The site structure:
To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong:
The writing experience:
If you write a post, it first shows up nowhere else but your personal user page, which you can basically think of as a Medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages (or only show up after it hits a certain karma threshold, if the users who subscribed to you set a minimum karma threshold). If you have enough karma you can decide to promote your content to the main frontpage feed (where everyone will see it by default), or a moderator can decide to promote your content (if you allowed promotion on that specific post). The frontpage itself is sorted by a scoring system based on the HN algorithm, which uses a combination of total karma and how much time has passed since the creation of the post.
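For reference, the commonly cited form of the HN ranking rule looks like this; the exact constants LW 2.0 will use are not specified in this post, so the gravity value below is just the standard published one:

```python
def frontpage_score(karma: int, age_hours: float, gravity: float = 1.8) -> float:
    """Hacker News-style ranking: karma pushes a post up,
    age steadily pulls it back down."""
    return (karma - 1) / (age_hours + 2) ** gravity

# A two-hour-old post with 20 karma outranks a day-old post with 80:
print(frontpage_score(20, 2))   # ~1.57
print(frontpage_score(80, 24))  # ~0.22
```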
If you write a good comment on a post, a moderator or a high-karma user can promote that comment to the frontpage as well, where we will also feature the best comments on recent discussions.
Meta
Meta will just be a section of the site to discuss changes to moderation policies, issues and bugs with the site, discussion about site features, as well as general site-policy issues. Basically the thing that all StackExchanges have. Karma here will not add to your total karma and will not give you more influence over the site.
Featured posts
In addition to the main thread, there is a promoted post section that you can subscribe to via email and RSS, which has on average three posts a week; for now these are just going to be chosen by moderators and editors on the site as the posts that seem most important to turn into common knowledge for the community.
Meetups (implementation unclear)
There will also be a separate section of the site for meetups and event announcements that will feature a map of meetups, and generally serve as a place to coordinate the in-person communities. The specific implementation of this is not yet fully figured out.
Shortform (implementation unclear)
Many authors (including Eliezer) have requested a section of the site for more short-form thoughts, more similar to the length of an average FB post. It seems reasonable to have a section of the site for that, though I am not yet fully sure how it should be implemented.
Why?
The goal of this structure is to allow users to post to LessWrong without their content being directly exposed to the whole community. Their content can first be shown to the people who follow them, or the people who actively seek out content from the broader community by scrolling through all new posts. Then, if a high-karma user among them finds their content worth posting to the frontpage, it will get promoted. The key to this is a larger userbase that has the ability to promote content (i.e. many more than have the ability to promote content to main on the current LessWrong), and the continued filtering of the frontpage based on the karma level of the posts.
The goal of all of these is to allow users to see good content at various levels of engagement with the site, while giving some personalization options so that people can follow the people they are particularly interested in, and while also ensuring that this does not sabotage the attempt at building common knowledge, by having the best posts from the whole ecosystem be featured and promoted on the frontpage.
The karma system:
Another thing I’ve been working on to fix the signal-to-noise ratio is to improve the karma system. It’s important that the people having the most significant insights are able to shape a field more. If you’re someone who regularly produces real insights, you’re better able to notice and bring up other good ideas. To achieve this we’ve built a new karma system, where your upvotes and downvotes weigh more if you have a lot of karma already. So far the current weighting is a very simple heuristic, whereby your upvotes and downvotes count for log base 5 of your total karma. Ben and I will post another top-level post to discuss just the karma system at some point in the next few weeks, but feel free to ask any questions now, and we will just include those in that post.
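In code, the heuristic described is roughly the following; the floor at a weight of 1 for low-karma users and the rounding are my assumptions about the edge cases, not confirmed behavior:

```python
import math

def vote_weight(total_karma: int) -> int:
    """A user's votes count for log base 5 of their total karma,
    floored at 1 so every eligible account has some voice (assumption)."""
    if total_karma < 5:
        return 1
    return max(1, round(math.log(total_karma, 5)))

# 25 karma -> weight 2; 125 -> 3; 625 -> 4
```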
(I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation. How trusted you are as a user (your karma) is based on how much trusted users upvote you, and the circularity of this definition is solved using linear algebra.)
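Since the circularity is described as being resolved with linear algebra, here is a minimal rendering of that idea; it is a sketch of eigendemocracy in general, not the system actually being built:

```python
import numpy as np

def eigenkarma(upvotes: np.ndarray, damping: float = 0.85) -> np.ndarray:
    """Your karma is the damped, vote-weighted sum of your upvoters'
    karma. The circular definition k = d*M*k + (1-d)/n is linear in k,
    so it can be solved directly rather than by iteration."""
    n = upvotes.shape[0]
    col = upvotes.sum(axis=0)
    col[col == 0] = 1.0
    M = upvotes / col  # column-normalized: each user hands out one unit of trust
    k = np.linalg.solve(np.eye(n) - damping * M,
                        (1 - damping) / n * np.ones(n))
    return k / k.sum()
```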
I am also interested in having some form of two-tiered voting, similarly to how Facebook has a primary vote interaction (the like) and a secondary interaction that you can access via a tap or a hover (angry, sad, heart, etc.). But the implementation of that is also currently undetermined.
III
The third and last bottleneck is an actually working moderation system that is fun for moderators to use, while also giving people whose content was moderated a sense of why, and of how they can improve.
The most common, basic complaint currently on LessWrong pertains to trolls and sockpuppet accounts, which the Reddit fork’s mod tools are vastly inadequate for dealing with (Scott's sixth point refers to this). Raymond Arnold and I are currently building more nuanced mod tools that include abilities for moderators to set the past/future votes of a user to zero, to see who upvoted a post, and to know the IP address that an account comes from (this will be ready by the open beta).
Besides that, we are currently working on cultivating a moderation group we are calling “Sunshine Regiment.” Members of the Sunshine Regiment will have the ability to take various smaller moderation actions around the site (such as temporarily suspending comment threads, making general moderating comments in a distinct font, and promoting content), and so will be able to generally shape the culture and content of the website to a larger degree.
The goal is moderation that goes far beyond dealing with trolls, and actively makes the epistemic norms a ubiquitous part of the website. Right now Ben Pace is thinking about moderation norms that encourage archiving and summarizing good discussion, as well as other patterns of conversation that will help the community make intellectual progress. He’ll be posting to the open beta to discuss what norms the site and moderators should have in the coming weeks. We're both in agreement that moderation can and should be improved, and that moderators need better tools, and would appreciate good ideas about what else to give them.
How you can help and issues to discuss:
The open beta of the site is starting in a week, and so you can see all of this for yourself. For the duration of the open beta, we’ll continue the discussion on the beta site. At the conclusion of the open beta, we plan to have a vote open to those who had a thousand karma or more on 9/13 to determine whether we should move forward with the new site design, which would move to the lesswrong.com url from its temporary beta location, or leave LessWrong as it is now. (As this would represent the failure of the plan to revive LW, this would likely lead to the site being archived rather than staying open in an unmaintained state.) For now, this is an opportunity for the current LessWrong community to chime in here and object to anything in this plan.
During the open beta (and only during that time) the site will also have an Intercom button in the bottom right corner that allows you to chat directly with us. If you run into any problems, or notice any bugs, feel free to ping us directly on there and Ben and I will try to help you out as soon as possible.
Here are some issues where I think discussion would be particularly fruitful:
The closed beta can be found at www.lesserwrong.com.
Ben, Vaniver, and I will be in the comments!