Hi Anna,
Please consider a few gremlins that are weighing down LW currently:
Eliezer's ghost -- He set the culture of the place, his posts are central material, he punctuated its existence with his explosions (and refusal to apologise), and then upped and left the community without actually acknowledging that his experiment (well-kept gardens etc.) has failed. As far as I know he is still the "owner" of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner, is it them?), who the moderators are, or who is working on it in general. I know Trike Apps are helping with development, but a part-time team is only marginally better than no team, and at least no team is an invitation for a team to step up.
The no-politics rule (related to #1) -- We claim to have some of the sharpest thinkers in the world, but for some reason shun discussing politics. Too difficult, we're told. A mindkiller! This cost us Yvain/Scott, who cited it as one of his reasons for starting Slate Star Codex, which now dwarfs LW. Oddly enough, I recently saw it linked from the front page of realclearpolitics.com.
Re: 1, I vote for Vaniver as LW's BDFL, with authority to decree community norms (re: politics or anything else), decide on changes for the site; conduct fundraisers on behalf of the site; etc. (He already has the technical admin powers, and has been playing some of this role in a low key way; but I suspect he's been deferring a lot to other parties who spend little time on LW, and that an authorized sole dictatorship might be better.)
Anyone want to join me in this, or else make a counterproposal?
Agree with both the sole dictatorship and Vaniver as the BDFL, assuming he's up for it. His posts here also show a strong understanding of the problems affecting Less Wrong on multiple fronts.
Who is empowered to set Vaniver or anyone else as the BDFL of the site? It would be great to get into a discussion of "who", but I wonder how much weight there will be behind this person. Where would the BDFL's authority emanate from? Would he be granted, for instance, ownership of the lesswrong.com domain? That would be a sufficient gesture.
I'm empowered to hunt down the relevant people and start conversations about it that are themselves empowered to make the shift. (E.g. to talk to Nate/Eliezer/MIRI, and Matt Fallshaw, who runs Trike Apps.)
I like the idea of granting domain ownership if we in fact go down the BDFL route.
I'll second the suggestion that we should consider other options. While I know Vaniver personally and believe he would do an excellent job, I think Vaniver would agree that considering other candidates too would be a wise choice. (Narrow framing is one of the "villains" of decision making in a book on decision making he suggested to me, Decisive.) Plus, I scanned this thread and I haven't seen Vaniver say he is okay with such a role.
I'm concerned that we're only voting for Vaniver because he's well known
Also because he already is a moderator (one of a few moderators), so he has already been trusted with some power, and here we are just saying that it seems okay to give him more powers. And because he has already done some useful things while moderating.
I do. I was a product manager for about a year, then founder for a while, and am now manager for a data science team, where part of my responsibilities are basically product management for the things related to the team.
That said, I don't think I was great at it, and suspect most of the lessons I learned are easily transferred.
Edit: I actually suspect that I've learned more from working with really good product managers than I have from doing any part of the job myself. It really seems to be a job where experience is relatively unimportant, but a certain set of general cognitive patterns is extremely important.
On the idea of a vision for a future, if I were starting a site from scratch, I would love to see it focus on something like "discussions on any topic, but with extremely high intellectual standards". Some ideas:
For the Russian LessWrong Slack chat we agreed on the following emoji semantics:
We also have 25 custom :fallacy_*: emoji for pointing out fallacies, and a few other custom emoji for other low-effort, low-noise signaling.
It all works quite well and after using it for a few months the idea of going back to simple upvotes/downvotes feels like a significant regression.
I think you're right that wherever we go next needs to be a clear Schelling point. But I disagree on some details.
I do think it's important to have someone clearly "running the place". A BDFL, if you like.
Please no. The comments on SSC are for me a case study in exactly why we don't want to discuss politics.
Something like reddit/hn involving humans posting links seems ok. Such a thing would still be subject to moderation. "Auto-aggregation" would be bad however.
Sure. But if you want to replace the karma system, be sure to replace it with something better, not worse. SatvikBeri's suggestions below seem reasonable. The focus should be on maintaining high standards and certainly not encouraging growth in new users at any cost.
I don't believe that the basilisk is the primary reason for LW's brand rust. As I see it, we squandered our "capital outlay" of readers interested in actually learning rationality (which we obtained due to the site initially being nothing but the sequences) by doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November). I, personally, stopped commenting almost completely quite a while ago, because doing so is no longer rewarding.
doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November).
This is important. One of the great things about LW is/was the "LW consensus", so that we don't constantly have to spend time rehashing the basics. (I dunno that I agree with everything in the "LW consensus", but then, I don't think anyone entirely did except Eliezer himself. When I say "the basics", I mean, I guess, a more universally agreed-on stripped down core of it.) Someone shows up saying "But what if nothing is real?", we don't have to debate them. That's the sort of thing it's useful to just downvote (or otherwise discourage, if we're making a new system), no matter how nicely it may be said, because no productive discussion can come of it. People complained about how people would say "read the sequences", but seriously, it saved a lot of trouble.
There were occasional interesting and original objections to the basics. I can't find it now but there was an interesting series of posts responding to this post of mine on Savage's theorem; this response argu...
I think the basilisk is at least a very significant contributor to LW's brand rust. In fact, guilt by association with the basilisk via LW is the reason I don't like to tell people I went to a CFAR workshop (because rationality -> "those basilisk people, right?")
I am working on a project with this purpose, and I think you will find it interesting:
It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.
It is based on the open source platform that I'm building:
https://github.com/raymestalez/nexus
This platform will address most of the issues discussed in this thread. It can be used both as a publishing/discussion platform and as a link aggregator, because it supports Twitter-like discussions, Reddit-like communities, and Medium-like long-form articles.
This platform is in active development, and I'm very interested in your feedback. If the LessWrong community needs any specific functionality that is not implemented yet, I will be happy to add it. Let me know what you think!
Strong writers enjoy their independence.
This is, I think, the largest social obstacle to reconstitution. Crossposting blog posts from the diaspora is a decent workaround, though -- if more than a few can be convinced to do it.
Speaking as a writer for different communities, there are 2 problems with this:
Duplicate content: unless one version is explicitly marked canonical via headers, Google is ambiguous about which version should rank for keywords. This hits small and upcoming authors like a ton of bricks, because by default the LW version is going to get ranked (on the basis of domain authority), their own content will be marked both as a duplicate and as spam, and their domain deranked as a result. (A concrete sketch of the canonical-link fix follows this comment.)
"An audience of your own": if a reasonable reader can reasonably assume, that "all good content will also be cross-posted to LW anyways", that strongly eliminates the reason why one should have the small blogger in their RSS reader / checking once a day in the first place.
The HN "link aggregator" model works because, by directly linking to a thing, you bump its ranking; if it ranks up to the main page, it drives an audience there, who can be captured (via RSS, or newsletters); participation therefore has limited downside.
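To make the duplicate-content point concrete, here is a minimal sketch, assuming placeholder URLs, of the standard fix: the crossposted copy declares the author's original as canonical, so search engines consolidate ranking onto the original instead of penalizing it as a duplicate.

```python
def canonical_tag(original_url: str) -> str:
    """Build the <link rel="canonical"> element that the crossposted page
    should include in its <head>, pointing back at the author's original."""
    return f'<link rel="canonical" href="{original_url}" />'

# E.g., the LW copy of a diaspora post would emit:
print(canonical_tag("https://example-blog.com/original-post"))
# <link rel="canonical" href="https://example-blog.com/original-post" />
```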
My willingness to cross-post from Putanumonit will depend on the standards of quality and tone in LW 2.0. One of my favorite things about LW was the consistency of the writing: the subject matter, the way the posts were structured, the language used, and the overall quality. Posting on LW was intimidating, but I didn't necessarily consider that a bad thing, because it meant that almost every post was gold.
In the diaspora, everyone sets their own standards. I consider myself very much a rationality blogger and get linked from r/LessWrong and r/slatestarcodex, but my posts are often about things like NBA stats or Pokemon, I use a lot of pictures and a lighter tone, and I don't have a list of 50 academic citations at the bottom of each post. I feel that much of my writing isn't a good fit for G Wiley's budding rationalist community blog, let alone old LW.
I guess what I'm saying is that there's a tradeoff between catching more of the diaspora and having consistent standards. The scale goes from old LW standards (strictest) -> cross posting -> links with centralized discussion -> blogroll (loosest). Any point on the scale could work, but it's important to recognize the tradeoff and also to make the standards extremely clear so that each writer can decide whether they're in or out.
On (4), does anyone have a sense of how much it would cost to improve the code base? Eg would it be approximately $1k, $10k, or $100k (or more)? Wondering if it makes sense to try and raise funds and/or recruit volunteers to do this.
I think you are underestimating this, and a better estimate is "$100k or more". With an emphasis on the "or more" part.
Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work.
Having "trouble to find people willing to do the work" usually means you are not paying enough to solve the problem. Market price, by definition, is a price at which you can actually buy a product or service, not a price that seems like it should be enough but you just can't find anyone able and/or willing to accept the deal.
The problem with volunteers is that the LW codebase needs too much highly specialized knowledge. You need Python and Ruby just to get a chance, and then you have to study code that was optimized for performance and backwards compatibility at the expense of legibility and extensibility. (Database-in-the-database antipattern; values precomputed and cached everywhere.) Most professional programmers are simply unable to contribute without spending a lot of time studying something they will never use again. For a person who has the necessary skills, $10k is about their monthly salary (if you include taxe...
At one point I was planning on making a contribution. It was difficult just getting the code set up, and there was very little documentation on the big picture of how everything was supposed to work. It is also very frustrating to run in development mode. For example, on Mac you have to run it from within a disk image, the VM didn't work, and setting up new user accounts for testing purposes was a huge pain.
I started trying to understand the code after it was set up, and it is an extremely confusing mess of concepts with virtually no comments, and I am fluent in web development with Python. After 4-6 hours I was making progress on understanding what I needed to make the change I was working on, but I wasn't there yet. I realized that making the first trivial contribution would probably take another 10-15 hours and stopped. The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.
The issues list on GitHub represents at least several hundred hours of work. I think 3 or 4 contributors could probably do a lot of damage in a couple months of free time, if it weren't quite so unenjoyable. $10K is definitely a huge underestimate for paying an outsider. I do think that a lot of valuable low-hanging fruit, like stopping karma abuses and providing better admin tools, could be done for $10-20K though.
The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.
Thanks for trying to work on that one!
setting up new user accounts for testing purposes was a huge pain.
This seems like the sort of thing that we should be able to include with whatever makes the admin account that's already there. I was watching someone running a test yesterday, and while I showed them the way to award accounts karma, I didn't know of a way to force the karma cache to invalidate, so they had to wait ~15 minutes to be able to actually make a post with their new test account.
These sorts of usability improvements (a pull request that just adds comments for a section of code you spent a few hours understanding, or an improvement to the setup script that makes the dev environment better) are sorely needed and greatly appreciated. In particular, don't feel at all bad about changing the goal from "I'm going to close out issue X" to "I'm going to make it less painful to have test accounts", since those sorts of improvements will probably lead to more than one issue getting closed out.
I'm new and came here from Sarah Constantin's blog. I'd like to build new infrastructure for LW, from scratch. I'm in a somewhat unique position to do so because I'm (1) currently searching for an open source project to do, and (2) taking a few months off before starting my next job, granting me the bandwidth to contribute significantly to this project. As it stands right now, I can commit to working full time on this project for the next three months. At that point, I will continue to work on the project part time; by then it should be robust enough to be used in an alpha or beta state and to attract devs to contribute to further development.
Here is how I envision the basic architecture of this project:
I w...
Well, if someone were willing to pay me for one year of full-time work, I would be happy to rewrite the LW code from scratch. Maybe one year is an overestimate, but maybe not -- there is this thing known as the planning fallacy. That would cost somewhat less than $100k. Let's say $100k, with that including a reserve for occasionally paying someone else to help me with some specific thing, if needed.
I am not saying that paying me for this job is a rational thing to do; let's just take this as an approximate estimate of the upper bound. (The lower bound is hoping that one day someone will appear and do it for free. Probably also not a rational thing to do.)
Maybe it was a mistake that I didn't mention this option sooner... but hearing all the talk about "some volunteers doing it for free in their free time" made me believe that this offer would be seen as exaggerated. (Maybe I was wrong. Sorry, can't change the past.)
I certainly couldn't do this in my free time. And trying to fix the existing code would probably take just as much time, the difference being that at the end, instead of new easily maintainable and extensible code, we would have the same old code with a few patc...
I really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really couldn't agree more.
There's an issue that I expect will be closed sometime this week that I think will round out the suite of technical tools that will give moderators the edge over trolls. Of course, people are intelligent and can adapt, so I'm not going to hang up a Mission Accomplished banner just yet.
I predict that whatever is in this drop will not suffice. It will require at minimum someone who has both significant time to devote to the project, and the necessary privileges to push changes to production.
I applaud this and am already participating by crossposting from my blog and discussing.
One thing that I like about using LW as a home base is that everyone knows what it is, for good and for ill. This has the practical benefit of not needing further software development before we can get started on the hard problem of attracting high-quality users. It also has the signaling benefit of indicating clearly that we're "embracing our roots", including reclaiming the negative stereotypes of LessWrongers. (Nitpicky, nerdy, utopian, etc.)
I am unusual in this community in taking "the passions" really seriously, rather than identifying as being too rational to be caught up in them. One of my more eccentric positions has long been that we ought to be a tribe. For all but a few unusual individuals, humans really want to belong to groups. If the group of people who explicitly value reason is the one group that refuses to have "civic pride" or similar community-spirited emotions, then this is not good news for reason. Pride in who we are as a community, pride in our distinctive characteristics, seems to be a necessity, in a cluster of people who aspire to do bet...
In short, because I think tribes are the natural environments in which humans live, and that ignoring that fact produces unhappy and dysfunctional humans.
I appreciate the effort, and I agree with most of the points made, but I think resurrect-LW projects are probably doomed unless we can get a proactive, responsive admin/moderation team. Nick Tarleton talked about this a bit last year:
"A tangential note on third-party technical contributions to LW (if that's a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating." (http://lesswrong.com/lw/n0l/lesswrong_20/cy8e)
That's obviously problematic, but I think it goes way beyond just contributing code. As far as I know, right now, there's no one person with both the technical and moral authority to:
Pretty much any successful subreddit, even small...
a proactive, responsive admin/moderation team
Which needs to be backed up by a responsive tech support team. Without that backing, the moderators are only able to do the following:
1) remove individual comments; and
2) ban individual users.
It seems like a lot of power, but when you deal with someone like Eugine, for example, it is completely useless. All you can do is play whack-a-mole with banning his obvious sockpuppet accounts. You can't even revert the downvotes made by those accounts. You can't detect the sockpuppets that don't post comments (but are used to upvote the comments made by the active sockpuppets, which then quickly use their karma to mod-bomb the users Eugine doesn't like). So all you can do is delete the mod-bombing accounts after the damage is done. What's the point? It will cost Eugine about 10 seconds to create a new one.
(And then Eugine will post some paranoid rant about how you have some super shady moderator powers, and a few local useful idiots will go like "yeah, maybe the mods are too poweful, we need to stop them", and you keep banging your head against the wall in frustration, wishing you actually had a fraction of thos...
Was including tech support under "admin/moderation" - obviously, the ability to e.g. IP-ban people is important (along with access to the code and the database generally). Sorry for any confusion.
That's okay, I just posted to explain the details, to prevent people from inventing solutions that predictably couldn't change anything, such as: appoint new or more moderators. (I am not saying more help wouldn't be welcome, it's just that without better access to data, they also couldn't achieve much.)
Wow, that is a pretty big issue. Thank you for mentioning this.
Agree with all your points. Personally, I would much rather post on a site where moderation is too powerful and moderators err towards being too opinionated, for issues like this one. Most people don't realize just how much work it is to moderate a site, or how much effort is needed to make it anywhere close to useful.
What's the minimum set of powers (besides ability to kick a user off the site) that would make being a Moderator non-frustrating? One-off feature requests as part of a "restart LW" focus seem easier than trying to guarantee tech support responsiveness.
When I was doing the job, I would have appreciated having an anonymized offline copy of the database; specifically the structure of votes.
Anonymized to protect me from my own biases: replacing the user handles with random identifiers, so that I would first have to make a decision "user xyz123 is abusing the voting mechanism" or "user xyz123 is a sockpuppet for user abc789", describe my case to other mods, and only after getting their agreement I would learn who the "user xyz123" actually is.
(But of course, getting the database without anonymization -- if that would be faster -- would be equally good; I could just anonymize it after I get it.)
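A minimal sketch of that anonymization step, assuming (purely for illustration) that vote records arrive as (voter, target, direction) triples:

```python
import secrets

def anonymize_votes(votes):
    """Replace user handles with stable random identifiers; the
    handle-to-pseudonym mapping is kept aside, to be consulted only
    after the mods agree a case has been made."""
    mapping = {}

    def pseudonym(handle):
        if handle not in mapping:
            mapping[handle] = "user_" + secrets.token_hex(3)
        return mapping[handle]

    anonymized = [(pseudonym(voter), pseudonym(target), direction)
                  for voter, target, direction in votes]
    return anonymized, mapping

votes = [("alice", "bob", -1), ("alice", "carol", -1), ("dave", "bob", +1)]
anon, mapping = anonymize_votes(votes)
# `anon` preserves the voting structure for analysis; `mapping` stays sealed.
```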
Offline so that I could freely run there any computations I imagine, without increasing bills for hosting. Also, to have it faster, not be limited by internet bandwidth, and to be free to use any programming language.
What specific computations would I run there? Well, that's kinda the point that I don't know in advance. I would try different heuristics, and see what works. Also, I suspect there would have to be some level of "security by obscurity", to avoid Eugine adjusting to my algorithms. (For example...
It is actually not obvious to me that we gain by having upvotes/downvotes be private (rather than having it visible to readers who upvoted or downvoted which post, as on Facebook). But I haven't thought about it much.
If upvotes/downvotes are public, some people are going to reward/punish those who upvoted/downvoted them.
It can happen without full awareness... the user will simply notice that X upvotes them often and Y downvotes them often... they will start liking X and disliking Y... they will start getting pleasant feelings when looking at comments written by X ("my friend is writing here, I feel good") and unpleasant feelings when looking at comments written by Y ("oh no, my nemesis again")... and that will be reflected by how they vote.
And this is the charitable explanation. Some people will do this with full awareness, happy that they provide incentives for others to upvote them, and deterrence to those who downvote. -- Humans are like this.
Even if the behavior described above did not happen, people would still instinctively expect it to happen, so it would still have a chilling effect. -- On the other hand, some people might enjoy publicly downvoting e.g. Eliezer, to get contrarian points. Either way, different forms of signalling would get involved.
From the view of game theory, if some people would have a reputation to be magnanimous about downvotes, and other peop...
It's not actually obvious to me that downvotes are even especially useful. I understand what purpose they're supposed to serve, but I'm not sure they actually serve it.
It seems like if we removed them, a major tool available to trolls is just gone.
I think downvoting is also fairly punishing for newcomers - I've heard a few people mention they avoided Less Wrong due to worry about downvoting.
Good vs. bad posts could be discerned just by looking at total likes, the way it is on Facebook. Actual spam could just be reported rather than downvoted, which triggers mod attention but has no visible effect.
Alternatively, go with the Hacker News model of only enabling downvotes after you've accumulated a large amount of karma (enough to put you in, say, the top .5% of users). I think this gets most of the advantages of downvotes without the issues.
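A minimal sketch of that gating rule, assuming karma scores are available for all users; the 99.5th-percentile cutoff mirrors the "top .5%" figure above:

```python
def downvote_cutoff(all_karmas, percentile=0.995):
    """Karma a user needs before the downvote button is enabled."""
    ranked = sorted(all_karmas)
    return ranked[int(percentile * (len(ranked) - 1))]

def can_downvote(user_karma, all_karmas):
    return user_karma >= downvote_cutoff(all_karmas)

karmas = list(range(1000))                              # toy karma ladder
print(can_downvote(user_karma=998, all_karmas=karmas))  # True
print(can_downvote(user_karma=500, all_karmas=karmas))  # False
```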
I agree. In addition to the numerous good ideas suggested in this tree, we could also try the short term solution of turning off all downvoting for the next 3 months. This might well increase population.
(Or similar variants like turning off 'comment score below threshold' hiding, etc)
Good vs. bad posts could be discerned just by looking at total likes, the way it is on Facebook.
Preferably also sorted by the number of total likes. Otherwise the only difference between a comment with 1 upvote and 15 upvotes is a single character on screen that requires some attention to even notice.
Actual spam could just be reported rather than downvoted
There are some kinds of behavior which in my opinion should be actively discouraged besides spam: stubborn stupidity, or verbal aggressiveness towards other debaters. It would be nice to have a mechanism to do something about them, preferably without getting moderators involved. But maybe those could also be flagged, and maybe moderators should have a way to attach a warning to the comment without removing it completely. (I imagine red text saying "this comment is unnecessarily rude", which would also effectively halve the number of likes for the purpose of comment sorting.)
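A minimal sketch of how such a warning could feed into sorting, assuming comments carry a likes count and an optional moderator warning:

```python
comments = [
    {"id": 1, "likes": 14, "warning": None},
    {"id": 2, "likes": 20, "warning": "this comment is unnecessarily rude"},
]

def effective_score(comment):
    """Count a warned comment's likes at half weight for sorting."""
    weight = 0.5 if comment["warning"] else 1.0
    return comment["likes"] * weight

comments.sort(key=effective_score, reverse=True)
# Comment 2 now sorts as 10.0, below comment 1's 14.0, despite more raw likes.
```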
I think that upvotes/downvotes being private has important psychological effects. If you can get a sense of who your "fans" vs "enemies" are, you will inevitably try to play to your "fans" and develop dislike for your "enemies." I think this is the primary thing that makes social media bad.
My current cutoff for what counts as a "social media" site (I have resolved to never use social media again) is "is there a like mechanic where I can see who liked me?" If votes on LW were public, by that rule, I'd have to quit.
the tech support doesn't give a fuck, and will cite privacy concerns when you ask them for more direct access to the database
Seriously, who are these tech support people? Clearly this database belongs to the owner of Less Wrong (whoever that is). As far as I can tell, when moderators ask for data, they ask on behalf of the owners of that data. What is going on here? Has tech support gone rogue? Why do they then get their contract renewed? Are they taking orders from some secret deep owners of LW who outrank the moderators?
Seriously, who are these tech support people? Clearly this database belongs to the owner of Less Wrong (whoever that is). As far as I can tell, when moderators ask for data, they ask on behalf of the owners of that data. What is going on here? Has tech support gone rogue? ...Why do they then get their contract renewed?
The tech support is Trike Apps, who have freely donated a huge amount of programmer time toward building and maintaining LessWrong.
Yeah, it's a bit of "don't look a gift horse in the mouth" situation. When someone donates a lot of time and money to you, and suddenly becomes evasive or stubborn about some issue that is critical to be solved properly... what are you going to do? It's not like you can threaten to fire them, right?
In hindsight, I made a few big mistakes there. I didn't call Eliezer to have an open debate about what exactly is and isn't within my competence; that is, in case of differing opinions about what should be done, who really has the last word. Instead I gave up too soon: when one of my ideas was rejected I tried to find an alternative solution, only to have it rejected again... or I'd finally succeed at something, and then see that Eugine had improved his game, and now I was going to have another round of negotiation... until I gradually developed a huge "ugh field" around the whole topic... and wasted a lot of time... and then other people took over the role and had to start from the beginning again.
I strongly agree with this sentiment, and currently Arbital's course is to address this problem. I realize there have been several discussions on LW about bringing LW back / doing LW 2.0, and Arbital has often come up. Up until two weeks ago we were focusing on "Arbital as the platform for intuitive math explanations", but that proved to be harder to scale than we thought. We now pivoted to a more discussion-oriented truth-seeking north star, which was our long-term goal all along. We are going to need innovation and experimentation both on the software and the community levels, but I'm looking forward to the challenge. :)
I am extremely excited about this. I suspect we should proceed trying to reboot Less Wrong, without waiting, while also attempting to aid Arbital in any ways that can help (test users, etc.).
If half-hearted attempts are doomed (plausible), or more generally we're operating in a region where expected returns on invested effort are superlinear (plausible), then it might be best to commit hard to projects (>1 full-time programmer) sequentially.
Successful conversations usually happen as a result of selection circumstances that make it more likely that interesting people participate. Early LessWrong was interesting because of the posts, then there was a phase when many were still learning, and so were motivated to participate, to tutor one another, and to post more. But most don't want to stay in school forever, so activity faded, and the steady stream of new readers has different characteristics.
It's possible to maintain a high quality blog roll, or an edited stream of posts. But with comments, the problem is that there are too many of them, and bad comments start bad conversations that should be prevented rather than stopped, thus pre-moderation, which slows things down. Controlling their quality individually would require a lot of moderators, who must themselves be assessed for quality of their moderation decisions, which is not always revealed by the moderators' own posts. It would also require the absence of drama around moderation decisions, which might be even harder. Unfortunately, many of these natural steps have bad side effects or are hard to manage, so should be avoided when possible. I expect the problem can b...
Quick note: Having finally gotten used to using discussion as the primary forum, I totally missed this post as a "promoted" post and would not have seen it if it hadn't been linked on Facebook, ironically enough.
I realize this was an important post that deserved to be promoted in any objective sense, but I am not sure promoting things is the best way to do that at this point.
I think this is completely correct, and have been thinking along similar lines lately.
The way I would describe the problem is that truth-tracking is simply not the default in conversation: people have a lot of other goals, such as signaling alliances, managing status games, and so on. Thus, you need substantial effort to develop a conversational place where truth tracking actually is the norm.
The two main things I see Less Wrong (or another forum) needing to succeed at this are good intellectual content and active moderation. The need for good content seems fairly self-explanatory. Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this.)
I'll try to post more content here too, and would be happy to volunteer to moderate if people feel that's useful/needed.
Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this.)
This seems right to me. It seems to me that "moderation" in this sense is perhaps better phrased as "active enforcement of community norms of good discourse", not necessarily by folks with admin privileges as such. Also simply explicating what norms are expected, or hashing out in common what norms there should be. (E.g., perhaps there should be a norm of posting all "arguments you want the community to be aware of" to Less Wrong or another central place, and of keeping up with all highly upvoted / promoted / otherwise "single point of coordination-marked" posts to LW.)
I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow "more important". In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on less wrong" seems to me to be actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.
I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow "more important". In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on less wrong" seems to me to be actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.
Participating in online discussions tends to reduce one's attention span. There's the variable reinforcement factor. There's also the fact that a person who comes to a discussion earlier gets more visibility. This incentivizes checking for new discussions frequently. (These two factors exacerbate one another.)
These effects are so strong that if I stay away from the internet for a few days ("internet fast"), my attention span increases dramatically. And if I've posted comments online yesterday, it's hard for me to focus today--there's always something in the back of my mind that wants to check & see if anyone's responded. I need to refrain from making new comments for several days before I can really focus.
Regarding the idea that online discussion hurts attention span and productivity, I agree for the reasons you say. The book Deep Work (my review) talks more about it. I'm not too familiar with the actual research, but my mind seems to recall that the research supports this idea. Time Well Spent is a movement that deals with this topic and has some good content/resources.
I think it's important to separate internet time from non-internet time. The author talks about this in Deep Work. He recommends that internet time be scheduled in advance, that way you're not internetting mindlessly out of impulse. If willpower is an issue, try Self Control, or going somewhere without internet. I sometimes find it useful to lock my phone in the mailbox downstairs.
I'm no expert, but suspect that LW could do a better job designing for Time Well Spent.
Thinking about this more, I think that moderator status matters more than specific moderator privilege. Without one or more people like this, it's pretty difficult to actually converge on new norms. I could make some posts suggesting new norms for e.g. posting to main vs. discussion, but without someone taking an ownership role in the site there's no way to cause that to happen.
One idea that I had, that I still think is good, is essentially something like the Sunshine Regiment. The minimal elements are:
A bat-signal where you can flag a comment for attention by someone in the Sunshine Regiment.
That shows up in an inbox of everyone in the SR until one of them clicks an "I've got this" button.
The person who took on the post writes an explanation of how they could have written the post better / more in line with community norms.
The basic idea here is that lots of people have the ability to stage these interventions / do these corrections, but (a) it's draining and not the sort of thing that a lot of people want to do more than X times a month, and (b) not the sort of thing low-status but norm-acclimated members of the community feel comfortable doing unless they'r...
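A minimal sketch of the flag-and-claim flow described above; the names and in-memory storage are illustrative assumptions, not a real design:

```python
class SunshineQueue:
    def __init__(self, members):
        self.members = set(members)
        self.open_flags = {}   # comment_id -> who flagged it
        self.claims = {}       # comment_id -> SR member handling it

    def flag(self, comment_id, flagger):
        """Bat-signal: put the comment in every SR member's inbox."""
        if comment_id not in self.claims:
            self.open_flags[comment_id] = flagger

    def inbox(self, member):
        """Unclaimed flags are visible to every SR member."""
        return list(self.open_flags) if member in self.members else []

    def claim(self, comment_id, member):
        """'I've got this': removes the flag from everyone else's inbox."""
        if member in self.members and comment_id in self.open_flags:
            del self.open_flags[comment_id]
            self.claims[comment_id] = member

sr = SunshineQueue(members=["alice", "bob"])
sr.flag(comment_id=42, flagger="carol")
sr.claim(comment_id=42, member="alice")  # now absent from bob's inbox
```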
I have serious doubts about the basic claim that "the rationalist community" is so smart and wise and on to good stuff compared to everyone else that it should focus on reading and talking to each other at the expense of reading others and participating in other conversations. There are obviously cultish in-group favoring biases pushing this way, and I'd want strong evidence before I attributed this push to anything else.
I don't think that a reboot/revival of LW necessarily has to consist entirely of the people who were in the community before. If we produce good stuff, we can attract new people. A totally new site with new branding might get rid of some of the negative baggage of the past, but is also less likely to get off the ground in the first place. Making use of what already exists is the conservative choice.
I hear you as saying that people here should focus on learning rather than leadership. I think both are valuable, but that there's a lack of leadership online, and my intuition is to trust "forward momentum", carrying something forward even if I do not think I am optimally qualified. He who hesitates is lost, etc.
I see Anna making the same complaint that you yourself have made a few times: namely, that most online discussions are structured in a way that makes the accumulation of knowledge difficult. (My explanation: no one has an incentive to fix this.)
Is the fact that economists mostly cite each other evidence of "cultish in-group favoring biases"? Probably to some degree. But this hasn't fatally wounded economics.
"It is dangerous to be half a rationalist."
It is dangerous to half-arse this and every other attempt at recovering lesswrong (again).
I take into account the comments before mine, which accurately mention several reasons for the problems on LW.
The codebase is not that bad. I know how many people have looked at it; and it's reasonably easy to fix it. I even know how to fix it; but I am personally without the coding skill to implement the specific changes. We are without volunteers willing to make changes; and without funds to pay someone to do them. Trust me. I collated all comments on all of the several times we have tried to collate ideas. We are unfortunately busy people. Working on other goals and other projects.
I think you are wrong about the need for a single Schelling point and I submit as evidence: Crony Beliefs. We have a mesh network where valuable articles do get around. Lesswrong is very much visited by many (as evidenced by the comments on this post). When individuals judge information worthy; it makes its way around the network and is added to our history.
A year from now; crony beliefs may not be easy to find on lesswrong because it was never explicit...
I think you are wrong about the need for a single Schelling point and I submit as evidence: Crony Beliefs. We have a mesh network where valuable articles do get around. Lesswrong is very much visited by many (as evidenced by the comments on this post). When individuals judge information worthy; it makes its way around the network and is added to our history.
So: this is subtle. But to my mind, the main issue isn't that ideas won't mostly-percolate. (Yes, lots of folks seem to be referring to Crony Beliefs. Yes, Moloch. Yes, etc.) It's rather that there isn't a process for: creating common knowledge that an idea has percolated; having people feel empowered to author a reply to an idea (e.g., pointing out an apparent error in its arguments) while having faith that if their argument is clear and correct, others will force the original author to eventually reply; creating a common core of people who have a common core of arguments/analysis/evidence they can take for granted (as with Eliezer's Sequences); etc.
I'm not sure how to fully explicitly model it. But it's not mostly about the odds that a given post will spread (let's call that probability "p"). It's more abou...
I don't think you can say both
The codebase is not that bad.
and
I am personally without the coding skill [...]
If I don't have the skills to fix a codebase, I'm pretty handicapped in assessing it. I might still manage to spot some bad things, but I'm in no shape to pronounce it good, or "not that bad".
It's true that articles pass around the rationalist network, and if you happen to be in it, you're likely to see some such articles. But if you have something that you'd specifically want the rationalist community to see, and you're not already in the network, it's very hard.
Some time back, I had a friend ask me how to promote their book which they thought might be of interest to the rationalist community. My answer was basically "you could start out by posting about it on LW, but not that many people read LW anymore so after that I can help you out by leveraging my position in the community". If they didn't know me, or another insider, they'd have a lot harder time even figuring out what they needed to do.
"The rationalist network" is composed of a large number of people and sites, scattered over Tumblr blogs, Facebook groups and profiles, various individual blogs, and so on. If you want to speak to the whole network, you can't just make a post on LW anymore. Instead you need to spend time to figure out who the right people are, get to know them, and hope that you either get into the inner circle, or that enough insiders agree with your message and take up sprea...
I'm disappointed that Elo's comment hasn't gotten more upvotes
I think it's got rather a lot of upvotes. It's also got rather a lot of downvotes. I suspect they are almost all from the same person.
Sarah Constantin, Ben Hoffman, Valentine Smith, and various others have recently mentioned planning to do the same.
Prediction: If they do, we will see a substantial pickup in discussion here. If they don't, we won't.
People go where the content is. The diaspora left LW a ghost town not because nobody liked LW but because all the best content -- which is ever and always created by a relatively small number of people -- went elsewhere. I read SSC, and post on SSC, not because it is better than LW (it's not, its interface makes me want to hit babies with concrete blocks) but because that's where Yvain writes. LW's train wreck of a technical state is not as much of a handicap as it seems.
I like LW-ish content, so I approve of this effort -- but it will only work to the extent that the Royals return.
Thanks for addressing what I think is one of the central issues for the future of the rationalist community.
I agree that we would be in a much better situation if rationalist discussion were centralized, and that we are instead in a tragedy of the commons - more people would post here if they knew that others would. However, I contend that we're further from that desired equilibrium than you acknowledge. Until we fix the following problems, our efforts to attract writers will be pushing uphill against a strong incentive gradient:
The incentive that pushes in our fav...
Thoughts on RyanCarey's problems list, point by point:
Until we fix the following problems, our efforts to attract writers will be pushing uphill against a strong incentive gradient:
Not sure all of them are "problems", exactly. I agree that incentive gradients matter, though.
Comments on the specific "problems":
1. Posts on LessWrong are far less aesthetically pleasing than is now possible with modern web design, such as on Medium. The design is also slightly worse than on the EA Forum and SSC.
Insofar as 1 is true, it seems like a genuine and simple bug that is probably worth fixing. Matt Graves is, I believe, the person to talk to if one has ideas or $ to contribute to this. (Or the Arbital crew, insofar as they're taking suggestions.)
2. Posts on LessWrong are much less likely to get shared / go viral than posts on Medium and so have lower expected views. [snip]
The extent to which this is a bug depends on the extent to which posts are aimed at "going viral" / getting shared. If our aim is intellectual generativity, then we do want to attract the best minds of the internet to come think with us, and that does require sometimes having posts g...
(ii) seems good, and worth adding more hands and voices to; it seems to me we can do it in a distributed fashion, and just start adding to LW and going for momentum, though.
sarahconstantin and some others have in fact been doing something like (ii), which was, I suspect, a partial cause of e.g. this post of mine, and of:
By paulchristiano:
By Benquo:
By sarahconstantin:
Efforts to add to (ii) would I think be extremely welcome; it is a good idea, and I may do more of it as well.
If anyone reading has a desire to revitalize LW, reading some of these or other posts and adding a substantive (or appreciative) comment is another way to encourage thoughtful posting.
I also support (ii) and have been trying to recruit more good bloggers.
I'll note that good writers tend to be low on "civic virtue" -- creative work tends to cut against that as a motivation. I'm still trying to think of good ways to smooth the incentive gradient for writers.
One possibility is to get some people to spend a weekend together -- rent a place in Big Sur or something -- and brainstorm/hype up some LW-specific ideas together, which will be posted in real time.
This is a nontrivial cost. I'm considering it myself, and am noticing that I'm a bit put off, given that some of my (loyal and reflective) readers/commenters are people who don't like LW, and it feels premature to drag them here until I can promise them a better environment. Plus, it adds an extra barrier (creating an account) to commenting, which might frequently lead to no outside comments at all.
A lighter-weight version of this (for now), might be just linking to discussion on LW, without disabling blog comments.
There are lots of diverse opinions here, but you are not going to get anywhere just by talking. I recommend you do the following:
To say it in a different way: success or failure depends much more on building and empowering a small group of dedicated individuals, than on getting buy-in from a large diffuse group of participants.
Brian Tomasik's article Why I Prefer Public Conversations is relevant to
I suspect that most of the value generation from having a single shared conversational locus is not captured by the individual generating the value (I suspect there is much distributed value from having "a conversation" with better structural integrity / more coherence, but that the value created thereby is pretty distributed). Insofar as there are "externalized benefits" to be had by blogging/commenting/reading from a common platform, it may make sense to regard oneself as exercising civic virtue by doing so, and to deliberately do so as one of the uses of one's "make the world better" effort. (At least if we can build up toward in fact having a single locus.)
I might have missed it, but reading through the comment thread here I don't see prominent links to past discussions. There's LessWrong 2.0 by Vaniver last year, and, more recently, there is LessWrong use, successorship, and diaspora. Quoting from the section on rejoin conditions in the latter:
A significant fraction of people say they'd be interested in an improved version of the site. And of course there were write ins for conditions to rejoin, what did people say they'd need to rejoin the site?
(links to rejoin condition write-ins)
Feel free to read these yourselves (they're not long), but I'll go ahead and summarize: It's all about the content. Content, content, content. No amount of usability improvements, A/B testing or clever trickery will let you get around content. People are overwhelmingly clear about this; they need a reason to come to the site and right now they don't feel like they have one. That means priority number one for somebody trying to revitalize LessWrong is how you deal with this.
The impression I form based on this is that the main blocker to LessWrong revitalization is people writing sufficiently attractive posts. This seems to mostly agree with the emerging consensus in the comments, but the empirical backing from the survey is nice. Also, it's good to know that software or interface improvements aren't a big blocker.
As for what's blocking content creators from contributing to LessWrong, here are a few hypotheses that don't seem to have been given as much attention as I'd like:
I compiled some previous discussion here, but the troll downvoted it below visibility (he's been very active in this thread).
Crazy idea to address point #2: What if posts were made anonymously by default, and only became nonymous once they were upvoted past a certain threshold? This lets you take credit if your post is well-received while lessening the punishment if your post is poorly received.
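A minimal sketch of the reveal rule; the threshold is an arbitrary illustration:

```python
REVEAL_THRESHOLD = 10  # illustrative; the real cutoff would need tuning

def display_author(post):
    """Attribute the post only once it clears the karma threshold."""
    return post["author"] if post["score"] >= REVEAL_THRESHOLD else "anonymous"

print(display_author({"author": "alice", "score": 3}))   # anonymous
print(display_author({"author": "alice", "score": 12}))  # alice
```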
It is well known that the best way to get teh internets to explain things to you is not to ask for an explanation, but to make a confident though erroneous claim.
I've noticed you using this strategy in the past. It makes me frustrated with you, but I want to uphold LW's norms of politeness in conversation, so I grit my teeth through the frustration and politely explain why you're wrong. This drains my energy and makes me significantly less enthusiastic about using LW.
Please stop.
It could also be a good way for the Internets to give up on trying to talk in a forum where you are around.
Because we're talking about the quality of discussion on LW and how to encourage people to post more good stuff. Whether or not you're OK with people ignoring your trollishness, trollishness lowers the quality of discussion and discourages people from posting. If you persist at it, you are choosing personal gain (whether provocation or learning stuff) over communally beneficial norms. And you're not "lil' ol' me" when you're in the top 5 of commenters month in and month out.
"Feel free to ignore me" IS sealioning, because when people react to you in a way you didn't want (for example, they get angry or frustrated) you accept no blame or responsibility for it. The first comment I got to a post about empathy and altruism was you telling me that my recommendation leads to kulaks, ghettos and witch burning (I'm being uncharitable, but not untruthful). If I am then discouraged from posting new stuff, will you say that it's entirely my fault for being too sensitive and not ignoring you?
I know that there have been several attempts at reviving Less Wrong in the past, but these haven't succeeded because a site needs content to succeed and generating high quality content is both extremely hard and extremely time intensive.
I agree with Alexandros that Eliezer's ghost is holding this site back - you need to talk to Eliezer and ask if he would be willing to transfer control of this site to CFAR. What we need at the moment is clear leadership, a vision and resources to rebuild the site.
If you produced a compelling vision of what Less Wrong should become, I believe that there would be people willing to chip in to make this happen.
EDIT: The fact that this got promoted to main seems to indicate that there is a higher probability of this working than previous attempts at starting this discussion.
I agree with your comments on small intellectually generative circles and wonder if the optimal size there might not be substantially smaller than LW. It's my sense that LW has been good for dissemination, but most of the generation of thoughts has been done in smaller IRL circles. A set of people more selected for the ability and will to focus on the problem you describe in 1-3, if gathered in some internet space outside LW, might be able to be a lot more effective.
I think we need to put our money and investment where our mouths are on this. Either Less Wrong (or another centralized discussion platform) is very valuable, and worth tens of thousands of dollars in investment and moderation, or it is not that important and not worth it. It seems that every time we have a conversation about Less Wrong and its importance, the problem is that we expect everyone to do things on a volunteer basis and that things will just magically get going again. It seems like Less Wrong was going great back when there was active and constant investment in it by MIRI and CFAR, and once that investment stopped, things collapsed.
Otherwise we are just in a situation like that of Jaguar with the cupholders, where everyone is posting on forums for 10 years about how we need cupholders, but there is no one whose actual, paid job is to get cupholders in the cars.
I think I disagree with your conclusion here, although I'd agree with something in its vicinity.
One of the strengths of a larger community is the potential to explore multiple areas in moderate amounts of depth. We want to be able to have detailed conversations on each of: e.g. good epistemic habits; implications of AI; distributions of cost-effectiveness; personal productivity; technical AI safety; ...
It asks too much for everyone to keep up with each of these conversations, particularly when each of them can spawn many detailed sub-conversations. But if they're all located in the same place, it's hard to browse through to find the parts that you're actually trying to keep up with.
So I think that we want two things:
For the first, I find myself thinking back to days of sub-forums on bulletin boards (lack of nested comments obviously a big problem there). That way you could have the different loci gathered together. For the second, I suspect careful curation is actually the right way to identify this content, but I'm not sure what the best way to set up infrastructure for this is.
It was good of you to write this post out of a sense of civic virtue, Anna. I'd like to share a few thoughts on the incentives of potential content creators.
Most humans, and most of us, appreciate being associated with prestigious groups and receiving praise. However, when people speak about LessWrong being dead, or LessWrong having been taken over by new folks, or LessWrong simply not being fun, this socially implies that the people saying these things hold LessWrong posters in low esteem. You could reasonably expect that replacing these sorts of remarks with discourse that affirmed the worth of LessWrong posters would incentivize more collaboration on this site.
I'm not sure if this implies that we should shift to a platform that doesn't have the taint of "LessWrong is dead" associated with it. Maybe we'll be ok if a selection of contributors who are highly regarded in the community begin or resume posting on the site. Or, perhaps this implies that the content creators who come to whatever locus of discussion is chosen should be praised for being virtuous by contributing directly to a central hub of knowledge. I'm sure that you all can think of even better ideas along these lines.
Here's an opinion on this that I haven't seen voiced yet:
I have trouble being excited about the 'rationalist community' because it turns out it's actually the "AI doomsday cult", and never seems to get very far away from that.
As a person who thinks we have far bigger fish to fry than impending existential AI risk - like problems with how irrational most people everywhere (including us) are, or how divorced rationality is from our political discussions / collective decision making progress, or how climate change or war might destroy our relatively...
Being a member of this community seems to require buying into the AI-thing, and I don't, so I don't feel like a member.
I don't think it's true that you need to buy into the AI-thing to be a member of the community, and so I think the fact that it seems that way is a problem.
But I think you do need to be able to buy into the non-weirdness of caring about the AI-thing, and that we may need to be somewhat explicit about the difference between those two things.
[This isn't specific to AI; I think this holds for lots of positions. Cryonics is probably an easy one to point at that disproportionately many LWers endorse but is seen as deeply weird by society at large.]
As someone who is actively doing something in this direction at Map and Territory, a couple thoughts.
A single source is weak in several ways. In particular, although it may sound nice and convenient from the inside, no major movement that affects a significant portion of the population has a single source. It may have its seed in a single source, but it is spread and diffuse and made up of thousands of voices saying different things. There's no one place to go for social justice or neoreaction or anything else, but there are lots of voices saying lots of thi...
100% centralization is obviously not correct, but 100% decentralization seems to have major flaws as well -- for example, it makes discovery, onboarding, and progress in discussion a lot harder.
On the last point: I think the LW community has discovered ways to have better conversations, such as tabooing words. Being able to talk to someone who has the same set of prerequisites allows for much faster, much more interesting conversation, at least on certain topics. The lack of any centralization means that we're not building up a set of prerequisites, so we're stuck at conversation level 2 when we need to achieve level 10.
We have lately ceased to have a "single conversation" in this way.
Can we hope to address this without understanding why it happened?
What are y'all's theories of why it happened?
There has been lots of discussion of this. This is probably at least the tenth thread on why/how to fix LW.
http://lesswrong.com/lw/kbc/meta_the_decline_of_discussion_now_with_charts/
http://lesswrong.com/r/discussion/lw/nf2/lesswrong_potential_changes/
http://lesswrong.com/lw/n0l/lesswrong_20/
http://lesswrong.com/lw/n9b/upcoming_lw_changes/
https://wiki.lesswrong.com/index.php?title=Less_Wrong_2016_strategy_proposal
http://lesswrong.com/lw/nkw/2016_lesswrong_diaspora_survey_results/
http://lesswrong.com/lw/mbd/lesswrong_effective_altruism_forum_and_slate_star/
http://lesswrong.com/lw/mcv/effectively_less_altruistically_wrong_codex/
http://lesswrong.com/lw/m7g/open_thread_may_18_may_24_2015/cdfe
http://lesswrong.com/lw/kzf/should_people_be_writing_more_or_fewer_lw_posts/
http://lesswrong.com/lw/not/revitalizing_less_wrong_seems_like_a_lost_purpose/
http://lesswrong.com/lw/np2/revitalising_less_wrong_is_not_a_lost_purpose/
http://lesswrong.com/lw/o7b/downvotes_temporarily_disabled/
http://lesswrong.com/lw/oho/thoughts_on_operation_make_less_wrong_the_single/
(These are just the ones I recall, and they don't include all the posts Eugene generated or the discussion in Slack.)
One thought that occurs to me re: why this discussion tends to fail, and why Less Wrong has trouble getting things done in general, is the forum structure. On lots of forums, contributing to a thread will cause the thread to be "bumped", which gives it additional visibility. This means if a topic is one that many people are interested in, you can have a sustained discussion that does not need to continually be restarted from scratch. Which creates the possibility of planning out and executing a project. (I imagine the linear structure of an old school forum thread is also better for building up knowledge, because you can assume that the person reading your post has already read all the previous posts in the thread.)
A downside of the "bump" mechanic is that controversial threads which attract a lot of comments will receive more attention than they deserve. So perhaps an explicit "sticky" mechanic is better. (Has anyone ever seen a forum where users could vote on what posts to "sticky"?)
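To make the contrast concrete, here is a minimal sketch of the two ordering rules (the field names and the threshold are hypothetical, just for illustration): "bump" ordering sorts by last activity, while a voted-sticky mechanic pins threads that enough users have flagged.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Thread:
    title: str
    last_comment_at: datetime  # updated whenever someone replies ("bumps" the thread)
    sticky_votes: int = 0      # hypothetical: users vote to pin a thread

def bump_order(threads: List[Thread]) -> List[Thread]:
    """Classic forum ordering: any reply bumps a thread to the top.
    Sustained projects stay visible -- but so do flame wars."""
    return sorted(threads, key=lambda t: t.last_comment_at, reverse=True)

def sticky_order(threads: List[Thread], pin_threshold: int = 10) -> List[Thread]:
    """Voted-sticky ordering: threads with enough sticky votes are pinned
    above everything else; the rest fall back to bump ordering."""
    pinned = [t for t in threads if t.sticky_votes >= pin_threshold]
    rest = [t for t in threads if t.sticky_votes < pin_threshold]
    by_activity = lambda t: t.last_comment_at
    return (sorted(pinned, key=by_activity, reverse=True)
            + sorted(rest, key=by_activity, reverse=True))
```

The pin_threshold is where the judgment call lives: set it too low and controversial threads get pinned by their own commenters; too high and nothing ever sticks.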
#1: the general move of the internet away from blogs and forums and towards social media.
In particular, there seems to be a mental move that people make, that I've seen people write about quite frequently, of wanting to avoid the more "official"-seeming forms of online discussion, and towards more informal places. From blogging to FB, from FB to Tumblr and Twitter, and thence to Snapchat and other stuff I'm too old for. Basically, people say that they're intimidated to talk on the more official, public channels. I get a sense of people feeling hassled by unfriendly commenters, and also a sense of something like "kids wanting to hang out where the grownups aren't", except that the "kids" here are often adults themselves. A sense that you'll be judged if you do your honest best to write what you actually believe, in front of people who might critique it, and so that it's safer to do something that leaves you less exposed, like sharing memes.
I think the "hide, go in the darkness, do things that you can't do by daylight" Dionysian kind of impulse is not totally irrational (a lot of people do have judgmental employers or families) but it's really counterproductive to discourse, which is inherently an Apollonian, daylight kind of activity.
You can already do this. If you click on a user's profile, there will be a little box in the top right corner. Click on the button that says "add to friends" there. When you "friend" someone on LessWrong, it just means you follow them. If you go to www.lesswrong.com/r/friends, there's a feed with submissions from only the other users you're following.
Specifically, I think that LW declined from its peak by losing its top bloggers to new projects. Eliezer went to do AI research full-time at MIRI, Anna started running CFAR, various others started to work on those two organizations or others (I went to work at MetaMed). There was a sudden exodus of talent, which reduced posting frequency, and took the wind out of the sails.
One trend I dislike is that highly competent people invariably stop hanging out with the less-high-status, less-accomplished, often younger, members of their group. VIPs have a strong temptation to retreat to a "VIP island" -- which leaves everyone else short of role models and stars, and ultimately kills communities. (I'm genuinely not accusing anybody of nefarious behavior, I'm just noting a normal human pattern.) Like -- obviously it's not fair to reward competence with extra burdens, I'm not that much of a collectivist. But I think that potentially human group dynamics won't work without something like "community-spiritedness" -- there are benefits to having a community of hundreds or thousands, for instance, that you cannot accrue if you only give your time and attention to your ten best friends.
As for why this is a problem for LW specifically, I would probably point at age. The full explanation is too long for this comment, and so may become a post, but the basic idea is that 'career consolidation' is a developmental task that comes before 'generativity', or focusing mostly on shepherding the next generation, which comes before 'guardianship', or focusing mostly on preserving the important pieces of the past.
The community seems to have mostly contracted because people took the correct step of focusing on the next stage of their development, but because there hadn't been enough people who had finished previous stages of their development, we didn't have enough guardians. We may be able to build more directly, but it might only work the long way.
To expand on what sarahconstantin said, there's a lot more this community could be doing to neutralize status differences. I personally find it extremely intimidating and alienating that some community members are elevated to near godlike status (to the point where, at times, I simply cannot read, e.g., SSC or anything by Eliezer -- I'm very, very celebrity-averse).
I've often fantasized about a LW-like community blog that was entirely anonymous (or nearly so), so that ideas could be considered without being influenced by people's perceptions of their originators (if we could solve the moderation/trolling problem, that is, to prevent it from becoming just another 4chan). A step in the right direction that might be a bit easier to implement would be to revamp the karma system so that the number of points conferred by each up or down vote was inversely proportional to the number of points that the author of the post/comment in question had already accrued.
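A minimal sketch of that inverse-weighting idea (the constants and the smoothing term are my own hypothetical choices, not an existing LW mechanism; strict inverse proportionality would divide by zero for brand-new authors, hence the +1):

```python
def vote_weight(author_karma: float, base: float = 10.0, scale: float = 100.0) -> float:
    """Points conferred by a single up/down vote, shrinking as the author's
    accumulated karma grows (hypothetical formula and constants)."""
    return base / (1.0 + max(author_karma, 0.0) / scale)

# A vote on a brand-new poster (karma 0) confers 10 points;
# the same vote on someone with 1000 karma confers ~0.9 points.
assert abs(vote_weight(0) - 10.0) < 1e-9
assert abs(vote_weight(1000) - 10.0 / 11.0) < 1e-9
```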
The thing is, in the absence of something like what I just described, I'm skeptical that it would be possible to prevent the conversation from quickly becoming centered around a few VIPs, with everyone else limited to commenting on those individuals' posts or interacting with their own small circles of friends.
One interesting thing is that at one point post-Eliezer, there were two "rising stars" on LW who were regularly producing lots of fascinating content: lukeprog and So8res. Both stopped regularly posting here some time after they were recruited by MIRI and their priorities shifted.
This feels like a good start but one that needs significant improvement too.
For instance, I'm wondering how much of the situation Anna laments is a result of LW lacking an explicit editorial policy. I for one never quite felt sure what was or wasn't relevant for LW -- what had a shot at being promoted -- and the few posts I wrote here had a tentative aspect to them because of this. I can't yet articulate why I stopped posting, but it may have had something to do with my writing a bunch of substantive posts that were never promoted to Main.
If you look at the home page only (recent articles in Main) you could draw the inference that the main topics on LessWrong are MIRI, CFAR, FHI, "the LessWrong community", with a side dish of AI safety and startup founder psychology. This doesn't feel aligned with "refining the art of human rationality", it makes LessWrong feel like more of a corporate blog.
Agree that a lot more clarity would help.
Assuming Viliam's comment on the troll is accurate, that's probably sufficient to explain the decline: http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/di2n
I disagree with #1 and #2, and I don't identify as a rationalist (or for that matter, much as a member of any community), but I think it is true that Less Wrong has been abandoned without being replaced by anything equally good, and that is a sad thing. In that sense I would be happy to see attempts to revive it.
I definitely disagree with the comment that SSC has a better layout, however; I think people moved there because there were no upvotes and downvotes. The layout for comments there is awful, and it has a very limited number of levels, which after a few comments prevents you from responding directly to anything.
Eh, one thing I've noticed about SSC is a number of deeply bad comments, which I don't think I've seen on LW. Yes, there are also good comments, but I can imagine someone five years ago looking at the state of SSC commenting now and saying "and this is why we need to ban politics" instead of seeing it as a positive change.
SSC linked to this LW post (here: http://slatestarcodex.com/2016/12/06/links-1216-site-makes-right/ ). I suspect it might be of some use to you if I explain my reasons why I'm interested in reading and commenting on SSC but not very much on LW.
First of all, the blog interface is confusing, more so than regular blogs or sub-reddits or blog-link-aggregators.
Also, to use LW terminology, I have a pretty negative prior on LW. (Others might say that LW doesn't have a very good brand.) I'm still not convinced that AI risk is very important (nor that decision theory is g...
I'm up for doing this, because I think you're right; I notice that commenting/posting on LessWrong has less draw for me than it did in 2011/2012, but it's also much less intimidating, which seems useful.
I oversee a list of Facebook groups, so if there's any way I can help support this, please let me know: https://www.facebook.com/EffectiveGroups/
Here are some intuitions I have:
It will be really hard to work against the network effects and ease of Facebook, but I think its social role should be emphasised instead. Likewise for the EA Forum, though maybe it can take on a specific role, like being more friendly to new people / more of a place to share information and make announcements.
If you position LW as setting the gold standard of conversation
I realize I haven't given a direct answer yet, so here it is: I'm in, if I'm wanted, and if some of the changes discussed here take place. (What it would take to get me onboard is, at the least, an explicit editorial policy and people in charge of enforcing it.)
Others have made these points, but here are my top comments:
An interesting discussion on HN -- not about LW but about Reddit -- which still offers useful commentary about what HN people expect from a "conversational locus".
Given the community's initial heavy interest in the heuristics & biases research, I am amused that there is no explicit mention of the sunk cost fallacy. Seriously, watch out for that.
My opinion is that revitalizing the community is very likely to fail, and I am neutral on whether it's worth it for current prominent rationalists to try anyway. A lot of people are suggesting restoring the website with a more centralized structure. It should be obvious the result won't work the same as the old Less Wrong.
Finally, a reminder on Less Wrong history, which sugge...
My 2 cents. We are not at a stage where we can have a useful singular discussion. We need to collect evidence about how agents can or cannot be implemented before we can start to have a single useful discussion. Each world view needs its own space.
My space is currently my own head and I'll be testing my ideas against the world, rather than other people in discussion. If they hold up I'll come back here.
I've known about Less Wrong for about two full years. A few weeks ago I started coming here regularly. A week ago I made an account -- right before this post and others like it.
My own poetic feeling is that there is a change in the winds, and the demand for a good community is growing. SSC has no real community. Facebook is falling apart with fake news and awful political memes. People are losing control of their emotions w.r.t. politics. And calm scientific rationalist approaches are falling apart.
I deactivated my FB, made an account here, and have done my best...
I am working on a project with a similar purpose, and I think you will find it interesting:
It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.
If you find it interesting and can offer some feedback, I would really appreciate it!
I think the Less Wrong website diminished in popularity because of the local meetups. Face-to-face conversation beats online conversation for most practical purposes. But many Less Wrongers have transitioned to being parents, or have found more professional success, so I'm not sure how well the meetups are going now. Plus some of the meetups ban members rather than rationally explaining why they are not welcome in the group. This is a horrible tactic and causes members to limit how they express themselves... which goes against the whole purpose of rationality meetups.
The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.
Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle. This sounds like hubris, but it is at this point at least partially a matter of track record.[1]
To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better. [2]
One feature that is pretty helpful here is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another. By "a conversation", I mean a space where people can (e.g.) reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; and point out apparent errors and then have that pointing-out be actually taken into account or else replied to.
One feature that really helps things be "a conversation" in this way is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read. Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.
We have lately ceased to have a "single conversation" in this way. Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such. There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence. Without such a locus, it is hard for conversation to build in the correct way. (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)