by Ruby

This is an internal document written for the LessWrong/Lightcone teams. I'm posting it as an "available by link only" post to share it in a limited way, because I haven't reviewed it for making sense to a broader audience, or thoroughly checked it for sensitive things.

 

With the Great Influx (called 10-year September or 5-year September by some) upon us, I think there’s a failure mode we could fall into that I hope to avoid. Some (not us) might frame the goal as simply not losing what we have: not letting the discourse quality drop below what we’ve got now.
 

That would not be ambitious enough. Even as we are trying not to slip backwards, I want us to also be trying to forge forwards. Perhaps the present challenges will even spur us to become stronger and improve the quality of LessWrong to unprecedented heights.

My evocative terms for these objectives, loosely, are the wall and the spire. We must build Great Walls to protect us against the coming hordes, perhaps many walls, even concentric ones with gates and chambers between them, letting through some who will benefit progress without letting us be overwhelmed.

But within the walls, we must construct a great spire that reaches unto the heavens, to elevate the discourse to novel heights. The wall is not a wall for its own sake, but to protect what we create within.

Of course, we work in the online realm with digital matter. We can effortlessly reuse the same components in both the Wall and the Spire, to stretch this metaphor a little. That is to say, I think the same activities might benefit both goals in some cases. In this way, we are more fortunate than the literal Earth-bound builders.


 

Problems

I have many solutions in mind, but to avoid getting anchored on them, I shall list a number of the challenges we face here. We shall need to triage among them.

“Lowered [Perceived] Quality” (“SNR”?)

I don’t want to lock in any precise notion or metric here, but something in the direction of people feeling that when they come to LessWrong now compared to in the past, there’s a lot more crappy stuff they don’t want to look at. This can be a problem even if in absolute terms there’s more great content than before.

So possibly this problem is “lower perceived/experienced quality of LessWrong”, and I expect it to be a dangerous quantity with bad feedback loops. If the experienced quality drops, some of the better people stop visiting/posting/commenting, the perceived quality drops further, and so on.

Another way of putting it: lower perceived quality can lead to lower absolute quality (in terms of the distribution and volume of quality content), which in turn leads to yet lower perceived quality.
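To make the loop concrete, here’s a toy simulation (a sketch with made-up numbers, not anything we’ve measured): strong contributors drift away while perceived quality sits below their bar, actual quality tracks who’s still contributing, and perceived quality lags actual quality.

```python
# Toy model of the feedback loop. Every number here is invented for
# illustration; nothing is measured from the actual site.

def simulate(perceived=0.70, baseline=0.65, steps=12):
    good_share = 0.70  # share of content coming from strong contributors
    history = []
    for _ in range(steps):
        # Strong contributors drift away while perceived quality is below their bar.
        if perceived < baseline:
            good_share *= 0.9
        # Actual quality is a mix of strong and weak contributions.
        actual = good_share * 0.9 + (1 - good_share) * 0.3
        # Perceived quality lags actual quality.
        perceived = 0.5 * perceived + 0.5 * actual
        history.append(round(perceived, 2))
    return history

print(simulate(perceived=0.60))  # starts below the bar: self-reinforcing decline
print(simulate(perceived=0.70))  # starts above the bar: stable
```

The point of the toy model is just that a one-off shock to perceived quality (say, from an influx) can be enough to start a slide that then continues on its own.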

 

Comment Visibility Problem

Related to the above, I think the comment visibility problem is one of the harder challenges we face. Bad posts can be made to appear less prominently on the frontpage (and, with some extra design, be hidden more from other places too, such as All Posts).

What to do with comments, though? Authors in particular tend to read many or most comments on their posts and strongly dislike bad ones. Bad commenters can also show up in otherwise good threads, derail them, and distract people. And so on; I don’t need to belabor the point.

I think we need to tackle this.


 

The Maintenance Burden

Until recently, daily maintenance (excluding the more time-consuming bug fixes) has mostly been about 1.5 person-hours (i.e., 30 min from each of the three of us). We’re noticing that that now isn’t enough, and it’s starting to feel like a lot and to be pretty distracting from our other main foci.

Somehow, I want to get this burden down. (Possible solutions in solutions section.)


 

Onboarding / New User Flow

All these new people. What happens to them when they join the site, either for passive reading or active participation?

A top goal is preventing bad new users from degrading things, but we also want a process that does onboard great new people into the conversation.


 

Relatedly, something I’ve been less keen on in the past but am warming to is the “Beacon of Sanity” role for LW, in particular a beacon of sanity around AI. I think we are going to be painted as somewhat “those doomer weirdos”, but LW will also be recognized as having a degree of expertise on AI, due to having been interested in the topic for many, many years and having lots of material about AI alignment. All that to say: people will be interested in what we have to say, and it might be worth thinking about their experience when they come to the site.
 

Answering the 101

I’ve previously argued with Ray that I really want LessWrong to be the place where frontier progress happens, not the place where the general public, people who are not going to be advancing the Alignment field, comes to get very basic questions about AI answered. The reason to not want the latter group is that it then becomes a challenge to ensure they don’t seep into the rest of the site and erode quality.

Without specifying a solution here, I do think we want some user flow for people showing up with 101 questions. Some of those people might proceed to become advanced, nuanced thinkers who can contribute; many won’t. All else equal, I’d like people to get their 101 questions answered, and we should figure out how that happens. If it’s something like referring them to Stampy, then we gotta build that. But this is a problem statement, not a solution.
 

Not losing the spirit of LessWrong to AI

You know what I’m talking about. The whispering of Rationality in your ear. The having Bayes in your bones. The kind of world-encompassing curiosity where you just want to understand what’s going on. The spirit of play that means you’re not just focused on the immediate work task. Being Ravenclaw. All of this is a wholly inadequate 20-second attempt at putting it onto the page, but you know what I mean by now. (LW team members do, that is; for others this hasn’t been an ongoing motif.)
 

So we have two big tasks:

  • Not lose the true spirit of LessWrong to all the AI content
  • Not let the quality of AI conversation deteriorate
    • (the other conversations too, but realistically people are going to amass for the AI stuff)
       

Culture

Very related to the previous point, we want to ensure LessWrong has a good culture, particularly a good epistemic culture with good norms. This also includes things like background knowledge that you can assume and build upon. I think as people come in, we’re going to need barriers to keep those who don’t yet have the culture from diluting things, and ways to get more people to have the culture.
 

Visibility into what’s going on

It’s clear that stuff is changing already, and we want to do stuff that changes how things are changing, but we really ought to have some good visibility into it. The graphs I’ve shared here plus our experience as site users and moderators are a start, but they’re not a robust dashboard we can keep looking at.

Also, since a big part of what we care about is experience, I think we need to be routinely and robustly doing user interviews. (Sorry, skipping ahead into solution space.)

I care about putting systems in place so we know where things are at. I suspect this matters more when things are shifting quickly and gut impressions will be slow to catch up.
 

Attention on the good content (and lack of attention on bad content)

This is very related to the perceived-quality item listed above. With any website that displays content (comments, posts, tags, etc.), there’s the challenge of ensuring that people see the stuff they want to see and ought to see.

I’m giving myself some points for noting years ago that one day this would be a much harder problem for us. If there are 10 new posts/day, it’s pretty feasible for users to look all of them over and find the ones they want to read. Once there are 20, 30, 50 posts/day, users aren’t going to look over all of them, and we need to do a lot more of the work of presenting them with the good stuff. (But also make sure other stuff gets seen at all and has a chance to be recommended as good.)
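For a sense of the kind of mechanism this implies (a sketch only; the formula and numbers below are invented for illustration, not LessWrong’s actual ranking), frontpage-style ordering typically trades karma off against age, with a small boost for fresh posts so they get a chance to be seen and evaluated at all:

```python
import time

def frontpage_score(karma: float, posted_at: float, now: float,
                    decay_hours: float = 24.0, fresh_bonus: float = 2.0) -> float:
    """Hypothetical time-decayed ranking: karma counts for less as a post
    ages, and very new posts get a small bonus so they aren't buried."""
    age_hours = (now - posted_at) / 3600
    bonus = fresh_bonus if age_hours < 6 else 0.0
    return (karma + bonus) / (1 + age_hours / decay_hours) ** 1.5

now = time.time()
print(frontpage_score(50, now - 48 * 3600, now))  # older post with high karma
print(frontpage_score(3, now - 1 * 3600, now))    # fresh post, little karma yet
```

The knobs here (decay rate, freshness bonus) are exactly the tension in the paragraph above: surface the good stuff without starving new posts of the attention they need to be recognized as good.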

Same for comment threads. As these get longer, we want to focus good people’s attention on the good content there.

And if we’re creating great wiki-tag content, how does it get surfaced in the right ways?

With the Great Influx afoot, the time has come to tackle this one too.

Figure out what to do with the Alignment Forum

The Alignment Forum is connected to LessWrong, and as the Great [AI] Influx happens and many people start hitting the Alignment Forum too, we need to figure out the relationship there. What’s going to happen with the Alignment Forum? How does it continue (or not?) to be connected to LessWrong?

Other

The above aren’t all the things we ought to be tackling; likely I’m missing some key ones too. Or there are alternative lenses on the same items that highlight important aspects I’m not getting at. Looking forward to others pointing out omissions.

Comments

"This is an internal document written for the LessWrong/Lightcone teams. I'm posting as "available by link only" post to share in a limited way, because I haven't reviewed the post for making sense to a broader audience, or thoroughly checked for sensitive things."

 

This post appears in search results and to people who have followed you on LW. I didn't read it, but you may want to take it down if this is unwanted enough.