Over a year ago, I wrote about Public Archipelago and why it seemed important for LessWrong. Since then, nothing much has come of it, and it seems important to acknowledge that. There are more things I think are worth trying, but I've updated somewhat toward the problem being intractable, or my frame on it being wrong.
The core problems Public Archipelago was aiming to solve are:
- By default, public spaces and discussions force conversation and progress to happen at the lowest common denominator.
- This results in high-effort projects happening in private by default, where it is harder for others to learn from them.
- The people doing high-effort projects have lots of internal context, which is hard to communicate and get people up to speed on in a public setting. Internally, they can talk about it easily, so that ends up being what they do by default.
- Long-term, this kills the engine by which intellectual growth happens. It's what killed old LessWrong – all the interesting projects were happening in private (usually in-person) spaces, which meant that:
- newcomers couldn't latch onto them and learn about them incidentally
- at least some important concepts didn't enter the intellectual commons, where they could actually be critiqued or built upon
The solution was a world of spaces that were public, but with barriers to entry and/or the ability to kick people out. People could easily have the high-context conversations they wanted, while newcomers could slowly orient around those conversations, and others could either critique those ideas in their own posts or build off them.
Since last year, very few of my hopes have materialized.
(I think LessWrong in general has done okay, but not great, and Public-Archipelago-esque things in particular have not happened, and there's continued to be interesting discussion in private areas that not everyone is privy to)
I think the only thing that came close was some discussion of AI Alignment topics, which benefited from being technical enough to have an automatic barrier to entry, and which shaped the conversation in a way that made it harder to drag into Overton Window fights.
The core problem is that maintaining a high-context space requires a collection of skills that few people have, and even if they do, it requires effort to maintain.
The moderation tools we built last year still require a lot of active effort on the part of individual users; that effort is intrinsically somewhat aversive (telling people to go away is a hard skill and comes with social risks); and they require people to have ideas interesting enough in the first place to build a high-context conversation around.
The current implementation requires all three of those skills in a single person.
There are a few alternate implementations that could work, but each requires a fair amount of dev work, and meanwhile we have other projects that seem higher priority. Some examples:
- People have asked for subreddits for a while. Before we build them, we want to make sure they're designed such that good ideas are expected to "bubble up" to the top of LessWrong, rather than stay in nested filters forever.
- Opt-in rather than opt-out moderation (i.e. an author might maintain a list of collaborators, and only those collaborators can comment on their posts, rather than maintaining a ban list). This is basically how Facebook and Google Docs work; see the sketch after this list.
- I had some vague ideas for "freelance moderators". We currently give authors with 2000 karma the ability to delete comments and ban users, but this is rarely used, because it requires someone who is both willing to moderate and able to write well. Splitting those into two separate roles could be useful.
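To make the opt-in idea concrete, here is a minimal sketch of the difference between the two permission models. Everything here (the `CommentPolicy` types, `canComment`, the field names) is a hypothetical illustration, not LessWrong's actual code or API.

```typescript
// Hypothetical sketch: opt-out (ban list) vs. opt-in (collaborator list)
// comment permissions. Names and data shapes are illustrative only.

type UserId = string;

interface OptOutPolicy {
  kind: "optOut";
  bannedUserIds: Set<UserId>;   // author must notice and ban each bad actor
}

interface OptInPolicy {
  kind: "optIn";
  collaboratorIds: Set<UserId>; // only invited users may comment
}

type CommentPolicy = OptOutPolicy | OptInPolicy;

function canComment(policy: CommentPolicy, commenterId: UserId, authorId: UserId): boolean {
  if (commenterId === authorId) return true; // authors can always comment on their own posts
  if (policy.kind === "optOut") {
    return !policy.bannedUserIds.has(commenterId); // allowed unless explicitly banned
  }
  return policy.collaboratorIds.has(commenterId);  // allowed only if explicitly invited
}

// Example: an opt-in post where only two invited collaborators can reply.
const policy: CommentPolicy = { kind: "optIn", collaboratorIds: new Set(["alice", "bob"]) };
console.log(canComment(policy, "alice", "author"));  // true
console.log(canComment(policy, "carol", "author"));  // false
```

The design point is that the opt-in version makes exclusion the default, so the author never has to take the socially costly step of kicking anyone out after the fact.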
I'm most optimistic about the second option.
I think subreddits will be a useful tool that I expect to build sooner or later, but they won't accomplish most of the thing. Most of what I'm excited about isn't topic-based subreddits, but high-context conversations with a nuanced flavor that doesn't neatly map onto the sorts of topics subreddits tend to have. Plus, subreddits still mean someone has to do the work of policing the border, which is the biggest pain point of the entire process.
If I were to try the second option and it still didn't result in the kinds of outcomes I'm looking for, I'd update away from Public Archipelago being a viable frame for intellectual discourse.
(I do think the second option still requires a bit of effort to get right – it's important that the process be seamless, easy, and a salient option to people. So it'll probably still be a while before I have the bandwidth to push for it.)
As someone who was inspired by your post from a year ago, and who was thinking of contributing to LessWrong as a public archipelago, here are some things that stopped me from contributing much. Maybe other people have these things in common with me, which would explain why they wanted to contribute over the last year but failed to.
1. There is less interest in the rationality community in the things I would want to write about on LessWrong, or the community is actively uninterested in them, which demotivates me from posting on LW. I am in private group chats and closed Facebook groups largely populated by members of the rationalist diaspora. These discussions don't take place on LessWrong, not only because relatively few people there might participate, but because they cover subjects the rationality community is seen as hostile, indifferent, or uninterested in, such as many branches of philosophy. This discourages those discussions on the public archipelago. I expect there are a lot of people who don't post on LessWrong because they share this kind of perception. It's possible to find people with whom to have private discussions, but having them on a public archipelago on LW, if it were possible to satisfy everyone, would be easier and better from my viewpoint.
2. One particular worry I and others have is that, as more and more things become politicized in mainstream culture, more and more types of conversation on LW will be discouraged as 'politically mindkilling.' I personally don't know what to expect the norms here to be, though I am not as worried as others, because I don't see it as much of a loss if there are fewer half-baked speculations on political subjects online. Still, the fear that the list of subjects discouraged as too overtly 'political' could grow endlessly is discouraging.
3. The number of people on LessWrong interested in the subjects I am interested in is too small to motivate me to write more. I haven't explored this much, and I think I have been too lazy in not trying. Still, a decent quantity of feedback, of sufficiently engaging and deep quality, seems to me like what would motivate me to participate more on LW. One possibility is getting people I find who are not currently part of the rationality community, or typical LW users, to read my posts on LW and build something new out of it. I think this is fine to talk about, and I really agree with the shift since LW2.0 to develop LW as its own thing, still working with but distinct and independent from MIRI and AI alignment, CFAR, and the rationality community. So carving out new online spaces on LW, which can perhaps be especially tailored given how much control I have over my own posts as a user, is something I am still open to trying.