Epistemic Status: My best guess. I don't know if this will work but it seems like the obvious experiment to try more of.
Epistemic Effort: Spent several months thinking casually, 25ish minutes consolidating earlier memories and concerns, and maybe 10ish minutes thinking about potential predictions. See comment.
Building off:
- Open Problems in Group Rationality [Conor Moreton]
- Archipelago and Atomic Communitarianism [Scott Alexander]
Claim 1 - If you are dissatisfied with the norms/standards in a vaguely defined community, a good first step is to refactor that community into sub-groups with clearly defined goals and leadership.
Claim 2 - People have different goals, and you may be wrong about what norms are important even given a certain goal. So, also consider proactively cooperating with other people forming alternate subgroups out of the same parent group, with the goal of learning from each other.
Refactoring Into Subcommunities
Building groups that accomplish anything is hard. Building groups that prioritize independent thinking to solve novel problems is harder. But when faced with a hard problem, a useful technique is to refactor it into something simpler.
In "Open Problems in Group Rationality", Conor lists several common tensions. I include them here for reference (although any combination of difficult group rationality problems would suffice to motivate this post).
- Buy-in and retention.
- Defection and discontent.
- Safety versus standards.
- Productivity versus relevance.
- Sovereignty versus cooperation.
- Moloch and the problem of distributed moral action.
These problems don't go away when you have clearly defined goals. A corporation with a clearcut mission and strategy (i.e. maximize profit by selling widgets) still has to navigate the balance of "hold employees to high standards to increase performance" and "make sure employees feel safe enough to do good work without getting wracked with anxiety" (or just quitting).
Such a corporation might make different tradeoffs in different situations - if there's a labor surplus, they might be less worried about employees quitting because they can just find more. If the job involves creative knowledge work, anxiety might have greater costs to productivity. Or maybe they're not just profit-maximizing: maybe the CEO cares about employee mental health for its own sake.
But well-defined goals, with leaders who can enforce them, at least make it possible to figure out what tradeoffs to make and actually make them.
Whereas if you live in a loosely defined community where people show up and leave whenever they want, and nobody can even precisely agree on what the community is, you'll have a lot more trouble.
People who care a lot about, say, personal sovereignty, will constantly push for norms that maximize freedom. People who care about cooperation will push for norms encouraging everyone to work harder and be more reliable, at personal freedom's expense.
Maybe one group can win - possibly by persuading everyone they are right, or simply by being more numerous.
But,
A) You probably can't win every cultural battle.
B) Even if you could, you'd spend a lot of time and energy fighting that might be better spent actually accomplishing whatever these norms are actually for.
So if you can manage to avoid infighting while still accomplishing your goals, all things being equal, that's preferable.
Considering Archipelago
Once this thought occurred to me, I was immediately reminded of Scott Alexander's Archipelago concept. A quick recap:
Imagine a bunch of factions fighting for political control over a country. They've agreed upon the strict principle of harm (no physically hurting or stealing from each other). But they still disagree on things like "does pornography harm people", "do cigarette ads harm people", "does homosexuality harm the institution of marriage which in turn harms people?", "does soda harm people", etc.
And this is bad not just because everyone wastes all this time fighting over norms, but because the nature of their disagreement incentivizes them to fight over what harm even is.
And this in turn incentivizes them to fight over both definitions of words (distracting and time-wasting) and over what counts as evidence or good reasoning, through a politically motivated lens. (Which makes it harder to ever use evidence and reasoning to resolve issues, even uncontroversial ones.)
Then...

Imagine someone discovers an archipelago of empty islands. And instead of continuing to fight, the people who want to live in Sciencetopia go off to found an island-state based on ideal scientific processes, and the people who want to live in Libertopia go off and found a society based on the strict principle of harm, and the people who want to live in Christiantopia go found a fundamentalist Christian commune.
They agree on an overarching set of rules, paying some taxes to a central authority that handles things like "dumping pollutants into the oceans/air that would affect other islands" and "making sure children are well educated enough to have the opportunity to understand why they might consider moving to other islands."
Practical Applications
There are a bunch of reasons the Archipelago concept doesn't work as well in practice. There are no magical empty islands we can just take over. Leaving a place if you're unhappy is harder than it sounds. Resolving the "think of the children" issue will be very contentious.
But, we don't need perfect-idealized-archipelago to make use of the general concept. We don't even need a broad critical mass of change.
You, personally, could just do something with it, right now.
If you have an event you're running, or an online space that you control, or an organization you run, you can set the norms. Rather than opting-by-default into the generic average norms of your peers, you can say "This is a space specifically for X. If you want to participate, you will need to hold yourself to Y particular standard."
Some features and considerations:
You Can Test More Interesting Ideas. If a hundred people have to agree on something, you'll only get to try things that you can get 50+ people on board with (due to crowd inertia, regardless of whether you have a formal democracy).
But maybe you can get 10 people to try a more extreme experiment. (And if you share knowledge, both about experiments that work and ones that don't, you can build the overall body of community-knowledge in your social world)
I would rather have a world where 100 people try 10 different experiments, even if I disagree with most of those experiments and wouldn't want to participate in them myself.
You Can Simplify the Problem and Isolate Experimental Variables. "Good" science tests a single variable at a time so you can learn more about what-causes-what.
In practice, if you're building an organization, you may not have time to do "proper science" - you may need to get a group working ASAP, and you may need to test a few ideas at once to have a chance at success.
But, all things being equal it's still convenient to isolate factors as much as possible. One benefit to refactoring a community into smaller pieces is you can pick more specific goals. Instead of reinventing every single wheel at once, pick a few specific axes you're trying to learn about.
This will both make the problem easier and make it easier to learn from.
You Can 'Timeshare Islands'. Maybe you don't have an entire space that you can control. But maybe you and some friends have a shared space (say, a weekly meetup).
Instead of having the meetup be a generic thing catering to the lowest common denominator of members, you can collectively agree to use it for experiments (at least sometimes). Make it easier for one person to say 'Okay, this week I'd like to run an activity that'll require different norms than we're used to. Please come prepared for things to be a bit different.'
This comes with some complications - one of the benefits of a recurring event is people roughly know what to expect, so it may not be good to do this all the time. But generally, giving the person running a given event the authority to try some different norms out can get you some of the benefits of the Archipelago concept.
You Can Start With Just One Meetup
Viliam in the comments made a note I wanted to include here:
It is important to notice that the "island" doesn't have to be fully built from start. "Let's start a new subgroup" sounds scary; too much responsibility and possibly not enough status. "Let's have one meeting where we try the norm X and see how it works" sounds much easier; and if it works, people would be more willing to have another meeting like that, possibly leading to the creation of a new community.
Making It Through the 'Unpleasant Valley' of Group Experimentation
I think this graph was underappreciated in its original post. When people try new things (a new diet or exercise program, studying a new skill, etc.), the new thing involves effort and challenges that in some ways make it seem worse than whatever their default behavior was.

Some experiments are just duds. But oftentimes, when you're in the Unpleasant Valley, it feels like the experiment will turn out to be a dud, when in fact you just haven't stuck with it long enough for it to bear fruit.
This is hard enough for solo experiments. For group experiments, where not just one but many people must all try a thing at once and get good at it, all it takes is a little defection to spiral into a mass exodus.
Refactoring communities into smaller groups with clear subgoals can make it possible for a group to make it through the Unpleasant Valley together.
Overlapping Social Spheres
Sharing Islands and Cross-Pollination
In the end, I don't think "Islands" is quite the right metaphor here. One of the things that makes social archipelago different from the canonical example is that the islands overlap. People may be a member of multiple groups and sub-groups.
A benefit of this is cross-pollination - it's easier to share information and grow if you have people who exist in multiple subcultures (sub-subcultures?) and can translate ideas between them.
How much benefit this yields depends on how mindfully people are approaching the concept, and how much of their ideas they are sharing (making both the object-level-idea and the underlying reasons accessible to others).
This post is primarily intended as reference - I have more specific ideas on what kinds of communities I want to participate in, and thoughts on "underexplored social niches" that I think others might consider experimenting with. Some of those thoughts will be on the LessWrong front page, others on my private profile or the Meta section.
But meanwhile, I hope to see more groups of people in my filter bubble self organizing, carving out spaces to try novel concepts.
Thoughts about additional ramifications of this (not optimized much for readability).
Background on Epistemic Effort:
I'm of the belief that if I'm proposing a major idea that I'm hoping people will take action on, I should think seriously about the idea for... N minutes. N varies. But the key is to look into the dark, accounting for positive bias: What are the ways an idea might not succeed? What consequences might it have that didn't fit as prettily into the narrative I was crafting?
In my personal experience, this takes something like 30 minutes at least. In my original Epistemic Effort post I suggested 5 minutes, but I've found 15 minutes is barely enough to finish searching through existing thoughts already in the back of my mind. 30 minutes is how long it takes to get started trying to think multiple steps into the future and generate novel concerns.
This process is somehow very different from the process that generated the original blogpost.
I've noticed that now that I know it takes at least 30 minutes, I'm a lot more hesitant to even try taking 5. (I almost went ahead and posted this without doing so, just flagging "didn't think for 5 minutes about how it might fail" - but that felt embarrassing, and it seemed important enough that I went ahead and did it. Still, it might bode poorly for the idea.)
"What About the Babyeaters?"
A failure mode of canonical Archipelago-ism is altruism + "think of the children." If The Other Group is focused on something actively harmful, and you don't trust people to be able to leave, or you think that the harms take root in childhood before people even have a chance to choose a civilization for themselves (say, secondhand smoke in the privacy of people's own homes, or enforcing strict gender norms from early childhood)...
...then the "to each their own" concept advocated here doesn't sound as persuasive.
A version of this issue applies to Social Subculture Archipelago-ism. Even if there are no literal children, you may worry about pernicious, harmful ideas taking root - ideas you expect to be memetically successful even though they are dangerous.
It's very conceivable to me that the outcome of a "successful" Archipelago-ism taking root would be Bad Ideas Winning, with the whole thing ending up net negative.
My current take is that some kind of "Bad Ideas Being Successful" thing is likely to happen, but that the overall Net Harm/Good will be positive. I don't really have a justification for that. Just a feeling.
Observations of what's happened so far...
Competing Norms
During the Hufflepuff Unconference, I ran into issues of how norms collide. I wanted people to either firmly commit to coming, or to not come. This failed in a few ways:
This last issue eventually resulted in my changing the rules to "please respond with an explicit estimate of how likely you were to come, and some of the decision-relevant things that might affect whether you come or not." I think this worked better.
I don't have evidence that this is that big a problem (I tried one experiment, it didn't work as well as I wanted, I came up with a solution. System working as intended). But it implies future issues that one might not foresee.
It's Hard To Make Spaces
I have attempted to create a few spaces (at different times, in different cities). And in general, it's harder to create a new space dedicated to a particular thing than I'd have thought (in particular, finding enough people who care about a thing to seriously try out novel norms). In New York, it was hard because there weren't that many people. In the Bay Area, it's been harder than I expected because although there ARE enough people to flesh out subcultures, those people have more things competing for their attention.
I (currently) expect to be able to make things happen, but it won't be as easy as hanging out a shingle. 45 people came to the Hufflepuff Unconference, but I spent 2 months and several blogposts hyping it up. (More recently, I tried to get an Epistemic Unconference happening that'd have a different set of norms, and I couldn't get a critical mass of people interested. I didn't try very hard - it's Solstice Season and I need to conserve my "hey everyone, let's all do an effortful thing!" energy for that. But this clarified the degree of difficulty I'd have attracting interest in things.)
I expect to have an easier-than-average time getting people interested in things, and for it to still require a couple of enthusiasm-driving blogposts per individual thing.
So with that in mind...
Predictions and Gears-Thinking
(IMO this is the hard part of a "think seriously for 30 minutes" exercise. This will be the most stream-of-consciousness-y of the sections.)
First, I guess my most obvious prediction is "doing this at all is harder than I was hoping, and barely anything happens."
Further predictions are sort of weird, since the act of saying some of them out loud might make them come true. (It occurs to me I could secretly make predictions and see if anything happens in a year. I may do that, but am not doing it yet.)
I notice that the default way my brain is attempting to generate thoughts here feels like the Social Modesty Module running.
The second thing my brain's doing is listing the things I hope will happen, and then seeing how my internal-surprise-o-meter feels about them.
The third thing, which I will actually record here, is listing things that seem like they might happen - things I want to happen or am afraid might happen - without recording my particular predictions yet, but at least getting the predictable-in-theory ideas out there:
How many people will actually attempt to start a subgroup or change norms at one they already control as a result of this blogpost?
How many people will end up involved with those subgroups?
How many groups will happen secretly or privately? How many public?
How many will stick with experiments past the Unpleasant Valley?
In a year, and in 5 years, how many people will feel that those subgroups were useful?
How many novel social norms will be developed?
How many times do I expect that I'll be surprised by something that happens as a result of this blogpost?
How many times do I expect that I'll be confused by something that happens as a result of this blogpost?
How many attempted social norms will clash in actively bad ways?
Will I end up regretting this blogpost? (Separate questions for "will I think it turned out not to work but was still the right thing to push for at the time" and "will I think, in principle, that I should have spent my time and social capital doing something else?")
Will people end up more socially isolated, less isolated, or neutral, as a result of this class of experiment?
(Huh, a result of this: "generate hypotheses you can test, without stressing about actually deciding on your predictions" was a surprisingly useful technique - I notice with several of the above that I have at least some intuitive sense of how they will play out, and with others, I notice I expect things to fail by default but immediately see ways to make them less failure-prone, if I choose to spend the time doing so.)