In our WEIRD culture, unilateral is probably better. But it also reinforces that culture, and I have my qualms with it. I think we're choosing rabbit in a stag hunt. You're essentially advocating for rabbit (which may or may not be a good thing).
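For concreteness, here's a minimal sketch of the stag-hunt payoff structure being gestured at (the numbers are hypothetical and purely illustrative): hunting stag together pays best, but rabbit is the safe choice when you can't count on the other player.

```python
# Hypothetical stag-hunt payoffs: (my choice, their choice) -> my payoff.
# Stag together beats rabbit, but hunting stag alone pays nothing.
PAYOFFS = {
    ("stag", "stag"): 4,
    ("stag", "rabbit"): 0,
    ("rabbit", "stag"): 2,
    ("rabbit", "rabbit"): 2,
}

def best_response(their_choice: str) -> str:
    """My payoff-maximizing choice, given what the other player does."""
    return max(("stag", "rabbit"), key=lambda mine: PAYOFFS[(mine, their_choice)])

print(best_response("stag"))    # "stag"   - coordinating on stag is the better equilibrium
print(best_response("rabbit"))  # "rabbit" - but rabbit is the safe unilateral play
```

Both (stag, stag) and (rabbit, rabbit) are equilibria; the whole question is whether you can trust the coordination.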
In a highly individualistic environment you can't work things out *as a community* because there aren't any proper coherent communities, and people aren't going to sync their highly asynchronous lives with yours.
In a highly collectivist environment you can work things out alone, but it's not as effective as moving in a coordinated fashion because you actually do have that strictly superior option available to you.
I believe the latter (coordinated action) has more upside potential, was the default in our ancestral environment, and has the ability to resolve equilibria of defection. The former (unilateral action) is more robust because it's resistant to entropic decay, scales beyond Dunbar's number, and doesn't rely on good coordinators.
So I would say "unilateral or GTFO" is a bit too cynical. I'd say "be aware of which options (unilateral or coordinated) are available to you". In a low-trust corporate environment it's certainly unilateral. In a high-trust community it is probably coordinated, and let's keep it that way.
This does sound nice in theory - organically aligning the incentives instead of exerting control by passing laws or using external punishment/reward systems - but in reality you end up dealing with a lot of chameleon leeches: ones who mimic your TrustyCar startup with their own SureDrive startup that games the review system, scams the buyers, and then disappears. After a short time it becomes impossible to tell who the honest one is, since every player is incentivised to signal their honesty, and so no one can be trusted. Eliezer talked about this in Inadequate Equilibria. Still, the strategy of aligning the incentives and reducing control is definitely worth keeping in mind, and it is important to consciously budget for any deviations from it.
Chameleon leeches are a small problem - consumers routinely pay attention to the size and longevity of the seller for durable goods like cars. It may be difficult to gain trust initially, but if this actually works it'll go far. The bigger problem is that you're taking on liability for something that YOUR vendors don't stand behind. You're buying used cars at auction based on whatever minimal inspection you get, and selling them with a deeper warranty than any existing seller offers.
But those object-level failures are actually SUCCESSES of the main point: Unilateral or GTFO!
The way to discover whether something is workable is NOT to implement it by force of law, but to just try it with resources you control. If it doesn't work, you've learned a valuable lesson about your beliefs. If it does work, you've been personally successful and have a solid base to start thinking about how to scale your insight to the rest of humanity.
Skin in the game, liability for failure, recognition of risks - all are terms for what's missing in the vast majority of social media discussions (including LessWrong) about how to fix an apparent current failing of societal equilibria.
Context: The first post gave a long list of examples of just-let-go design: a problem-solving approach/aesthetic based on giving up direct control over the system. The second post talked about giving up control as a visible signal of understanding. In this post, we get to the main advantage of just-let-go design.
Suppose I’m lying in bed one day thinking about the problem of dishonest car-sellers. How can I get car-sellers to be honest about problems with their car?
I know! We need to pass a law which makes it a criminal act to lie about a car one is selling, so dishonest car-sellers get jail time.
Note the subtle shift from “I” to “we”. Even setting aside the likely ineffectiveness of such a law, passing laws is not within the space of things “I” can do over a weekend. “Laws we should pass” is great for facebook-filler, but not so great for practical ideas which I could personally implement.
What if we come at the problem from a minimum-control angle? We want to prevent car-seller dishonesty, while exerting as little control as possible.
Well, how about we disincentivize the seller from lying? That’s easy: we just need a contract which gives the seller some kind of liability for problems. This isn’t a complete solution yet - the details of that liability and its enforcement matter - but that’s tractable. The next question is implementation: having drafted such a contract, how do I get people to use it?
That’s a much easier problem than passing a law.
Just off the top of my head, I could create a startup called TrustyCar through which people buy and sell cars, and the main selling point is that the seller has some kind of liability for problems. Trustworthy sellers can obtain higher prices for their cars by selling through TrustyCar, and buyers can obtain cars which they know are reliable. People have an incentive to start using it; nobody needs to force them. Indeed, I could charge them to use TrustyCar; their incentives still line up even if I collect a small fee.
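To make "their incentives still line up even if I collect a small fee" concrete, here is a toy calculation (a sketch only - the numbers, the fee structure, and the function name are all hypothetical):

```python
def seller_net(base_price, trust_premium, defect_prob, liability_payout, fee_rate):
    """Expected proceeds for a seller listing through a TrustyCar-style broker."""
    gross = base_price + trust_premium           # buyers pay more for a backed car
    expected_liability = defect_prob * liability_payout
    fee = fee_rate * gross                       # the broker's cut
    return gross - expected_liability - fee

# Honest seller (low defect probability): nets ~10,630 vs. 10,000 selling unassisted.
print(seller_net(base_price=10_000, trust_premium=1_000,
                 defect_prob=0.05, liability_payout=3_000, fee_rate=0.02))

# Dishonest seller (high defect probability): nets ~8,980, so they stay away.
print(seller_net(base_price=10_000, trust_premium=1_000,
                 defect_prob=0.60, liability_payout=3_000, fee_rate=0.02))
```

The point is just that the trust premium needs to exceed the fee plus the seller's expected liability - which is exactly the condition that selects for honest sellers.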
In short: having found a minimum-control solution, I can implement it unilaterally. Passing laws is a thing “we” do, but brokering car sales is a thing “I” can do.
That’s a natural property of minimum-control solutions to problems, especially in the economic arena. The whole point is that we want a solution which doesn’t require controlling anyone else - therefore we can implement it ourselves. Who “we” is will depend on the context, on who I’m solving the problem for - it could be me personally, a team, a company - but whoever “we” are, we should be able to implement a minimum-control solution unilaterally.
As with the car example, the difference between unilateral and non-unilateral is the difference between things which could plausibly happen if I make an effort, and things which probably won’t happen any time soon. I use this as a heuristic at work: if a project requires buy-in from someone not in the room, then add at least one week to the timeline (and a full month if the missing person has a project queue, as is the case for most software engineers). A project which cannot be executed unilaterally by a small group will not happen soon, if it happens at all. Unilateral or GTFO.