One of my favourite aspects of Less Wrong is that we've created a community where it is easier for us to communicate with each other because we have a shared understanding of certain concepts. A large amount of high-quality content has been posted to the site since it started, so for this edition of Monthly Meta I think it would be a useful exercise to reflect on which ideas should become part of this common knowledge.

I'd suggest that this should work as follows: each post should contain one concept, with a short definition and an explanation of why you think it is important. Please don't submit your own ideas. I'll go first.


T.C. Chamberlin's "Method of Multiple Working Hypotheses", as discussed by Abram here, is pretty much a summary of LW epistemic rationality. The idea is that you should look at your data, your hypothesis, and the next best hypothesis that fits the data. Some applications:

Wason 2-4-6 task: if you receive information that 1-2-3 is okay and 2-4-6 is okay while 3-2-1 isn't, and your hypothesis is that increasing arithmetic progressions are okay, the next best hypothesis for the same data is that all increasing sequences are okay. That suggests the next experiment to try (see the sketch after this list of applications).

Hermione and Harry with the soda: if the soda vanishes when spilled on the robes, and your hypothesis is that the robes are magical, the next best hypothesis is that the soda is magical. That suggests the next experiment to try.

Einstein's arrogance: if you have a hypothesis and you've tried many next best hypotheses on the same data and found that they fit worse, you can be arrogant before seeing new data.

Witch trials: if the witch is scared of your questioning, and your hypothesis is that she's scared because she's guilty, the next best hypothesis is that she's scared of being killed. If your data doesn't favor one over the other, you have no business favoring one over the other.

Mysterious answers: if you don't know anything about science, and your hypothesis is that sugar is sweet because its molecule is triangular, the next best hypothesis is that the molecule is square-shaped. If your data doesn't favor one over the other, you have no business favoring one over the other.

Religion: if you don't see any miracles, and your hypothesis is that God is hiding, the next best hypothesis is that God doesn't exist.

And so on. It's interesting how many ideas this covers.
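
To make the method concrete, here is a minimal Python sketch of the Wason example. The rule encodings, data, and function names are my own illustration, not anything from Abram's post:

```python
# A minimal sketch of "multiple working hypotheses" on the Wason 2-4-6 task.
# The candidate rules and data encoding are illustrative only.

# Observed data: (sequence, was it accepted by the hidden rule?)
data = [
    ((1, 2, 3), True),
    ((2, 4, 6), True),
    ((3, 2, 1), False),
]

def increasing_arithmetic(seq):
    """Hypothesis A: increasing arithmetic progressions are okay."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    return all(d > 0 for d in diffs) and len(set(diffs)) == 1

def any_increasing(seq):
    """Hypothesis B (the next best one): any increasing sequence is okay."""
    return all(a < b for a, b in zip(seq, seq[1:]))

# Both hypotheses fit the data so far...
for name, rule in [("arithmetic", increasing_arithmetic), ("increasing", any_increasing)]:
    print(name, "fits the data:", all(rule(seq) == label for seq, label in data))

# ...so the informative experiment is a sequence on which they disagree.
for seq in [(2, 4, 6), (1, 2, 4), (5, 4, 3)]:
    if increasing_arithmetic(seq) != any_increasing(seq):
        print("try", seq, "- the hypotheses predict different answers")
```

Running it points at a sequence like (1, 2, 4): increasing but not an arithmetic progression, so the hidden rule's verdict on it separates the two hypotheses.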

Thanks for listing some of the applications! Maybe one of us will get around to a proper post just on this.

The closest thing I have to an outline of a post about this is here, but other parts of my rationality notes tagged with that "recurring theme: come up with alternatives" phrase are likely relevant.

Slack: Slack is one of the concepts that seems to have gained the most traction. Unfortunately, I don't think we have a clear definition yet. Zvi defined Slack as the absence of binding constraints on behaviour. I suggested that this would make it merely a synonym for freedom and that what his article actually seems to describe is "freedom provided by having spare resources", but Zvi wasn't happy with that and I don't know if he's settled on a precise definition yet.

I suspect that this idea is so popular because it gives people a word to describe the disadvantages of running a system near maximum capacity. Zvi explains how slack reduces stress, allows you to pursue opportunities that arise, lets you avoid bad trade-offs and enables long-term thinking. In another key article, Raemon explains how a lack of slack can lead to fragile design decisions and to poor decisions going unnoticed.
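
One standard way to make "running a system near maximum capacity" quantitative (my own illustration here, not something from either article) is the M/M/1 queue, where the average time a job spends in the system is 1/(mu - lam) and blows up as utilization approaches 1:

```python
# Sketch: average time in system for an M/M/1 queue, W = 1 / (mu - lam),
# where mu is the service rate and lam is the arrival rate.
# Utilization rho = lam / mu is how close to capacity the system runs.

mu = 1.0  # jobs served per unit time
for rho in [0.5, 0.8, 0.9, 0.95, 0.99]:
    lam = rho * mu
    wait = 1 / (mu - lam)
    print(f"utilization {rho:.2f}: average time in system = {wait:.0f}")
```

Going from 50% to 99% utilization multiplies the average wait fifty-fold, which is one precise sense in which the last scraps of slack are disproportionately expensive to give up.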

Yup, I was thinking about the best posts so far and Slack was the one that stuck out the most in my mind, both as a memorable concept and as something that has actually enabled a large number of conversations that wouldn't otherwise have happened.

I recently reread Zvi's Out to Get You, which I think might actually be more important than Slack (although they're fairly intertwined).

Whereas Slack communicates a concept that opens up discussion and primes you to think about a dimension of your life you might not have been thinking about, Out to Get You delves in detail into a particular class of thing that will eat your Slack if you don't stop it:

Some things are fundamentally Out to Get You.
They seek resources at your expense. Fees are hidden. Extra options are foisted upon you. Things are made intentionally worse, forcing you to pay to make it less worse. Least bad deals require careful search. Experiences are not as advertised. What you want is buried underneath stuff you don’t want. Everything is data to sell you something, rather than an opportunity to help you.
When you deal with Out to Get You, you know it in your gut. Your brain cannot relax. You look out for tricks and traps. Everything is a scheme.

It isn't a synonym for money. Lack of money doesn't lead to burnout, which was one of the examples in the linked article.

Lack of money does indeed lead to burnout. I hope you have not had the experience, but it is the way it is.

This is a great initiative; too bad there aren't more answers!

As for my contribution, two mythology-inspired personifications:

Moloch: A personification of the system, or rather of the mechanisms that perpetuate the system, even though nobody likes it. It's the avatar of the tragedy of the commons and the non-iterated prisoner's dilemma.

Ra: A personification of the malign establishment, of the attitude of idealizing vagueness and despising clarity.

The Establishment is primarily an upper-class phenomenon; it is more about social and moral legitimacy than mere wealth or raw power, and it is boringly evil — it produces respectable, normal, right-thinking, mild-mannered people who do things with very bad consequences.
The worship of Ra involves a preference for stockpiling money, accolades, awards, or other resources, beyond what you can meaningfully consume or make practical use of; a felt sense of wanting to attain that abstract radiance of “bestness”.

Feel free to add to those definitions if you think a crucial aspect is missing!

I'm actually not a fan of the term "Moloch". As far as I can tell, it seems to just be unnecessary jargon for the Race to the Bottom.

I still don't have a clear understanding of what exactly is meant by Ra. The article focuses mainly on institutions, but some of the examples seem to indicate that she thinks of it as extending outside that scope. It seems to be something along the lines of the worship of vague and unspecified goals, as though they were a specific pathway to ultimate desired success?

The important bit with Moloch is that humans have a bias to care a lot more about problems caused by malevolent agents (e.g. terrorism) than problems caused by game theory and coordination failure. Moloch is an attempt to give a name and face to an important enemy so that people take it appropriately seriously.
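
For anyone who hasn't seen the coordination-failure point spelled out, here is a minimal sketch of the one-shot prisoner's dilemma; the payoff numbers are conventional textbook values chosen purely for illustration:

```python
# One-shot prisoner's dilemma. Each entry maps (my move, their move)
# to (my payoff, their payoff); the numbers are standard illustrative values.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}

# Whatever the other player does, defecting pays me strictly more...
for their_move in ["cooperate", "defect"]:
    if_cooperate = payoffs[("cooperate", their_move)][0]
    if_defect = payoffs[("defect", their_move)][0]
    print(f"they {their_move}: cooperate -> {if_cooperate}, defect -> {if_defect}")

# ...so both players defect and get (1, 1) even though (3, 3) was available.
# No malevolent agent anywhere; the bad outcome is built into the incentives.
```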

My current sense of Ra is something like goodharting / cargo culting on indicators of prestige and respectability.
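
A toy numerical version of that goodharting intuition (my own sketch, not from the Ra essay): when you select whatever scores highest on a noisy proxy for quality, the winner's proxy score systematically overstates its true quality.

```python
import random

# Toy Goodhart illustration: each candidate has a true quality, but we only
# observe a noisy proxy (think prestige indicators). Optimizing the proxy
# preferentially picks candidates whose noise happens to flatter them.

random.seed(0)
candidates = []
for _ in range(1000):
    true_quality = random.gauss(0, 1)
    proxy = true_quality + random.gauss(0, 1)  # prestige = quality + noise
    candidates.append((proxy, true_quality))

proxy_score, true_quality = max(candidates)  # select on the measure, not the target
print(f"winner's proxy score:  {proxy_score:.2f}")
print(f"winner's true quality: {true_quality:.2f}")  # typically much lower
```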

I actually had my own term before I encountered Ra, which was "business creepy" (it definitely should have been "business cheesy", but creepy stuck). It was about all the empty words and discourses, the business-looking stock photographs, any time anyone mentioned the term "excellence". To me this was the epitome of bullshit, and yet very few people ever seemed to think anything of it.

As I got professional experience, the term grew to encompass completely unjustified cheerleading beliefs of the "we're the best and the others suck" variety.

If you try to challenge these narratives of excellence or false superiority, even with due diplomacy, you get the reaction that Sarah describes: confusion, incoherence, then anger.

For me these are beliefs held by individuals, but promoted by the institution, and they're tribal in nature. To challenge them is to challenge a shared identity. A bit like "company culture".

Vagueness and a lack of specificity are the only way these beliefs can possibly stay in place.

There's a slew of perverse incentives against removing these beliefs. If you start counting what really matters, you might come to the conclusion that you don't matter. Drastic improvement means you were doing things wrong all along, again a threat to identity and a sense of self-worth.

Bureaucracy is peak Ra for me. You've got completely screwed-up beliefs and cult-like rituals that are their own justification. Anyone who (quite rightly) criticizes them is a hated heathen who must be ostracized for their lack of respect towards Ra's priests. There is also a prevalence of premade opaque formulations (jargon) that make your interactions with the bureaucracy even harder, yet employees treat these formulations as somewhat sacred.

Hmm, I still don't completely understand. Is it the tendency of organisations to develop an ideological attachment to achieving vague goals that have become the purpose of the organisation, such that they can no longer be questioned without this being seen as an attack on the organisation or, at best, as a flaw in your understanding?

So if I'm imagining how this could come about: the person or group of people who found an organisation (or who otherwise form its early leadership before the culture crystallises) have certain opinions about how it should operate. These founding figures have reasons for believing their principles to be important, but over time the principles become detached from those reasons and free-floating, just like traditions in broader society. Obviously, reasons are still given for why these principles are important, but only in the same way that religion has apologetics. This is where vagueness helps: it is much easier to defend values like diversity, innovation or customer focus in the abstract than any specific implementation or policy prescription that comes out of them.

Since the culture is now entrenched, those who dislike it tend to leave or not apply in the first place, as opposed to the earlier stages, when it might have been possible to change the founders' minds. Any change to the values could disrupt entrenched interests, such as managers who want to keep their projects going or departments that want to maintain headcount. Further, individuals have invested time and effort in becoming good at talking the corporate language, and attempting to clarify any of the vagueness would be incredibly disruptive. So preserving the vagueness becomes a Schelling point for the most established factions.

Further, the vagueness gives individual departments or groups more freedom to make themselves look good than if the goal were more locked down. For example, it is much easier to demonstrate progress on diversity or show off projects related to innovation than to demonstrate progress along a specific axis.

Anyway, just using this comment to "think aloud", as I'm still somewhat uncertain about this term.

To answer your first question: I'm not sure. First, I'm not sure goals are necessarily the focal point, although losing sight of the real goals is certainly a potent symptom of Ra.

I haven't thought much about how these beliefs arise, but I haven't felt very compelled to seek a complicated explanation beyond the usual biases. Saying we're the best will usually be met with approval. And the more "reasonable doubt" there is that this might not be the case, the more necessary it becomes to affirm this truth so as not to break the narrative.

You seem to imagine pure founders and then a degradation of values, but very often the founders are not immune to this problem, or are in fact its very cause. Even when the goals are pretty clear - such as in a startup, where the goal might at first simply be to break even - people adopt false tribal beliefs. The bullshit can manifest itself in many places: what they say about their company culture, their hiring practices, etc.

That being said, I'm not sure my own understanding of the matter matches that of the original author. But it certainly immediately pattern-matched to something I found to be a very salient characteristic of many organizations.