LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
Fwiw I didn't find the post hostile.
I'm assuming "natural abstraction" is also a scalar property. Reading this paragraph, I refactored the concept in my mind to "some abstractions tend to be cheaper to form than others. Agents will converge to using cheaper abstractions. Many cheapness properties generalize reasonably well across agents/observation-systems/environments, but all of those could in theory come apart."
And the Strong NAH would be "cheap-to-abstract-ness will be very punctuated, or something" (i.e. you might expect less of a smooth gradient of cheapness across abstractions, and more of a few sharply separated clusters).
How would you solve the example legal situation you gave?
Thanks, this gave me the context I needed.
Put another way: this post seems like it’s arguing with someone but I’m not sure who.
I think I care a bunch about the subject matter of this post, but something about the way this post is written leaves me feeling confused and ungrounded.
Before reading this post, my background beliefs were:
Given all that... is there anything in particular I am meant to take from this post? (So far I have only skimmed it; it felt effortful to comb for the novel bits.) I can't tell whether the few concrete bits are particularly important, or just illustrative examples.
This is not very practically useful to me but dayumn it is cool
An individual Social Psychology lab (or loose collection of labs) can choose who to let in.
Frontier Lab AI companies can decide who to hire, and what sort of standards they want internally (and maybe, in a loose alliance with other Frontier Lab companies).
The Immoral Mazes sequence outlines some reasons you might think large institutions are dramatically worse than smaller ones (see Recursive Middle Manager Hell for a shorter intro, although there I don't spell out the part of the argument about how mazes are sort of "contagious" between large institutions).
But the simpler argument is "the fewer people you have, the easier it is for a few leaders to basically make personal choices based on their goals and values." At scale, selection effects result in the largest institutions being better modeled as "following incentives" than as "pursuing goals on purpose." (If an organization didn't follow the incentives, it'd be outcompeted by one that does.)
This claim looks like it's implying that research communities can build better-than-median selection pressures. But can they? And if so, why have we hypothesized that scientific fields don't?
I'm a bit surprised this is the crux for you. Smaller communities have a lot more control over their gatekeeping because, like, they control it themselves, whereas the larger field's gatekeeping is determined via open-ended incentives in the broader world that thousands (maybe millions?) of people have influence over. (There are also things you could do in addition to gatekeeping. See Selective, Corrective, Structural: Three Ways of Making Social Systems Work.)
(This doesn't mean smaller research communities automatically have good gatekeeping or other mechanisms, but figuring out how to do better doesn't feel like a very confusing or mysterious problem.)
You can make a post or shortform discussing it and see what people think. I recommend front-loading the main arguments, evidence, or takeaways so people can easily get a sense of it; people often bounce off long worldview posts from newcomers.