I also love The Inner Ring and basically endorse your take on it. One confusion/thought I had about it was: How do we distinguish between Inner Rings and Groups of Sound Craftsmen? Both are implicit/informal groups of people who recognize and respect each other. Is the difference simply that Sound Craftsmen's mutual respect is based on correct judgments of competence, whereas Inner Rings are based on incorrect judgments of competence? That seems reasonable, but it makes it very hard in some cases to tell whether the group you are gradually becoming part of -- and which you are excited to be part of -- is an Inner Ring or a GoSC. (Because, sure, you think these people are competent, but of course you'd probably also think that if they were an Inner Ring, because you are so starstruck.) It also means there's a smooth spectrum with Inner Rings and GoSCs at the ends: a GoSC is where everyone is totally correct in their judgments of competence, an Inner Ring is where everyone is totally incorrect... but almost every group, realistically, will be somewhere in the middle.
How do we distinguish between Inner Rings and Groups of Sound Craftsmen?
The essay's answer to this is solid, and has steered me well:
In any wholesome group of people which holds together for a good purpose, the exclusions are in a sense accidental. Three or four people who are together for the sake of some piece of work exclude others because there is work only for so many or because the others can’t in fact do it. Your little musical group limits its numbers because the rooms they meet in are only so big. But your genuine Inner Ring exists for exclusion. There’d be no fun if there were no outsiders. The invisible line would have no meaning unless most people were on the wrong side of it. Exclusion is no accident; it is the essence.
My own experience supports this being the crucial difference. I've encountered a few groups where the exclusion is the main purpose of the group, *and* the exclusion is based on reasonably good judgments of competence. These groups strike me as pathological and corrupting in the way that Lewis describes. I've also encountered many groups where exclusion is only "accidental", and also the people are very bad at judging competence. These groups certainly have their problems, but they don't have the particular issues that Lewis describes.
I'm not sure in which category you would put it, but as a counterpoint, Team Cohesion and Exclusionary Egalitarianism argues that for some groups, exclusion is at least partially essential and that they are better off for it:
... you find this pattern across nearly all elite American Special Forces type units — (1) an exceedingly difficult bar to get in, followed by (2) incredibly loose, informal, collegial norms with nearly-infinitely less emphasis on hierarchy and bureaucracy compared to all other military units.
To even "try out" for a Special Forces group like Delta Force or the Navy SEAL Teams, you have to be among the most dedicated, most physically fit, and most competent of soldier.
Then, the selection procedures are incredibly intense — only around 10% of those who attend selection actually make the cut.
This is, of course, exclusionary.
But then, seemingly paradoxically, these organizations run with far less hierarchy, formal authority, and traditional military decorum than the norm. They run... far more egalitarian than other traditional military units. [...]
Going back [...] [If we search out the root causes of "perpetual bickering" within many well-meaning volunteer organizations] we can find a few right away —
* When there are low standards of trust among a team, people tend to advocate more strongly for their own preferences. There's less confidence on an individual level that one's own goals and preferences will be reached if not strongly advocated for.
* Ideas — especially new ideas — are notoriously difficult to evaluate. When there's been no objective standard of performance set and achieved by people who are working on strategy and doctrine, you don't know who has the ability to actually implement their ideas and see them through to conclusion.
* Generally at the idea phase, people are maximally excited and engaged. People are often unable to model themselves to know how they'll perform when the enthusiasm wears off.
* In the absence of previously demonstrated competence, people might want to show they're fit for a leadership role or key role in decisionmaking early, and might want to (perhaps subconsciously) demonstrate prowess at making good arguments, appearing smart and erudite, etc.
And of course, many more issues.
Once again, this is often resolved by hierarchy — X person is in charge. In the absence of everyone agreeing, we'll do what X says to do. Because it's better than the alternative.
But the tradeoffs of hierarchical organizations are well-known, and hierarchical leadership seems like a fit for some domains far more so than others.
On the other end of the spectrum, when being egalitarian it's easy to not actually get decisions made and to fail to get valuable work done. For all the flaws of hierarchical leadership, it does tend to resolve the "perpetual bickering" problem.
From both personal experience and a pretty deep immersion into the history of successful organizations, it looks like often an answer is an incredibly high bar to joining followed by largely decentralized, collaborative, egalitarian decisionmaking.
Thanks, this is helpful!
EDIT: more thoughts:
I think the case of limiting meetings because the room is only so big is too easy. What about limiting membership because you want only the best researchers in your org? (Or what if it's a party or retreat for AI safety people -- is it OK to limit membership to only the best researchers?) There's a good reason for selecting based on competence, obviously. But now we are back to the problem I started with, which is that every Inner Ring probably presents itself (and thinks of itself) as excluding based on competence.
In Thinking Fast and Slow, Daniel Kahneman describes an adversarial collaboration between himself and expertise researcher Gary Klein. They were originally on opposite sides of the "how much can we trust the intuitions of confident experts" question, but eventually came to agree that expert intuitions can essentially be trusted if & only if the domain has good feedback loops. So I guess that's one possible heuristic for telling apart a group of sound craftsmen from a mutual admiration society?
Man, that's a very important bit of info which I had heard before but which it helps to be reminded of again. The implications for my own line of work are disturbing!
There was an interesting discussion on Twitter the other day about how many AI researchers were inspired to work on AGI by AI safety arguments. Apparently they bought the "AGI is important and possible" part of the argument but not the "alignment is crazy difficult" part.
I do think the AI safety community has some unfortunate echo chamber qualities which end up filtering those people out of the discussion. This seems bad because (1) the arguments for caution might be stronger if they were developed by talking to the smartest skeptics and (2) it may be that alignment isn't crazy difficult and the people filtered out have good ideas for tackling it.
If I had extra money, I might sponsor a prize for a "why we don't need to worry about AI safety" essay contest to try & create an incentive to bridge the tribal gap. Could accomplish one or more of the following:
* Create more cross talk between people working in AGI and people thinking about how to make it safe
* Show that the best arguments for not needing to worry, as discovered by this essay contest, aren't very good
* Get more mainstream AI people thinking about safety (and potentially realizing over the course of writing their essay that it needs to be prioritized)
* Get fresh sets of eyes on AI safety problems in a way that could generate new insights
Another point here is that from a cause prioritization perspective, there's a group of people incentivized to argue that AI safety is important (anyone who gets paid to work on AI safety), but there's not really any group of people with much of an incentive to argue the reverse (that I can think of at least, let me know if you disagree). So we should expect the set of arguments which have been published to be imbalanced. A contest could help address that.
Another point here is that from a cause prioritization perspective, there's a group of people incentivized to argue that AI safety is important (anyone who gets paid to work on AI safety), but there's not really any group of people with much of an incentive to argue the reverse (that I can think of at least, let me know if you disagree).
What? What about all the people who prefer to do fun research that builds capabilities and has direct ways to make them rich, without having to consider the hypothesis that maybe they are causing harm? The incentives in the other direction easily seem 10x stronger to me.
Lobbying for people to ignore the harm that your industry is causing is standard in basically any industry, and we have a plethora of evidence of organizations putting lots of optimization power into arguing for why their work is going to have no downsides. See the energy industry, tobacco industry, dairy industry, farmers in general, technological incumbents, the medical industry, the construction industry, the meat-production and meat-packaging industries, and really any big industry I can think of. Downplaying the risks of your technology is just standard practice for any mature industry out there.
What? What about all the people who prefer to do fun research that builds capabilities and has direct ways to make them rich, without having to consider the hypothesis that maybe they are causing harm?
If they're not considering that hypothesis, that means they're not trying to think of arguments against it. Do we disagree?
I agree that if the government were seriously considering regulation of AI, the AI industry would probably lobby against it. But that's not the same question. From a PR perspective, just ignoring critics often seems to be a good strategy.
Yes, I didn't say "they are not considering that hypothesis"; I am saying "they don't want to consider that hypothesis". Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments; the other one does not.
I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
Yes, I didn't say "they are not considering that hypothesis"; I am saying "they don't want to consider that hypothesis". Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments; the other one does not.
They don't want to consider the hypothesis, and that's why they'll spend a bunch of time carefully considering it and trying to figure out why it is flawed?
In any case... Assuming the Twitter discussion is accurate, some people working on AGI have already thought about the "alignment is hard" position (since those expositions are how they came to work on AGI). But they don't think the "alignment is hard" position is correct -- it would be kinda dumb to work on AGI carelessly if you thought that position was correct. So it seems to be a matter of considering the position and deciding it is incorrect.
I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
That's interesting, but it doesn't seem that any of the arguments they've made have reached LW or the EA Forum -- let me know if I'm wrong. Anyway I think my original point basically stands -- from the perspective of EA cause prioritization, the incentives to dismantle/refute flawed arguments for prioritizing AI safety are pretty diffuse. (True for most EA causes -- I've long maintained that people should be paid to argue for unincentivized positions.)
I do like the idea of sponsoring a prize for such an essay contest. I'd contribute to the prize pool and help with the judging!
Which is essential and which accidental: the competence or the group?
A GoSC is a means to an end; the craftsmanship is the end.
When judging an outsider, an Inner Ring cares more about group members than competency.
Most of the Inner Rings I've observed select primarily for (1) being able to skilfully violate the explicit local rules to get things done without degrading the structure the rules hold up, and (2) being fun to be around, even for long periods and hard work.
Lewis acknowledges that Inner Rings aren't necessarily bad, and I think the above is a reason why.
Oh, and I also notice that a social manoeuvring game (the game that governs who is admitted) is a task where performance is correlated with performance on (1) and (2).
In modeling the behavior of the coolness-seekers, you put them in a less cool position.
It might be a good move in some contexts, but I feel resistant to taking on this picture, or recommending others take it on. It seems like making the same mistake. Focusing on the object level because you want to be [cool in that you focus on the object level] does have the positive effect of focusing on the object level, but I think it can also just as well have all the bad effects of trying to be in the Inner Ring. If there's something good about getting into the Inner Ring, it should be unpacked, IMO. On the face of it, it seems like mistakenly putting faith in there being an Inner Ring that has things under control / knows what's going on / is oriented to what matters. If there were such a group, it would make sense to apprentice yourself to them, not try to trick your way in.
Exactly this. The whole point of The Inner Ring (which I have not read, but am judging by the review and my knowledge of Lewis/Christian thought and virtue ethics) is that you should aim at the goods that are inherent to your trade or activity (i.e., if you are a coder, writing good code), and not care about the social goods that are associated with the activity. Lewis then makes a second claim (which is really a different claim) that you will also reach social goods through sincerely pursuing the inherent goods of your activity.
You can write the best code in the world, but the Wikipedia page for "people who write the best code in the world" will only mention the members of the Inner Ring.
Unless you are of course so good that everyone knows you, in which case they will add you to that Wikipedia page. They will however not add the person who is the second best coder in the world. The list of "top five coders in the world" will include you, plus four Inner Ring members.
So the second claim is kinda yes, kinda no -- yes, you can reach the social goods exclusively through sincerely pursuing the inherent goods, but you must work twice as hard.
I'm torn about this one. On the one hand, it's basically a linkpost; Katja adds some useful commentary but it's not nearly as important/valuable as the quotes from Lewis IMO. On the other hand, the things Lewis said really need to be heard by most people at some point in their life, and especially by anyone interested in rationality, and Katja did LessWrong a service by noticing this & sharing it with the community. I tentatively recommend inclusion.
The comments have some good discussion too.
There is a distinction between joining a group for the sake of joining a group and acquiring status, and joining a group because it offers you companionship, friendly competition, and entertainment. The feeling of status and of being a high-ranking person is a good feeling; most people feel this way. I don't think the question is whether this feeling is good or bad, or whether we should feel this way at all; it's a question of time. How much time will it take to acquire that status? Is there a better way you can invest your time? If joining an in-group gives you the highest return on your invested time, accounting for all the risks (like being spontaneously ejected), then go for it. It's up to each individual to decide which set of actions has the highest "emotional" return; that really depends on their unique personal history and genetics.
If I play a zero sum game and win, that's good for me, and not bad for the world as a whole. I don't care about what God or an ideal observer would think, since there is no such thing. This is one way in which Lewis's values are dramatically different from those of an atheist.
The only question that matters to me is whether seeking to get into the inner ring will make me happy or not. I see Lewis says it would not make me happy, but I don’t find his reasons really convincing (they seem to be a priori rather than drawn from experience).
Competing in zero sum games rather than looking for positive sum games to play is not good for the world (and probably not good for you either on average, unless you have reason to think you will be better than average at this).
I enjoyed C. S. Lewis's The Inner Ring, and recommend you read it. It basically claims that much of human effort is directed at being admitted to whatever the local in-group is, that this happens easily to people, and that it is a bad thing to be drawn into.
Some quotes, though I also recommend reading the whole thing:
…
…
His main explicit reasons for advising against succumbing to this easy set of motives are that it runs a major risk of turning you into a scoundrel, and that it is fundamentally unsatisfying—once admitted to the ingroup, you will just want a further ingroup; the exclusive appeal of the ingroup won't actually be appealing once you are comfortably in it; and the social pleasures of company in the set probably won't satisfy, since those didn't satisfy you on the outside.
I think there is further reason not to be drawn into such things:
I think Lewis is also making an interesting maneuver here, beyond communicating an idea. In modeling the behavior of the coolness-seekers, you put them in a less cool position. In the default framing, they are sophisticated and others are naive. But when the ‘naive’ are intentionally so because they see the whole situation for what it is, while the sophisticated followed their brute urges without stepping back, who is naive really?