I'm an admin of LessWrong. Here are a few things about me.
Randomly: If you ever want to talk to me about anything you like for an hour, I am happy to be paid $1k for an hour of doing that.
I have a hard time imagining someone writing this without subtweeting. Feels like classic subtweeting to me, especially "I think this is pretty obvious". Like, it's a trivially true point, all the debate is in the applicability/relevance to the situation. I don't see any point in it except the classic subterfuge of lowering the status of something in a way that's hard for the thing to defend itself against.
My standard refrain is that open aggression is better than passive aggression. The latter makes it hard to trust people's stated intentions, and makes people more paranoid, suspecting that others are semi-covertly coordinating to lower their status all the time. For instance, and to be clear this is not the current state, it would not be good for the health of LW for people to regularly see others discussing "obvious" points in shortform and ranting about people not getting them, and later find out it was a criticism of them about a post that they didn't think would be subject to that criticism!
There is a strong force in web forums to slide toward news and inside-baseball; the primary goal here is to fight against that. It is a bad filter for new users if a lot of what they see on first visiting the LessWrong homepage is discussions of news, recent politics, and the epistemic standards of LessWrong. Many good users are not attracted by these, and for those not put off, it's bad culture to set this as the default topic of discussion.
(Forgive me if I'm explaining what is already known, I'm posting in case people hadn't heard this explanation before; we talked about it a lot when designing the frontpage distinction in 2017/8.)
I think most of the people involved like working with the smartest and most competent people alive today, on the hardest problems, in order to build a new general intelligence for the first time since the dawn of humanity, in exchange for massive amounts of money, prestige, fame, and power. This is what I refer to by 'glory'.
I think of it as 'glory'.
Perhaps a react for "I wish this idea/sentence/comment was a post" would improve things.
I felt confused at first when you said that this framing is leaning into polarization. I thought "I don't see any clear red-tribe / blue-tribe affiliations here."
Then I remembered that polarization doesn't mean tying this issue to existing big coalitions (a la Hanson's Policy Tug-O-War), but simply that it is causing people to factionalize and create a conflict and divide between them.
It seems to me that Max has correctly pointed out a significant crux about policy preferences between people who care about AI existential risk, and it also seems worth polling people and finding out who thinks what.
It does seem to me that the post is attempting to cause some factionalization here. Rather than simply noting that division is costly (which it is), I am interested in hearing arguments about whether this is a good or bad faction to exist (relative to other divides), and whether it is a real one worth the cost.
Or perhaps you/others think it should ~never be actively pushed for in the way Max does in this post (or perhaps not in this way on a place with high standards for discourse like LW).
That's right. One exception: sometimes I upvote posts/comments written to low standards in order to reward the discussion happening at all. As an example I initially upvoted Gary Marcus's first LW post in order to be welcoming to him participating in the dialogue, even though I think the post is very low quality for LW.
(150+ karma is high enough and I've since removed the vote. Or some chance I am misremembering and I never upvoted because it was already doing well, in which case this serves as a hypothetical that I endorse.)
The effect seems natural and hard to prevent. Basically, certain authors get reputations for being high (quality * writing), and then it makes more sense for people to read their posts because both the floor and ceiling are higher in expectation. Then their worse posts get more readers (who vote) than posts of a similar quality by another author, whose floor and ceiling are probably lower.
I'm not sure of the magnitude of the cost, or that one can realistically expect to ever prevent this effect. For instance, ~all Scott Alexander blogposts get more readership than the best post by many other authors who haven't built a reputation and readership, and this kind of just seems part of how the reading landscape works.
Of course, it can be frustrating as an author to sometimes see similar quality posts on LW get different karma. I think part of the answer here is to do more to celebrate the best posts by new authors. The main thing that comes to mind here is curation, where we celebrate and get more readership on the best posts. Perhaps I should also have a term here for "and this is a new author, so I want to bias toward curating them for the first time so that they're more invested in writing more good content".
Not sure if you're intending to disagree, but I do sometimes see a post-list or the quick-takes fail to load, with a red error message instead, and then if I refresh it goes away.
(I can't recall it happening very often; mostly I see it when I run a dev instance.)