I see LessWrong is currently obsessed with AI alignment. I spoke with some others on the unofficial LessWrong Discord, and we agreed that LessWrong is becoming more and more specialised, scaring off newcomers who aren't interested in AI.
That aside, I'm genuinely curious: do any of the posts on LessWrong make any difference in the general psychosphere of AI alignment? Does anyone who has actual control over the direction of AI and LLMs follow LessWrong? Does Sam Altman or anyone at OpenAI engage with LessWrongers?
I'm not being condescending here; I'm asking because there are two important things to note: (1) Since LessWrong has very little focus on anything other than AI at the moment, are these efforts meaningful? (2) What are some basic beginner resources someone can use to understand the flood of complex AI posts currently on the front page? (Maybe I'm being ignorant, but I haven't found a sequence dedicated to AI... yet.)
To add to other people's answers:
People have disagreements over what the key ideas about AI/alignment even are.
People with different basic intuitions notoriously remain unconvinced by each other's arguments, analogies, and even (the significance of) experiments. This has not been solved yet.
Alignment researchers usually spend most of their time on their preferred vein of research, rather than on trying to convince others.
To (try to) fix this, the community has added concepts like "inferential distance" and "cruxes" to its vocabulary. These should be discussed and used explicitly.
One researcher has some shortform notes (here and here) on how hard it is to communicate about AI alignment. I myself wrote some longer, more emotionally charged notes on why we'd expect this.
But there's hope yet! This chart format makes it easier to communicate beliefs on key AI questions. And better ideas could always be lurking around the corner...
The disagreement seems to usually be in good faith. People can still be biased, of course (and they can't all be right on the same questions, given the current disagreements), but it really does come down to differing intuitions, which background-knowledge posts have been read by which people, etc.