I see LessWrong is currently obsessed with AI Alignment. I spoke with some others on the unofficial LessWrong Discord, and we agreed that LessWrong is becoming more and more specialised, which scares off newcomers who aren't interested in AI.
That aside, I'm genuinely curious: do any of the posts on LessWrong make any difference in the general psychosphere of AI alignment? Does anyone with actual control over the direction of AI and LLMs follow LessWrong? Does Sam Altman or anyone at OpenAI engage with LessWrongers?
I'm not being condescending here. I'm asking because there are two important things to note: (1) Since LessWrong has very little focus on anything other than AI at the moment, are these efforts meaningful? (2) What are some basic beginner resources someone can use to understand the flood of complex AI posts currently on the front page? (Maybe I'm being ignorant, but I haven't found a sequence dedicated to AI... yet.)
Just wanted to point out that AI Safety ("Friendliness," at the time) was the original impetus for LW. It's just that they (especially EY, early on) kept noticing other topics that were prerequisites for even having a useful conversation about AI, and topics that were prerequisites for those, etc., and that's how the Sequences came to be. So in that sense, "LW is more and more full of detailed posts about AI that newcomers can't follow easily" is a sign that everything is going as intended, and yes, it really is important to read a lot of the prerequisite background material if you want to participate in that part of the discussion.
On the other hand, if you want broader participation in the parts of the community that are about individual and collective rationality, that's still here too! You can read the Sequence Highlights, the collections of resources listed by CFAR, or everything else in the Library. And if there's something you want to ask or discuss, make a post about it; you'll most likely get some good engagement, or at least people directing you to other places to investigate or discuss it. There are also lots of other forums, blogs, and Substacks with current or historical ties to LW that are more specialized, now that the community is big enough to support that. The diaspora/fragmentation will continue for many of the same reasons we no longer have Natural Philosophers.
I was naive when I made this post. Having experienced more of the AI world and read many more posts, I'm happy with the direction LW is going. Thank you for your input regardless.