I see LessWrong is currently obsessed with AI Alignment. I spoke with some others on the unofficial LessWrong Discord, and we agreed that LessWrong is becoming more and more specialised, which scares off newcomers who aren't interested in AI.
That aside, I'm genuinely curious: do any of the posts on LessWrong make any difference in the general psychosphere of AI alignment? Does anyone with actual influence over the direction of AI and LLMs follow LessWrong? Does Sam Altman or anyone at OpenAI engage with LessWrongers?
Not being condescending here. I'm asking because there are two important things to note: (1) Since LessWrong has very little focus on anything other than AI at the moment, are these efforts meaningful? (2) What are some basic beginner resources someone can use to understand the flood of complex AI posts currently on the front page? (Maybe I'm being ignorant, but I haven't found a sequence dedicated to AI...yet.)
Ah okay, thanks. I wasn't aware of the Alignment Forum, I'll check it out.
I don't disagree that informal forums are valuable. I share Jacques Ellul's view from The Technological Society that scientific work controlled by monopolies tends to have its growth stunted for exactly the reasons you pointed out.
I think it's more that places like LessWrong are susceptible to having the narrative around them warped (I'm referencing the article about Scott Alexander). Though this is getting slightly off-topic now.
Lastly, I am interested in AI; I'm just feeling around for the best way to get into it. So thanks.