The timing of this post is quite serendipitous for me. Much of what you wrote resonates strongly. First comment on LW, by the way!
I'm deeply interested in the technical problems of alignment and have recently read through the AI Safety Fundamentals course. I'm looking for any opportunity to discuss these ideas with others. I've been adjacent to the rationalist community for a few years (a few friends, EA, ACX, etc.), but the need to sanity-check my own thoughts on alignment has made engaging with the LW community seem invaluable.
Thanks so much for the thoughtful response. I'll certainly reach out and try to participate in BlueDot Impact's course now that I'm more familiar with the content, and I'll keep an eye out for anything you document as you go through your own journey! Even just the few names and resources so far have been incredibly valuable pointers to the right corners of the internet.
I don't have karma yet, but if I did, I'd gladly open my wallet :)