I often talk to developers who would prefer not to destroy the world by accident (specifically, by accelerating AGI risk), but neither they nor I can decide whether specific companies fall into that category.
Could someone knowledgeable help? A few short replies could plausibly change someone's career decisions.
Can you help with future questions?
Please subscribe to this comment. I'll reply to it only when there's a new open question.
Thank you!
Adding: Reply anonymously here
Given that we currently don't know how to build aligned AI, solving the AI Alignment problem will, by definition, require research that pushes the bounds of artificial intelligence. The advice you're giving is basically that anyone concerned about AI Alignment should self-select out of doing that research, which seems like the opposite of help.