I often talk to developers who would prefer not to destroy the world by accident (specifically, by accelerating AGI risk), but neither they nor I can decide whether specific companies fall into that category.
Could someone knowledgeable help? A few short replies could plausibly change someone's career decisions.
Can you help with future questions?
Please subscribe to this comment. I'll reply to it only when there's a new open question.
Thank you!
Adding: Reply anonymously here
My sense is that this "they'll encourage higher-ups to think what they're doing is safe" concern is a meme. For people like Yann LeCun, misaligned AI is not even a consideration; they think it's stupid, uninformed fearmongering. We're not even near the point Philip Morris is at, where tobacco execs have to plaster their webpage with "beyond tobacco" slogans to feel good about themselves. Demis Hassabis literally does not care, even a little bit, and adding alignment staff will not affect his decision-making whatsoever.
But shouldn't we just ask Rohin Shah?