I often talk to developers who would prefer not to destroy the world by accident (specifically, by accelerating AGI risk), but neither they nor I can tell whether specific companies fall into that category.
Could someone knowledgeable help? A few short replies could probably change someone's career decisions.
Can you help with future questions?
Please subscribe to this comment. I'll reply to it only when there's a new open question.
Thank you!
Adding: Reply anonymously here
This is an extraordinarily vague statement that is technically true but doesn't imply what you seem to think it does. There's a fairly clear Venn diagram between alignment research and capabilities research: on one side are most things that make OpenAI more money, and on the other is Paul Christiano's transparency work.
If it's research that burns the capabilities commons while there are plenty of alignment tasks left to do, or people left to convince, then yes, that seems prudent.