Sami Ursa

As far as I know, the answer is simply that you have to model the social landscape around you and how your research contributions are going to be applied.
In other words, it matters who receives your ideas, and what they choose to do with those ideas, even when your ideas are technical advances in AI safety or "alignment".
Like others here, I agree.
I interpret some of what you're saying as an argument for increased transdisciplinarity in AI (safety) research. It's already happening, but we would all likely benefit from more. By transdisciplinarity I mean collaborative work that transcends disciplines and includes non-academic stakeholders.
My technical background is climate science, not AI, but having watched that field closely, ...
A little addendum: this excerpt from the website announcing Ilya's new AI safety company illustrates your points well, I think: "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs."