A little addendum: I think this excerpt from the website announcing Ilya's new AI safety company illustrates your points well: "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs."

As far as I know, the answer is simply that you have to model the social landscape around you and how your research contributions are going to be applied.

In other words, it matters who receives your ideas and what they choose to do with them, even when those ideas are technical advances in AI safety or "alignment".


Like others here, I agree.

I interpret some of what you're saying as an argument for increased transdisciplinarity in AI (safety) research. It's happening, but we would all likely benefit from more. By transdisciplinarity I mean collaborative work that transcends disciplines and includes non-academic stakeholders.

My technical background is climate science, not AI, but having watched that field closely, I'd argue it's a good example of an (x/s)risk-related discipline that, relative to AI, is much further along in branching out to include more knowledge systems, i.e. transdisciplinary research.

Perhaps this analogy is too specific to Australia to properly land with folks, but AI safety does not have the climate-safety equivalent of "cultural burning", and probably won't for some time. Eventually it will have to, for the reasons you allude to.

Some of what you're saying, especially the passage quoted above, I think relates deeply to research ethics. Building good models of the social landscapes potentially impacted by research is something explored in depth across many disciplines. If I can speak frankly, the bleeding edge of that research is far removed from Silicon Valley and its way of thinking and acting.