[copying from my comment on the EA Forum x-post]
For reference, some other lists of AI safety problems that can be tackled by non-AI people:
Luke Muehlhauser's big (but somewhat old) list: "How to study superintelligence strategy"
AI Impacts has made several lists of research problems
Wei Dai's "Problems in AI Alignment that philosophers could potentially contribute to"
Kaj Sotala's case for the relevance of psychology/cog sci to AI safety (I would add that Ought is currently testing the feasibility of IDA/Debate by doing psychological research)
I think systems engineering is a candidate field here, at least as far as the safety and meta sections go.
There is a program at MIT for expanding systems engineering to account for post-design variations in the environment, including specific reasoning about a broader notion of safety:
Systems Engineering Advancement Research Initiative
There was also a DARPA program for speeding up the delivery of new military vehicles, which seems to have the most direct applications to CAIS (Comprehensive AI Services):
Systems Engineering and the META Program
Among other things, systems engineering has the virtue of making hardware an explicit feature of the model.
I think there are many questions whose answers would be useful for technical AGI safety research, but which will probably require expertise outside AI to answer. In this post I list 30 of them, divided into four categories. I personally think that making progress on the questions in the first category is particularly vital, and plausibly tractable for researchers from a wide range of academic backgrounds. Feel free to get in touch if you'd like to discuss these questions and why I think they're important in more detail.
Studying and understanding safety problems
Solving safety problems
Forecasting AI
Meta
Particular thanks to Beth Barnes and a discussion group at the CHAI retreat for helping me compile this list.