As some of you will know, the Future of Humanity Institute is looking for researchers. I want to apply to the Foundational Deep Future and AI Governance positions, and as part of the procedure, I am asked to write a research proposal. I have a number of ideas, but it's hard for me to know which one is the most relevant, so I'm asking for your feedback. The ideas are in the document below; you should be able to comment in the document, or as a reply here, of course.

Thanks for helping out!

https://docs.google.com/document/d/1vXhclr9Vp28EY4VkOUZZitSootwtCTxCQLrRIBkluCU/edit?usp=sharing


I'm not sure how to take the right mix of my perspective, your perspective, and FHI's perspective.

For example, there's not much related to object-level understanding of AI safety. If I were writing a research proposal for myself, this would be a problem. But it is in fact you writing a research proposal for FHI, and I'm actually quite confident that FHI likes meta-level work.

To be fancier, you could add references (maybe just in footnotes) to papers and books.

The strongest part of the proposal is the questions related to AGI skepticism. What I think you could do to improve this is to not merely present a list of questions, but also to give some concrete things you might do to answer those questions empirically.

The second-most interesting bit to me is the Personal Strategies In The AGI Century section. Again, you could expand this with more interesting questions, and then expand those questions with concrete ways to answer them.

I would put the strongest subsections first in their section.

Thanks Charlie! :)

They are asking for only one proposal, so I will have to choose one and then work it out in full. So I'm mostly asking which idea you find most interesting, rather than which one is the strongest proposal right now - that part will be worked out later. But thanks a lot for your feedback so far - that helps!