This is a linkpost for https://grants.futureoflife.org/
Are astronomical suffering risks (s-risks) considered a subset of existential risks (x-risks) because they "drastically curtail humanity’s potential"? Or is this concern not taken into account for this research program?
I suppose that s-risks which did drastically curtail humanity's potential would count, but s-risks without that feature (e.g. humanity decides to suffer massively, but still retains the potential to do lots of other things) would not.
Epistemic status: describing fellowships that I am helping to administer.
Edit 2021-10-04: Modified to reflect changed eligibility and stipend conditions.
The Future of Life Institute is launching new PhD and postdoctoral fellowships to study AI existential safety: that is, research that analyzes the most probable ways in which AI technology could cause an existential catastrophe and identifies which types of research could minimize existential risk, as well as technical research which could, if successful, help humanity reduce the existential risk posed by highly impactful AI technology to extremely low levels.
The Vitalik Buterin PhD Fellowship in AI Existential Safety is targeted at students applying to start their PhD in 2022, or existing PhD students who would not otherwise have funding to work on AI existential safety research. Quoting from the page:
Applications for the PhD fellowship close on October 29.
The Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety is for postdoctoral appointments starting in fall 2022. Quoting from the page:
Applications for the postdoctoral fellowship close on November 5.
You can apply at grants.futureoflife.org, and if you know people who may be good fits, please help spread the word!