We look at the strengths and weaknesses of two existing definitions of existential risk, and suggest a new definition based on expected value. This leads to a parallel concept: ‘existential hope’, the chance of something extremely good happening.
I think MIRI and CSER may naturally be understood as organisations trying to reduce existential risk and to increase existential hope, respectively (although insofar as MIRI aims to build a safe AI, it is also seeking to increase existential hope). What other world states could we aim for that would increase existential hope?
I'm pleased to announce ‘Existential Risk and Existential Hope: Definitions’, a short new FHI technical report.