Wow, this is long, and seems pretty detailed and interesting. I'd love to see someone write a selection of key quotes or a summary.
There is now also an interview with Critch here: https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/
Really enjoyed reading this. The section on "AI pollution" leading to a loss of control over the development of prepotent AI really interested me.
Avoiding [the risk of uncoordinated development of Misaligned Prepotent AI] calls for well-deliberated and respected assessments of the capabilities of publicly available algorithms and hardware, accounting for whether those capabilities have the potential to be combined to yield MPAI technology. Otherwise, the world could essentially accrue “AI-pollution” that might eventually precipitate or constitute MPAI.
Andrew Critch's (Academian) and David Krueger's review of 29 AI (existential) safety research directions, each with an illustrative analogy, examples of current work, potential synergies with other research directions, and discussion of ways the research approach might lower (or raise) existential risk.