Since so many people here (myself included) are either working to reduce AI risk or would love to enter the field, it seems worthwhile to ask what the best arguments against doing so are. This question is intended to focus on existential and catastrophic risks, not issues like technological unemployment or bias in machine learning algorithms.
Ok, so should the amplified human try to preemptively turn off the agent-foundations-based project, which is going to turn off that human once it is complete?