Since so many people here (myself included) are either working to reduce AI risk or would love to enter the field, it seems worthwhile to ask what the best arguments against doing so are. This question is intended to focus on existential/catastrophic risks, not on issues like technological unemployment or bias in machine learning algorithms.
You risk oversaturating the AGI research market relative to other potentially life-changing technologies, such as engineered life extension or organizational mechanism design.
If the amplified human could take over the world but hasn't because they're not evil, and predicts that this other AI system would commit such evil, then yes.
It's plausible, though, that the decision theory used by the new AI would tell it to act predictably non-evilly, so that the amplified human would foresee this and not destroy the new AI before it's turned on.
Note that this amplified human has thereby already taken over the world in all but name.