Aligned AGI is a large-scale engineering task
Humans have never completed a large-scale engineering task without at least one mistake
An AGI that has at least one mistake in its alignment model will be unaligned
Given enough time, an unaligned AGI will perform an action that will negatively impact human survival
Humans wish to survive
Therefore, humans ought not to make an AGI until one of the above premises changes.
This is another concise argument around AI x-risk. It is not perfect. What flaw in this argument do you consider the most important?
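To make the structure easier to probe, here is one way to encode the chain as a propositional sketch. This is my own encoding, not part of the original argument, and the proposition names are illustrative; writing it out also surfaces the implicit normative bridge premise needed to get from the factual premises to the "ought" in the conclusion.

```lean
-- One possible propositional encoding of the argument above (illustrative names).
-- Note the extra `bridge` hypothesis: the normative conclusion does not follow
-- from the factual premises alone.
theorem no_agi_yet
    (MistakeFree Aligned Harms WishToSurvive Build : Prop)
    (p2 : ¬MistakeFree)                        -- every large-scale engineering effort has a mistake
    (p3 : ¬MistakeFree → ¬Aligned)             -- a mistake in the alignment model ⇒ unaligned
    (p4 : ¬Aligned → Harms)                    -- unaligned AGI eventually harms human survival
    (p5 : WishToSurvive)                       -- humans wish to survive
    (bridge : WishToSurvive → Harms → ¬Build)  -- implicit normative premise
    : ¬Build :=
  bridge p5 (p4 (p3 p2))
```

Seen this way, a critic has to reject one of p2–p4, or reject the bridge premise, for instance by arguing that not building AGI carries survival risks of its own.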
You mean: how to balance a low-probability x-risk against a high probability of saving a large number (but small fraction) of human children? Good point, it's hard, but we don't actually need this apples-to-oranges comparison: the point is that AGI may well decrease overall x-risks.
(I mentioned starving children because some people count sufficiently large impacts as x-risks, but on second thought that was probably a mistake.)