Building an aligned AGI is a large-scale engineering task
Humans have never completed a large-scale engineering task without at least one mistake
An AGI that has at least one mistake in its alignment model will be unaligned
Given enough time, an unaligned AGI will perform an action that will negatively impact human survival
Humans wish to survive
Therefore, humans ought not to make an AGI until one of the above premises changes.
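To make the deductive structure explicit, here is a minimal formal sketch of the chain. The proposition names and the final bridge premise (that harming survival, together with the wish to survive, yields the "ought not build" conclusion) are my own labelling added for illustration, not part of the original wording:

```lean
-- Minimal propositional sketch of the argument's structure.
-- Names and the bridge premise p6 are assumptions added for clarity.
theorem ought_not_build_agi
    {EngineeringTask MistakeMade Unaligned HarmsSurvival
     WishToSurvive OughtNotBuild : Prop}
    (p1 : EngineeringTask)                                -- aligned AGI is a large-scale engineering task
    (p2 : EngineeringTask → MistakeMade)                  -- such tasks are never completed without a mistake
    (p3 : MistakeMade → Unaligned)                        -- a mistake in the alignment model means unaligned
    (p4 : Unaligned → HarmsSurvival)                      -- given time, unaligned AGI harms human survival
    (p5 : WishToSurvive)                                  -- humans wish to survive
    (p6 : HarmsSurvival → WishToSurvive → OughtNotBuild)  -- bridge premise (assumed, implicit in the original)
    : OughtNotBuild :=
  p6 (p4 (p3 (p2 p1))) p5
```

Spelled out this way, objecting to the argument means rejecting one of p1–p6 or weakening one of the implications.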
This is another concise argument about AI x-risk. It is not perfect. What flaw in this argument do you consider the most important?
OK, I take your point. In your opinion, would this be an improvement: "Humans have never completed a large-scale engineering task without at least one mistake on the first attempt"?
Regarding the AI argument: will the process used to build current AI systems scale to AGI level? From what I understand, that is not the case. Is that predicted to change?
Thank you for the feedback.