Good work.
Alternatively, one might construe the argument this way:
But this may be a less useful structure than the more detailed one you propose; my version simply packs more sub-arguments and discussion into each premise.
The premises (in your argument) that I feel least confident about are #1, #2, and #4.
Premise #2 seems very likely to me. Can you give me reasons why it might not be?
Many people complain that the Singularity Institute's "Big Scary Idea" (that AGI leads to catastrophe by default) has not been argued for with the clarity of, say, Chalmers' argument for the singularity. The idea here would be to make the premise-and-inference structure of the argument explicit, and then debate the strength of those premises and inferences.
Here is one way to construe one version of the argument for the Singularity Institute's "Big Scary Idea":
My questions are: