Let's steelman his argument as the question: "Which is more likely to succeed: actually stopping all research associated with existential risk, or inventing a Friendly AI?" If you find another reason why the first option wouldn't work, include the desperate effort needed to overcome that obstacle in the calculation.
Me, minutes after writing that: "I precommit to posting this at most a week from now. I predict someone will give a clever answer along the lines of driving humanity extinct in order to stop existential risk research."
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, not Main.
4. Open Threads should start on Monday and end on Sunday.