If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday and end on Sunday.
I updated my earlier comment to say "against AI x-risk positions", which I think is a more accurate description of the arguments. There are others as well, e.g. Andrew Ng, but I think Goertzel does the best job of explaining why the AI x-risk arguments themselves may be flawed: they are simplistic in how they model AGIs, and therefore draw simple conclusions that don't hold up in the real world.
And yes, I think more LWers and AI x-risk people should read and respond to Goertzel's superintelligence article. I don't agree with it 100%, but it makes some valid points. And one doesn't become effective by reading only viewpoints one agrees with...