At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.
EDIT: Thanks for all the contributions! Keep them coming...
Ramez Naam makes a case against a near-term singularity here: http://rameznaam.com/2015/05/12/the-singularity-is-further-than-it-appears/
I find the discussion of corporations as superintelligences somewhat persuasive. I understand why Eliezer and others do not consider them superintelligences, but to me it seems a question of degree: corporations could become self-improving in more and more respects, and at no point would I expect a singularity or a world takeover.
I also think the argument from diminishing returns is pretty reasonable: http://www.sphere-engineering.com/blog/the-singularity-is-not-coming.html
On the same note, though probably already widely known: Scott Aaronson on "The Singularity Is Far" (2008): http://www.scottaaronson.com/blog/?p=346