At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.
EDIT: Thanks for all the contributions! Keep them coming...
There exists a technological plateau for general intelligence algorithms, and biological neural networks already come close to that optimum. Hence, recursive self-improvement quickly hits an asymptote.
On this view, artificial intelligence may be a much cheaper way to produce and coordinate intelligence than raising humans, but it will not have orders of magnitude more capability for innovation than the human race. In particular, if humans are unable to discover breakthroughs enabling vastly more efficient production of computational substrate, then artificial intelligence will be similarly limited. In that case, unfriendly AI poses an existential threat primarily through dangers we can already imagine, rather than through unanticipated technological breakthroughs.