It's hard to find a perfect analogy here, but both analogies I mentioned lend support to my original claim in a similar way.
It may be that, given the present state of mathematics, one could cite a few established results and use them to construct a simple proof of P != NP, and nobody has simply figured it out yet. Analogously, it may be that there is a feasible way to take present-day software tools and use them to implement a working AGI. In both cases, we lack the understanding that would be necessary either to achieve the goal or to prove it impossible. However, what insight and practical experience we do have strongly suggests that neither is doable, leading to the conclusion that present-day software tools are likely inadequate.
In addition to this argument, we can also observe that even if such a solution exists, finding it would be a task of enormous difficulty, possibly beyond anyone's practical abilities.
This reasoning doesn't yield the same certainty we have in problems governed by well-understood physics, such as building airplanes, but I do think it's sufficient (when spelled out in full detail) to establish a very high level of confidence nevertheless.
Artificial general intelligence researcher Ben Goertzel answered my question on charitable giving and gave his permission to publish his answer here. I think the opinions of highly educated experts who have read most of the available material are important for estimating the public and academic perception of risks from AI, and how effectively those risks are communicated by LessWrong and the SIAI.
Alexander Kruel asked:
Ben Goertzel replied:
What can one learn from this?
I'm planning to contact various other experts who are aware of risks from AI and ask them the same question.