Note: This was also posted recently: https://www.lesswrong.com/posts/H2N6eWw8JgxwKHPM3/linkpost-robin-hanson-why-not-wait-on-ai-risk.
Argh. I did check whether that was the case by searching LW for "Robin Hanson", but it didn't turn up in the first pages of results.
One of Hanson's points is:
As I’ve previously explained at length, that seems to me to postulate a quite unusual lumpiness relative to the history we’ve seen for innovation in general, and more particularly for tools, computers, AI, and even machine learning. And this seems to postulate much more of a lumpy conceptual essence to “betterness” than I find plausible. Recent machine learning systems today seem relatively close to each other in their abilities, are gradually improving, and none seem remotely inclined to mount a coup.
which seems to be exactly what Eliezer is saying, so this appears to be the crux of the disagreement: Hanson calls it a "postulate", while Eliezer claims to derive it from rather general principles.
I have no firm view on the topic, and expert opinions seem to differ quite a bit.
UPDATE: This is an accidental duplicate. Please go to the first posting: https://www.lesswrong.com/posts/H2N6eWw8JgxwKHPM3/linkpost-robin-hanson-why-not-wait-on-ai-risk.
Since the AI-Foom Debate, Robin Hanson has been quiet for a while, but now:
Having analyzed multiple widely expressed AI concerns from an economic angle, he comes to the conclusion:
There is much more in it, but take this as a TL;DR.