In July, Ben Garfinkel scrutinized the classic AI risk arguments in a 158-minute interview with 80,000 Hours, which I strongly recommend.
I have formulated a reply and recorded 80 minutes of video, as part of two presentations in the AISafety.com Reading Group:
196. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments
197. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments 2
I strongly recommend turning subtitles on. Also consider increasing the playback speed.
"I have made this longer than usual because I have not had time to make it shorter."
-Blaise Pascal
The podcast/interview format is less well suited to critical text analysis than a formal article or a LessWrong post, for three reasons:
- Lack of precision. Placing each qualifier carefully and deliberately while speaking is a difficult skill, and at several points I was uncertain whether I was parsing Ben's sentences correctly.
- Lack of references. The "Classic AI Risk Arguments" are expansive, and critical text analysis requires clear pointers to the specific arguments being criticized.
- Expansiveness. There are a lot of arguments presented, and many of them deserve formal answers. Unfortunately, this is a large task, and I hope you will forgive me for replying in the form of a video.
tl;dw: A number of the arguments Ben Garfinkel criticizes are in fact not present in "Superintelligence" or "The AI Foom Debate". (This summary is incomplete.)
On the documents:
Unfortunately I read them nearly a year ago, so my memory is hazy. But (3) goes over most of the main arguments we talked about in the podcast step by step, though it's just slides, so you may have similar complaints about the lack of close analysis of the original texts.
(1) is a pretty detailed write-up of Ben's thoughts on discontinuities, sudden emergence, and explosive aftermath. To the extent that you were concerned about those bits in particular, I'd guess you'll find what you're looking for there.