On Wednesday I debated my ex-co-blogger Eliezer Yudkowsky at a private Jane Street Capital event (crude audio here, from 4:45; better video here [as of July 14]).
I “won” in the sense of gaining more audience votes — the vote was 45-40 (him to me) before, and 32-33 after the debate. That makes me two for two, after my similar “win” over Bryan Caplan (42-10 before, 25-20 after). This probably says little about me, however, since contrarians usually “win” such debates.
Our topic was: Compared to the farming and industrial revolutions, intelligence explosion first-movers will quickly control a much larger fraction of their new world. He was pro, I was con. We also debated this subject here on Overcoming Bias from June to December 2008. Let me now try to summarize my current position.
[...]
It thus seems quite unlikely that one AI team could find an architectural innovation powerful enough to let it go from tiny to taking over the world within a few weeks.
The main thing I can recall from the 2008 debate mentioned was Hanson's position being essentially destroyed by Hanson himself, via supporting arguments that made no sense, which left Eliezer largely redundant.
As far as I can tell, Hanson does not disagree with Yudkowsky except over the probability of risks from AI. Yudkowsky says that existential risk from AI is not under 5%. Has Yudkowsky been able to support this assertion sufficiently? Hanson only needs to show that it is unreasonable to assume the probability is larger than 5%, and my personal perception is that he was able to do so.
I have already posted various arguments for why I believe that the case for risks...