On Wednesday I debated my ex-co-blogger Eliezer Yudkowsky at a private Jane Street Capital event (crude audio here, from 4:45; better video here [as of July 14]).
I “won” in the sense of gaining more audience votes — the vote was 45-40 (him to me) before, and 32-33 after the debate. That makes me two for two, after my similar “win” over Bryan Caplan (42-10 before, 25-20 after). This probably says little about me, however, since contrarians usually “win” such debates.
Our topic was: "Compared to the farming and industrial revolutions, intelligence explosion first-movers will quickly control a much larger fraction of their new world." He was pro, I was con. We also debated this subject here on Overcoming Bias from June to December 2008. Let me now try to summarize my current position.
[...]
It thus seems quite unlikely that one AI team could find an architectural innovation powerful enough to let it go from tiny to taking over the world within a few weeks.
A while back I posted this minimalist account of Eliezer's case for the importance of FAI to human survival. (Claim B technically seems too specific if you want to talk about the existential risk as a whole, but I think it reflects his view.)
So far I can't tell whether you agree that each claim has easily more than .5 probability given the evidence, or whether you share my view that Claim A, taken separately from the rest, has P close to 1. In particular, you said here that you believe:
By the same principle, the speed or slowness of FOOM doesn't matter in the long run unless some force with the power to stop it does so, and unless this happens every single time someone creates an unFriendly AI with the power to self-modify. I have almost no confidence that humanity in general will learn from past mistakes (and precious little confidence in the subset that could write the second or third AGI). So I think we need to look at the cumulative chance for Claim B, Claim C, and perhaps even D.
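To put rough numbers on that "cumulative chance" point (a minimal sketch; the independence assumption and the figures are mine, purely for illustration): the claims compound multiplicatively,

P(B ∧ C ∧ D) = P(B) · P(C | B) · P(D | B ∧ C),

so even if each factor is a comfortable 0.6, the conjunction is only 0.6^3 ≈ 0.22. The "every single time" clause then cuts the other way: if each of n independent AGI attempts carries a per-attempt chance p of an unstopped unFriendly foom, the chance that at least one occurs is 1 − (1 − p)^n, which for p = 0.05 and n = 20 is already about 0.64.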
Even so, it seems possible that the actual risk stays below 5%. Maybe you think some form of FAI, such as Friendly uploads, will prove easy once we have the capacity for some form of AGI. Maybe you think we're likely to kill ourselves with X before then. Maybe you think some other force(s) will stop each and every AGI. If so, I'd like to hear your reasoning.
And if not, but you still want to argue against my claims in some other way, please do so without identifying them with a more specific storyline.
ETA: I apparently forgot how to use links. I believe this means I should go eat or sleep. Take that as you will.
The whole dispute is about your Claim A. It gives a lot of credence to Y's idea of where things are headed (someone writes a single AI that takes over the world) and none to H's (someone uploads some humans and makes trillions of copies). Those are two very different possibilities with different consequences, and there's no reason to believe they come close to exhausting the plausible scenarios.