On Wednesday I debated my ex-co-blogger Eliezer Yudkowsky at a private Jane Street Capital event (crude audio here, from 4:45; better video here [as of July 14]).
I “won” in the sense of gaining more audience votes — the vote was 45-40 (him to me) before, and 32-33 after the debate. That makes me two for two, after my similar “win” over Bryan Caplan (42-10 before, 25-20 after). This probably says little about me, however, since contrarians usually “win” such debates.
Our topic was: Compared to the farming and industrial revolutions, intelligence explosion first-movers will quickly control a much larger fraction of their new world. He was pro, I was con. We also debated this subject here on Overcoming Bias from June to December 2008. Let me now try to summarize my current position.
[...]
It thus seems quite unlikely that one AI team could find an architectural innovation powerful enough to let it go from tiny to taking over the world within a few weeks.
Your reply is focused on keeping secrets. I meant my comment to apply to the second claim - the one about governments being "too stupid". That claim might be right - but it is not obvious. Government departments focused on this sort of thing (of which there are several) will understand - and no doubt already understand. The issue is more whether the communication lines are free, whether the top military brass take their own boffins seriously - and whether they go on to get approval from head office.
As for secrecy - the NSA has a long history of extreme secrecy. The main reason most people don't know about their secret tech projects is that their secrecy is so good. If they develop a superintelligence, I figure it will be a secret one that will probably remain chained up in their basement. They are the main reason my graph already has some probability mass in that region.