On Wednesday I debated my ex-co-blogger Eliezer Yudkowsky at a private Jane Street Capital event (crude audio here, from 4:45; better video here [as of July 14]).
I “won” in the sense of gaining more audience votes — the vote was 45-40 (him to me) before, and 32-33 after the debate. That makes me two for two, after my similar “win” over Bryan Caplan (42-10 before, 25-20 after). This probably says little about me, however, since contrarians usually “win” such debates.
Our topic was: Compared to the farming and industrial revolutions, intelligence explosion first-movers will quickly control a much larger fraction of their new world. He was pro, I was con. We also debated this subject here on Overcoming Bias from June to December 2008. Let me now try to summarize my current position.
[...]
It thus seems quite unlikely that one AI team could find an architectural innovation powerful enough to let it go from tiny to taking over the world within a few weeks.
It's not a brain in a box in a basement - and it's not one grand architectural insight - but I think the NSA shows how a secretive organisation can get ahead and stay ahead - if it is big and well-funded enough. Otherwise, public collaboration tends to get ahead and stay ahead, along similar lines to those Robin mentions.
Google, Apple, Facebook, etc. are less-extreme versions of this kind of thing, in that they keep trade secrets which give them advantages - and don't contribute all of these back to the global ecosystem. As a result they gradually stack up know-how that others lack. If they can accumulate enough of it, they will gradually pull ahead - if they are left to their own devices.
Whether a company will eventually pull ahead has quite a bit to do with anti-trust legislation - as I discuss in One Big Organism.
Whether one government will eventually pull ahead is a bit different. There's no government-level anti-trust legislation. However, expansionist governments are globally frowned upon.
I don't think there are many other significant players besides companies and governments.
The "silver bullet" idea doesn't seem to be worth too much. As Eray says: "Every algorithm encodes a bit of intelligence". We know that advanced intelligence necessarily highly complex. You can't predict a complex world without being that complex yourself. Of course, human intelligence might be relatively simple - in which case it might only take a few leaps to get to it. The history of machine intelligence fairly strongly suggests a long gradual slog to me - but it is at least possible to argue that people have been doing it all wrong so far.