On Wednesday I debated my ex-co-blogger Eliezer Yudkowsky at a private Jane Street Capital event (crude audio here, from 4:45; better video here [as of July 14]).
I “won” in the sense of gaining audience votes relative to him: the vote was 45-40 (him to me) before, and 32-33 after the debate. That makes me two for two, after my similar “win” over Bryan Caplan (42-10 before, 25-20 after). This probably says little about me, however, since contrarians usually “win” such debates.
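Since turnout differed between the before and after polls, here is a minimal arithmetic sketch (in Python, assuming nothing beyond the tallies quoted above) of what “won” means here: my share of the vote rose in both debates, even though my absolute count fell in the first.

```python
# Vote tallies as (opponent, me), copied from the text above.
debates = {
    "Yudkowsky": {"before": (45, 40), "after": (32, 33)},
    "Caplan":    {"before": (42, 10), "after": (25, 20)},
}

for name, tallies in debates.items():
    (opp_b, me_b), (opp_a, me_a) = tallies["before"], tallies["after"]
    share_before = me_b / (opp_b + me_b)  # my fraction of all votes cast, before
    share_after = me_a / (opp_a + me_a)   # ... and after
    print(f"vs {name}: {share_before:.1%} -> {share_after:.1%} "
          f"(swing {share_after - share_before:+.1%})")

# Output:
# vs Yudkowsky: 47.1% -> 50.8% (swing +3.7%)
# vs Caplan: 19.2% -> 44.4% (swing +25.2%)
```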
Our topic was: “Compared to the farming and industrial revolutions, intelligence explosion first-movers will quickly control a much larger fraction of their new world.” He was pro; I was con. We also debated this subject here on Overcoming Bias from June to December 2008. Let me now try to summarize my current position.
[...]
It thus seems quite unlikely that one AI team could find an architectural innovation powerful enough to let it go from tiny to taking over the world within a few weeks.
For those of us not well schooled in these matters, can you explain or link to why it's a crazy idea? I am intuitively sympathetic to Yudkowsky, but “emulations first” doesn't seem obviously crazy.
I've been trying to put together a survey paper about why uploads coming first is not at all crazy. Whether it's more likely than local-super-AI-in-a-basement, I don't know and leave that to the experts. But brain emulation, as far as we currently understand it, is much like the problem of landing on the moon circa 1962: a matter of scaling up known techniques (though we certainly have much more than a decade to go for mind emulation).
Ken Hayworth, now working in the Lichtman connectomics lab at Harvard, has recently written up such a survey paper, to appear...