Fucked.
I'm not sure I would agree with that. Would you mind telling me why you think so?
To my mind, whoever is first to develop AGI has a good chance of wielding an enormous amount of power. I am not comfortable with anyone having that kind of power, but if I had to pick one person or organization, I would probably pick the United States government.
Would you mind telling me why you think so?
An AI developed by the military (or the security apparatus) will have the goals of the military (or the security apparatus). And being developed within the bureaucratic labyrinths of a federal organization ensures that many things will be wrong with it — things no one outside will know about (and so cannot suggest fixing), because it will all be very, very secret.
Cross-posted from my blog.
Yudkowsky writes:
My own projection goes more like this:
At least one clear difference between my projection and Yudkowsky's is that I expect AI-expert performance on the problem to improve substantially as a greater fraction of elite AI scientists begin to think about the issue in Near mode rather than Far mode.
As a friend of mine suggested recently, current elite awareness of the AGI safety challenge is roughly where elite awareness of the global warming challenge was in the early 1980s. However, I expect elite acknowledgement of the AGI safety challenge to spread more slowly than it did for global warming or nuclear security, because AGI is tougher to forecast in general and involves trickier philosophical nuances. (Nobody was ever tempted to say, "But as the nuclear chain reaction grows in power, it will necessarily become more moral!")
Still, there is a worryingly non-negligible chance that AGI "explodes out of nowhere." Sometimes important theorems are proved suddenly after decades of failed attempts by other mathematicians, and sometimes a computational procedure is sped up by 20 orders of magnitude with a single breakthrough.