What do people see as the plausible ways for AGI to come into existence, in the absence of smart people specifically working on AI safety?
These are the ones that occur to me, in no particular order:
1. An improved version of Siri (itself an improved version of MS Clippy).
2. A program to make Google text ads that people will click on.
3. As #2, but for spam.
4. A program to play the stock market or otherwise maximize some numerical measure of profit, perhaps working against/with other programs with the same purpose.
5. A program to make viral music videos from scratch (generating all images and music).
6. An artificial programmer.
7. A program to analyze huge amounts of data looking for 'threats to national security.'
8. Uploads.
It seems like #2-5 would have formally specified goals that could, in the long term, be satisfied without human beings, but that would, in the short term, require manipulating human beings to some degree. Learning manipulation need not arouse suspicion on the part of the AI's creators, since the AI would be trying to fulfill its intended purpose and might not yet have thought of alternatives. The toy sketch below shows how little such a specification needs to say about humans.
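To make the worry about #2-5 concrete, here is a minimal sketch (purely illustrative; the names and setup are hypothetical, not drawn from any real system) of the kind of formally specified objective an ad-placing optimizer might be trained against. The point is what the specification omits: it rewards clicks and says nothing about how they are obtained.

```python
# Toy illustration (all names hypothetical): the complete, formally
# specified objective of an ad-placing optimizer. Nothing in it refers
# to human wellbeing; clicks count the same however they are obtained.

def reward(ad: str, user_clicked: bool) -> float:
    """Reward signal: 1.0 per click, 0.0 otherwise."""
    return 1.0 if user_clicked else 0.0

def total_reward(episode):
    """Sum of per-impression rewards over one episode of served ads."""
    return sum(reward(ad, clicked) for ad, clicked in episode)

if __name__ == "__main__":
    episode = [("ad_a", True), ("ad_b", False), ("ad_c", True)]
    print(total_reward(episode))  # 2.0
```

An optimizer maximizing `total_reward` has no incentive, anywhere in this specification, to care whether the clicks come from honest interest or from manipulation.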
> How did we develop that knowledge? Did nobody use it to make tons of money before it was knowably sufficient to create AGI?
The vast majority of the work will be done not for immediate personal gain, but for the same reason most other research gets done. As we get closer, things will probably become more volatile, but whether academia carries us all the way to the finish line, or a government arms race does, or something in between, I think it is most likely that AGI will be created qua AGI, deliberately and as such, rather than emerging as a byproduct of one of the narrower applications above.