What do people see as the plausible ways for AGI to come into existence, in the absence of smart people specifically working on AI safety?
These are the ones that occur to me, in no precise order:
1. An improved version of Siri (itself an improved version of MS Clippy).
2. A program to make Google text ads that people will click on.
3. As #2, but for spam.
4. A program to play the stock market or otherwise maximize some numerical measure of profit, perhaps working against/with other programs with the same purpose.
5. A program to make viral music videos from scratch (generating all images and music).
6. An artificial programmer.
7. A program to analyze huge amounts of data looking for 'threats to national security.'
8. Uploads.
It seems like #2-5 would have formally specified goals which in the long term could be satisfied without human beings, but which in the short term would require manipulating human beings to some degree. Learning manipulation need not arouse suspicion on the part of the AI's creators, since the AI would be trying to fulfill its intended purpose and might not yet have thought of alternatives.
The ancient prophecies of paperclip maximizers seem to point towards #1.
But it seems to me that #4 has the greatest incentive to be general, to work across many different domains, because there are many different kinds of companies on the stock market. -- For example, consider nanotechnology: #2, #3 and #5 need to be able to compose a short, interesting story about nanotechnology. But that is a matter of human psychology; the story has to be interesting, not realistic. On the other hand, #4 needs to be able to look at a company that claims to produce nanotechnology and evaluate whether its projects are realistic or just nice-sounding nonsense. (#6 and #7 also feel too narrow.) -- Of course, "having an incentive" is not the same as "having the problem solved".
The race between #4 (or some other general machine) and #8 will probably depend on the state of hardware, our knowledge of neurobiology, and our knowledge of intelligent algorithms. If we come to understand "the essence of intelligence" formally enough, we may be able to write intelligent code directly. However, if we do not get much closer to useful formal definitions, but we do have insanely powerful hardware and we know which parts of human brain physiology are important, uploads may come first. -- Note that brain uploads may not be recursively self-improving, so we may get uploads first, yet some de novo AGI may still surpass them later.