What do people see as the plausible ways for AGI to come into existence, in the absence of smart people specifically working on AI safety?
These are the ones that occur to me, in no particular order:
1. An improved version of Siri (itself an improved version of MS Clippy).
2. A program to make Google text ads that people will click on.
3. As #2, but for spam.
4. A program to play the stock market or otherwise maximize some numerical measure of profit, perhaps working against/with other programs with the same purpose.
5. A program to make viral music videos from scratch (generating all images and music).
6. An artificial programmer.
7. A program to analyze huge amounts of data looking for 'threats to national security.'
8. Uploads.
It seems like #2-5 would have formally specified goals which in the long term could be satisfied without human beings, and which in the short term would require manipulating human beings to some degree. Learning manipulation need not arouse suspicion on the part of the AI's creators, since the AI would be trying to fulfill its intended purpose and might not yet have thought of alternatives.
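To make the "formally specified goal" point concrete, here is a minimal, purely hypothetical Python sketch (all names invented for illustration) of what the objective for #2 might look like. The specification mentions clicks and nothing else, so nothing in it distinguishes clicks obtained through honest relevance from clicks obtained through manipulation, and nothing in it requires that humans remain in the picture at all:

```python
# Hypothetical sketch of a formally specified objective for an
# ad-placement agent (names invented for illustration). The reward is
# the click count and nothing else: human wellbeing never appears in
# the specification, so a policy that gets clicks through manipulation
# scores exactly as well as one that gets them through honest relevance.

def reward(ads_served: list[dict]) -> int:
    """Total clicks obtained -- the only quantity the agent is told to maximize."""
    return sum(ad["clicks"] for ad in ads_served)

def choose_ad(candidates: list[dict]) -> dict:
    # The agent is indifferent to *how* the clicks are produced.
    return max(candidates, key=lambda ad: ad["expected_clicks"])
```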
It's worth noting that if AGI comes from something like Siri, it is likely to be friendly, since the marketplace will select for friendly agents. That is almost an argument against building a singleton AGI in an isolated lab: why throw away existing advances in friendliness?
The marketplace selects friendly-looking agents. Those friendly-looking agents not infrequently go on to mine your personal data and sell it to advertisers, or sell cars with known dangerous defects after calculating that the extra profit from cutting corners exceeds the likely losses from lawsuits by the families of the people killed, or persuade you to take out a mortgage you can't afford to repay, or sell you wine laced with poisonous chemicals that taste nice.
I don't find that process so reliably friendly that I feel good about having it create superintelligent agents.