What do people see as the plausible ways for AGI to come into existence, in the absence of smart people specifically working on AI safety?
These are the ones that occur to me, in no precise order:
1. An improved version of Siri (itself an improved version of MS Clippy).
2. A program to make Google text ads that people will click on.
3. As #2, but for spam.
4. A program to play the stock market or otherwise maximize some numerical measure of profit, perhaps working against/with other programs with the same purpose.
5. A program to make viral music videos from scratch (generating all images and music).
6. An artificial programmer.
7. A program to analyze huge amounts of data looking for 'threats to national security.'
8. Uploads.
It seems like #2-5 would have formally specified goals that in the long term could be satisfied without human beings, and in the short term would require manipulating human beings to some degree. Learning manipulation need not arouse suspicion on the part of the AI's creators, since the AI would be trying to fulfill its intended purpose and might not yet have thought of alternatives.
I have no scientific basis for this, only science-fiction ideas of how it could be done. What I am thinking of is two or more people communicating without speech, writing, gesture, eye contact, or other conventional channels. Instead, a thought in one person's body is shared / perceived in another person's body. I think of a red fire truck, and either you know I'm thinking of a red fire truck or you also think of a red fire truck, by some human-created, non-conventional means. I can only guess it would be partly direct wiring between brains, partly sensors that detect and transmit / reproduce chemical and electrical changes in brains. I know that some limited brain monitoring and brain stimulation is possible now, but I make no claim that a full brain-to-brain dialogue can ever happen. I'd like it to, and maybe it will, but I do not claim to know.
If there is a machine that determines that a person is thinking of a red fire truck and then stimulates the neurons in the brain of another person, that's not direct. The machine is in the middle.
The machine needs an internal language in which it can model "red fire truck", be able to recognize that concept in Alice by looking at her neuron firing patterns, and then have a model of what pattern of neuron firing would likely cause something like a "red fire truck" to be perceived by the other person.
Given the translation issues of those two changes of representation system, I don't see why I would call the process "direct".
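To make the two-translation point concrete, here is a minimal sketch of that "machine in the middle" pipeline. Everything here is a toy assumption: the pattern encodings, the lookup tables, and the function names are all invented for illustration; no real neural decoding method is implied.

```python
# Toy model of the pipeline: sender brain -> internal concept -> receiver brain.
# The firing-pattern encodings below are made up purely for illustration.

def decode_concept(sender_firing_pattern):
    """Translation 1: map a recorded firing pattern to the machine's internal concept."""
    # Assumed lookup table; in reality this would be some learned statistical model.
    known_patterns = {("v1", "color:red", "object:truck"): "red fire truck"}
    return known_patterns.get(tuple(sender_firing_pattern))

def encode_concept(concept, receiver_model):
    """Translation 2: map an internal concept to a stimulation pattern for the receiver."""
    # A per-person model is needed, since firing patterns need not match across brains.
    return receiver_model.get(concept)

# Hypothetical model of the receiver's brain.
bob_model = {"red fire truck": ("stimulate:v1", "color:red", "object:truck")}

concept = decode_concept(["v1", "color:red", "object:truck"])
stimulation = encode_concept(concept, bob_model)
```

The point of the sketch is that the thought crosses two representation boundaries (Alice's neural code to the machine's internal language, then the internal language to Bob's neural code), which is why the process is mediated rather than direct.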