This could work with some kind of human-machine connection as well. I remember reading a paper in computational neuroscience where they hooked an eel's brain up to a simple machine and created a loop of machine input - eel input - eel output - machine input. The eel received perceptual information from the robot and then sent commands back to make the robot move.
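To make the loop structure concrete, here is a minimal sketch of that kind of closed sensorimotor loop. Everything in it is hypothetical: the class and function names (FakeRobot, FakePreparation, encode_stimulus, decode_motor_command, closed_loop) are placeholders I made up, not the interfaces from the actual experiment.

```python
import random
import time

class FakeRobot:
    """Stand-in for the robot: one light sensor and one motor channel."""
    def read_sensors(self) -> float:
        return random.random()          # pretend light reading in [0, 1]
    def drive_motors(self, command: float) -> None:
        print(f"motor command: {command:+.2f}")

class FakePreparation:
    """Stand-in for the neural preparation being stimulated and recorded."""
    def __init__(self) -> None:
        self._last = 0.0
    def stimulate(self, intensity: float) -> None:
        self._last = intensity
    def record_activity(self) -> float:
        return 0.8 * self._last + 0.1   # toy response to stimulation

def encode_stimulus(reading: float) -> float:
    """Map a sensor reading to a stimulation intensity in a safe range."""
    return max(0.0, min(1.0, reading))

def decode_motor_command(firing_rate: float) -> float:
    """Map recorded activity to a motor command (e.g. turn left vs. right)."""
    return 2.0 * firing_rate - 1.0

def closed_loop(robot, prep, dt: float = 0.05, steps: int = 20) -> None:
    """Machine output -> neural input -> neural output -> machine input, repeated."""
    for _ in range(steps):
        prep.stimulate(encode_stimulus(robot.read_sensors()))
        robot.drive_motors(decode_motor_command(prep.record_activity()))
        time.sleep(dt)

closed_loop(FakeRobot(), FakePreparation())
```

The point of the sketch is just that the "brain" sits inside an ordinary control loop: the machine supplies its percepts and executes its outputs, and nothing in the loop itself cares whether the controller is biological or artificial.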
What do people see as the plausible ways for AGI to come into existence, in the absence of smart people specifically working on AI safety?
These are the ones that occur to me, in no particular order:
It seems like #2-5 would have formally specified goals that in the long term could be satisfied without human beings, and that in the short term would require manipulating human beings to some degree. Learning manipulation need not arouse suspicion on the part of the AI's creators, since the AI would be trying to fulfill its intended purpose and might not yet have thought of alternatives.