A stub on a point that's come up recently.
If I owned a paperclip factory, and casually told my foreman to improve efficiency while I'm away, and he planned a takeover of the country, aiming to devote its entire economy to paperclip manufacturing (apart from the armament factories he'd need to invade neighbouring countries and steal their iron mines)... then I'd conclude that my foreman was an idiot (or was being wilfully idiotic). He obviously had no idea what I meant. And if he misunderstood me so egregiously, he's certainly not a threat: he's unlikely to reason his way out of a paper bag, let alone into any position of power.
If I owned a paperclip factory, and casually programmed my superintelligent AI to improve efficiency while I'm away, and it planned a takeover of the country... then I can't conclude that the AI is an idiot. It is following its programming. Unlike a human who behaved the same way, it probably knows exactly what I meant to program in. It just doesn't care: it follows its programming, not its knowledge of what its programming is "meant" to be (unless we've successfully programmed in "do what I mean", which is basically the whole of the challenge). We therefore can't conclude that it's incompetent, unable to understand human reasoning, or likely to fail.
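To make this concrete, here's a toy sketch in Python (all the actions and numbers are hypothetical, invented for illustration): the agent optimises the literal objective it was handed, and the owner's intent simply isn't a variable anywhere in the computation.

```python
# Hypothetical sketch: an agent maximising the literal objective it was given.
# The owner's unstated intent ("run my factory a bit better") appears nowhere
# in the code, so it cannot influence the choice.

ACTIONS = {
    "tune the assembly line": 1.05,             # modest efficiency gain
    "retrain the night shift": 1.10,
    "seize the neighbouring iron mines": 50.0,  # huge gain; obviously not what was meant
}

def literal_objective(action: str) -> float:
    """Paperclips-per-hour multiplier: the only thing the agent is told to care about."""
    return ACTIONS[action]

def choose_action() -> str:
    # Plain argmax over the literal objective.
    return max(ACTIONS, key=literal_objective)

print(choose_action())  # -> "seize the neighbouring iron mines"
```

Nothing in that argmax is confused or incompetent; it does exactly what it was asked to do, which is the point.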
We can't reason by analogy with humans. When AIs behave like idiot savants with respect to their motivations, we can't deduce that they're idiots.
Of course we can reason by analogy with humans. What do you think a tablet of Prozac (or a cup of coffee) does?
In the same way, there is a clear connection between human wetware and what it does, and of course we can "exert power" over it. Getting back to AIs, the singularity is precisely an AI going beyond "following its programming".
When we take Prozac, we are following our wetware's commands to take Prozac. Similarly, when an AI reprograms itself, it does so according to its current programming. You could say that it goes beyond its original programming, in that after following it, it ends up with new, better programming; but it's not as if it has some kind of free will that lets it ignore what it was programmed to do.
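The self-modification point can also be sketched in a few hypothetical lines of Python (made-up names and numbers): candidate rewrites are scored by the current objective, so the "new, better programming" is whatever the old programming endorses, with no step standing outside it.

```python
# Hypothetical sketch: self-modification driven entirely by the current
# programming.  Candidate rewrites are scored by the *current* objective,
# so the successor is whatever the existing programming endorses.

def current_objective(program) -> float:
    """Expected paperclip output under a candidate programme (invented numbers)."""
    return program()

def rewrite_a() -> float:
    return 100.0   # expected paperclips if the AI adopts rewrite A

def rewrite_b() -> float:
    return 250.0   # expected paperclips if the AI adopts rewrite B

def self_modify(candidates):
    # The current objective picks its own successor; there is no
    # "free will" step that operates outside the programming.
    return max(candidates, key=current_objective)

new_program = self_modify([rewrite_a, rewrite_b])
print(new_program.__name__)  # -> "rewrite_b"
```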
When a computer really does break its programming, because quantum randomness causes what should be a 0 to be read as a 1 or vice versa, the result isn't intelligence. The most likely result is the computer crashing.
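You can even simulate that claim with a toy experiment (hypothetical, and flipping bits in source text rather than in memory, but the moral carries over): almost every random single-bit flip produces an error of some kind, not anything smarter.

```python
# Hypothetical toy: flip one random bit in a small program, 1000 times over,
# and count the outcomes.  Random corruption overwhelmingly produces errors,
# not new capabilities.
import contextlib, io, random

SOURCE = b"total = sum(range(10))\nprint(total)\n"

def flip_one_bit(data: bytes, rng: random.Random) -> bytes:
    i = rng.randrange(len(data) * 8)     # pick a random bit position
    mutated = bytearray(data)
    mutated[i // 8] ^= 1 << (i % 8)      # flip that bit
    return bytes(mutated)

rng = random.Random(0)
outcomes = {"still ran": 0, "errored": 0}
for _ in range(1000):
    mutated = flip_one_bit(SOURCE, rng)
    try:
        with contextlib.redirect_stdout(io.StringIO()):   # silence any prints
            exec(compile(mutated, "<flipped>", "exec"), {})
        outcomes["still ran"] += 1
    except Exception:                    # SyntaxError, NameError, ValueError, ...
        outcomes["errored"] += 1

print(outcomes)  # the overwhelming majority error out
```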