In particular, the AI might be able to succeed at this.
It seems to me possible that the AI might come up with even more 'insane' ideas that have even less apparent connection to what it was programmed to do.
Since my knowledge of AI is, for practical purposes, zilch: for a large class of hypothetical future AIs, if perhaps not a full Friendly AI, wouldn't a simple solution be to program the AI to model a specified human individual, determine that individual's desires, and implement them?
it follows its programming, not its knowledge about what its programming is "meant" to be (unless we've successfully programmed in "do what I mean", which is basically the whole of the challenge).
Not necessarily. The instructions to a fully-reflective AI could be more along the lines of "learn what I mean, then do that" or "do what I asked within the constraints of my own unstated principles." The AI would then have an imperative to build a more accurate internal model of your psychology in order to predict the implicit constraints applie...
A stub on a point that's come up recently.
If I owned a paperclip factory, and casually told my foreman to improve efficiency while I'm away, and he planned a takeover of the country, aiming to devote its entire economy to paperclip manufacturing (apart from the armament factories he needed to invade neighbouring countries and steal their iron mines)... then I'd conclude that my foreman was an idiot (or being wilfully idiotic). He obviously had no idea what I meant. And if he misunderstood me so egregiously, he's certainly not a threat: he's unlikely to reason his way out of a paper bag, let alone to any position of power.
If I owned a paperclip factory, and casually programmed my superintelligent AI to improve efficiency while I'm away, and it planned a takeover of the country... then I can't conclude that the AI is an idiot. It is following its programming. Unlike a human that behaved the same way, it probably knows exactly what I meant to program in. It just doesn't care: it follows its programming, not its knowledge about what its programming is "meant" to be (unless we've successfully programmed in "do what I mean", which is basically the whole of the challenge). We can't therefore conclude that it's incompetent, unable to understand human reasoning, or likely to fail.
We can't reason by analogy with humans. When AIs behave like idiot savants with respect to their motivations, we can't deduce that they're idiots.