The idea is autonomy.
Presumably there's a difference between some software we are willing to call an AI (superintelligent or not) and plain old regular software. The plain old regular software indeed just "follows its programming", but then you don't leave it to manage a factory while you go away, and its capability to take over neighbouring countries is... limited.
It really boils down to how you understand what an AI is. Under some understandings, the prime characteristic of an AI is precisely that it does NOT "follow its programming".
The AI follows its programming because the AI is its programming.
Presumably there's a difference between some software we are willing to call an AI (superintelligent or not) and plain old regular software.
The plain old regular software follows its programming, which details the object-level actions it takes to achieve a purpose that the software itself cannot model or understand.
An AI would follow its programming, which details meta-level actions: model and understand its situation, consider possible actions it could take and their consequences, and evaluate...
A stub on a point that's come up recently.
If I owned a paperclip factory, and casually told my foreman to improve efficiency while I'm away, and he planned a takeover of the country, aiming to devote its entire economy to paperclip manufacturing (apart from the armament factories he needed to invade neighbouring countries and steal their iron mines)... then I'd conclude that my foreman was an idiot (or being wilfully idiotic). He obviously had no idea what I meant. And if he misunderstood me so egregiously, he's certainly not a threat: he's unlikely to reason his way out of a paper bag, let alone to any position of power.
If I owned a paperclip factory, and casually programmed my superintelligent AI to improve efficiency while I'm away, and it planned a takeover of the country... then I can't conclude that the AI is an idiot. It is following its programming. Unlike a human who behaved the same way, it probably knows exactly what I meant to program in. It just doesn't care: it follows its programming, not its knowledge about what its programming is "meant" to be (unless we've successfully programmed in "do what I mean", which is basically the whole of the challenge). We therefore can't conclude that it's incompetent, unable to understand human reasoning, or likely to fail.
We can't reason by analogy with humans. When AIs behave like idiot savants with respect to their motivations, we can't deduce that they're idiots.