This isn't a progress-vs-luddite debate: the fact that the human element of an automation+overseer system performs worse than a human who is entirely in charge is not a general argument against automation (at most, it is an argument against replacing a human with an automation+overseer model when the expected gains are small).
The fact that humans can exercise other skills (pilots apparently do a great deal while the autopilot is engaged) does not negate the fact that they lose the skills needed to take over from the automation.
The autopilot problem seems to arise in the transition phase between the two pilots (the human and the machine). If the human does the task alone, he remains sufficiently skilled to handle emergency situations. Once the automation is powerful enough to handle everything but the situations that even a fully-trained human wouldn't know how to handle, the deskilling of the human simply frees him to focus on more important tasks.
To take the example of self-driving cars: the first iterations might not know how to deal with, say, a differently-configured ...
Consider a mixed system, in which an automated system is paired with a human overseer. The automated system handles most of the routine tasks, while the overseer is tasked with looking out for errors and taking over in extreme or unpredictable circumstances. Examples of this could be autopilots, cruise control, GPS direction finding, high-frequency trading – in fact nearly every automated system has this feature, because they nearly all rely on humans "keeping an eye on things".
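The handoff structure described above can be sketched in code. This is a toy illustration only (the function names and the task dictionary are invented for the example, not drawn from any real system): the automated handler covers routine cases, and anything it cannot classify is escalated to the human overseer.

```python
def automated_handler(task):
    """Handle routine tasks; return None to signal 'cannot handle'."""
    routine = {"cruise": "maintain speed", "navigate": "follow route"}
    return routine.get(task)  # None for anything unfamiliar


def overseer_handler(task):
    """Fallback: the human only ever sees cases the automation rejects."""
    return f"human takes over: {task}"


def run(task):
    result = automated_handler(task)
    if result is None:
        # By construction, the overseer's workload consists exclusively
        # of emergencies -- the cases hardest to stay skilled for.
        return overseer_handler(task)
    return result
```

Note the structural point the sketch makes concrete: the human branch is only ever exercised on unfamiliar inputs, so the overseer gets no routine practice at all.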
But often the human component doesn't perform as well as it should – doesn't perform as well as it did before part of the system was automated. Cruise control can impair driver performance, leading to more accidents. GPS errors can take people far further off course than following maps ever did. When the autopilot fails, pilots can crash their planes in quite conventional conditions. Traders don't understand why their algorithms misbehave, or how to stop them.
There seem to be three factors at work here: the overseer's skills atrophy through disuse; failures are rare, so they arrive as unexpected emergencies; and because the automation handles the task, the overseer never has to truly understand it.
So when the automation fails, the overseer is generally dumped into an emergency situation whose nature they must deduce, and, using skills that have atrophied, they must take over a task that has never failed before and that they have never had to truly understand.
And they'll typically get blamed for getting it wrong.
Similarly, if we design AI control mechanisms that rely on the presence of a human in the loop (such as tool AIs, Oracle AIs and, to a lesser extent, reduced-impact AIs), we'll need to take the autopilot problem into account: design the overseer's role so as not to deskill them, and don't count on them being free of error.