Consider a mixed system, in which an automated system is paired with a human overseer. The automated system handles most of the routine tasks, while the overseer is tasked with looking out for errors and taking over in extreme or unpredictable circumstances. Examples of this could be autopilots, cruise control, GPS direction finding, high-frequency trading – in fact nearly every automated system has this feature, because they nearly all rely on humans "keeping an eye on things".

But often the human component doesn't perform as well as it should do – doesn't perform as well as it did before part of the system was automated. Cruise control can impair driver performance, leading to more accidents. GPS errors can take people far more off course than following maps did. When the autopilot fails, pilots can crash their planes in rather conventional conditions. Traders don't understand why their algorithms misbehave, or how to stop this.

There seem to be three factors at work here:

  1. Firstly, if the automation performs flawlessly, the overseers will become complacent, blindly trusting the instruments and failing to perform basic sanity checks. They will have far less procedural understanding of what's actually going on, since they have no opportunity to exercise their knowledge.
  2. This goes along with a general deskilling of the overseer. When the autopilot controls the plane for most of its trip, pilots get far less hands-on experience of actually flying the plane. Paradoxically, less reliable automation can help with both of these problems: if the system fails 10% of the time, the overseer will watch it closely and come to understand it.
  3. And when the automation does fail, the overseer will typically lack situational awareness of what's going on. All they know is that something extraordinary has happened, and they may have the (possibly flawed) readings of various instruments to guide them – but they won't have a good feel for what happened to put them in that situation.

So, when the automation fails, the overseer is dumped into an emergency situation whose nature they have to deduce, and then, with skills that have atrophied, they have to take over the task of an automated system that has never failed before and that they have never had to truly understand.

And they'll typically get blamed for getting it wrong.

Similarly, if we design AI control mechanisms that rely on the presence of a human in the loop (such as tool AIs, Oracle AIs, and, to a lesser extent, reduced impact AIs), we'll need to take the autopilot problem into account, and design the role of the overseer so as not to deskill them, and not count on them being free of error.


The complacency and deskilling are a feature, not a bug. The less I have to learn to get from place to place, the more attention I have for other things that can't be automated (yet).

Attributing a woman driving 900 miles to Croatia, when she intended to drive 38 miles within Belgium, to a GPS failure is naive. Most likely she put the wrong address in, possibly with the help of autocomplete, possibly not. But crazy, drug-addled and/or senile people have been winding up hundreds of miles from where they thought they were for a long time before there were any GPS satellites in orbit. Actual GPS errors in my experience take you to a street behind your intended destination, or direct you to streets that are closed. And these errors fall off quickly as the expert system becomes, well, more expert. The GPS navigation app errors tend to be small, bringing you near where you need to go but then requiring some intelligence to realize how to fix the error the system has made. Meanwhile, I drove two hours out of my way on vacation in Florida, an error I could not possibly have made to that extent if I had had the GPS navigation systems I now use all the time.

Automated cars WILL be blamed for all sorts of problems, including deaths. The unwashed innumerates will tell detailed stories about how they went wrong and be unmoved by the overall statistics of a system which will cause FEWER deaths per mile driven than do humans. Some of those deaths will occur in ways that after-the-fact innumerates, and other elements of the infotainment industry known as democracy, will tell wonderful anecdotes about. There may even be congressional hearings and court cases. The idea that a few deaths that MIGHT have been avoided under the old regime are literally a small price to pay for an overall lower death rate will be too complex a concept to get legs in the infotainment industry.

But in the long run, the nerds will win, and economically useful automation will be broadly adopted. We don't know how to grow our own food or build our own houses anymore and we've gotten over that. We'll get over this too and the innumerate infotainment industry known as democracy will move on to its next stupidity.

This isn't a progress-vs-Luddite debate: the fact that the human element of an automation+overseer system performs worse than a human who is entirely in charge is not a general argument against automation (at most, it might be an argument against replacing a human with an automation+overseer model when the expected gains are small).

The fact that humans can exercise other skills (pilots apparently do a lot while the autopilot is engaged) does not negate the fact that they lose skills when it comes to taking over from the automation.

The autopilot problem seems to arise in the transition phase between the two pilots (the human and the machine). If the human alone does the task, he remains sufficiently skilled to handle emergency situations. Once the automation is powerful enough to handle all but the situations that even a fully-trained human wouldn't know how to handle, the deskilling of the human just lets him focus on more important tasks.

To take the example of self-driving cars: the first iterations might not know how to deal with, say, a differently-configured zone due to construction or some other hazard (correct me if I'm wrong, I don't know much about self-driving car AI). So it's important that the person in the driver's seat can take over; if the person is blind, or drunk, or has never ever operated a car before, we have a problem. But I can imagine that at some point self-driving cars will handle almost any situation better than a person.

And the risky areas are those where the transition period is very long.


I read your piece and replaced 'autopilot' with 'social structure' and it still works. When you use the autopilot of membership in a group, you get the same errors.

It seems like the curse of the gifted student is similar as well -- being naturally good enough at the first 90% of the education makes you miss out on developing the habits necessary for the last 10%.

This post reminds me of this essay, which I enjoyed, on the topic of automation and deskilling: http://www.macroresilience.com/2011/12/29/people-make-poor-monitors-for-computers/.


That was a good article! I also find it noteworthy that the successful example of humans recovering from a failure involved them extensively using checklists, particularly in reference to automation and deskilling in general.

Firstly, if the automation performs flawlessly, the overseers will become complacent, blindly trusting the instruments and failing to perform basic sanity checks. They will have far less procedural understanding of what's actually going on, since they have no opportunity to exercise their knowledge.

There's a related problem in manufacturing whose name I've forgotten, but basically: the less frequent defective parts are, the less likely human quality control people are to notice them, because their job is more boring and so they're less likely to be paying attention when a defective part does come along. (Conditioned on the part being defective, of course.)

Right, one of the original solutions, though rarely implemented, is to add a steady stream of defective parts to guarantee optimal human attention. These artificially defective parts are marked in a way that lets them be automatically separated and recycled later, should any slip by the human QA.
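
A minimal sketch of how such a scheme might look, in Python. Everything here is illustrative: the rates, the dictionary fields and the idea of a machine-readable "decoy" marker are assumptions, not a description of any real production line.

```python
import random

# Toy model of a QA line that injects marked decoy defects to keep the human
# inspector alert, then pulls the decoys back out automatically afterwards.

DECOY_RATE = 0.05        # assumed fraction of the stream that is a deliberate, marked defect
REAL_DEFECT_RATE = 0.001 # assumed rate of genuine defects

def make_stream(n):
    """Build a stream of parts; each records whether it is defective and
    whether it carries the (assumed machine-readable) decoy marker."""
    stream = []
    for i in range(n):
        if random.random() < DECOY_RATE:
            stream.append({"id": i, "defective": True, "decoy": True})
        else:
            stream.append({"id": i,
                           "defective": random.random() < REAL_DEFECT_RATE,
                           "decoy": False})
    return stream

def human_inspect(part, vigilance=0.9):
    """Stand-in for the inspector: catches a defect with probability
    `vigilance` (the thread's claim is that vigilance stays higher when
    defects are frequent enough to keep the task engaging)."""
    return part["defective"] and random.random() < vigilance

def run_line(n=100_000):
    shipped, recycled_decoys, rejected = [], [], []
    for part in make_stream(n):
        if human_inspect(part):
            rejected.append(part)
        elif part["decoy"]:
            # Decoys that slip past the inspector are separated automatically
            # by their marker, so they never reach a customer.
            recycled_decoys.append(part)
        else:
            shipped.append(part)
    escaped = sum(1 for p in shipped if p["defective"])
    print(f"shipped: {len(shipped)}, real defects escaped: {escaped}, "
          f"decoys auto-recycled after being missed: {len(recycled_decoys)}")

run_line()
```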


Wow. That's a really cool example of careful design, taking humans into account as well as technical issues.

Yeah, I was equally impressed when one of my instructors at the uni explained the concept, some decades ago, as an aside while teaching CPU design.

They apparently do this in airport x-rays - inject an image of a bag with a gun, to see if the observer reacts.

But apparently not for keeping pilots alert in flight... A "Fuel pressure drop in engine 3!" drill exercise would probably not, umm, fly.

There might be other ways - you could at least do it on simulators, or even on training flights (with no passengers).

Surely they already do that. The trick is not knowing whether an abnormal input is a drill or not, or at least not knowing when a drill might happen. All these issues have been solved in the military a long time ago.

Knowing when a drill might happen improves alertness during the drill period only. Drills do, however, develop and maintain the skills required to respond to a non-standard situation.

I've heard that in proof-reading, optimal performance is achieved when there are about 2 errors per page.

I've heard that when you play mouse-chasing-themed games with your cat, the maximal cat fun is achieved when there are between 1 and 2 successes for every 6 pounces.

The proof-reader's performance may be maximized, but the quality of the output isn't.

I would be surprised if there were fewer overall errors in the final product if it started at 2 per page, rather than, say, 1/4 per page.

This is also valid against the suggestion in the OP. Although humans will catch more errors if there are more to begin with, that doesn't mean there will be fewer failures overall.

As I mentioned in my other comment, if some of the errors are injected to keep the attention at the optimal level, and then removed post-QA, the other errors are removed with better efficiency. As an added benefit, you get an automated and reliable metric of how attentive the proof-reader is.
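
To make the "free metric" point concrete, here is a small sketch (the numbers and function names are invented for illustration): since the injected errors are known in advance, the fraction of them that the proof-reader catches measures their attentiveness and, if real errors are assumed to be caught at roughly the same rate, it also gives a rough estimate of how many real errors slipped through.

```python
def attentiveness(injected_ids, caught_ids):
    """Fraction of the deliberately injected errors that the reviewer flagged."""
    injected = set(injected_ids)
    return len(injected & set(caught_ids)) / len(injected)

def estimated_misses(real_errors_caught, catch_rate):
    """Assume real errors are caught at about the same rate as injected ones,
    and estimate how many real errors were present but missed."""
    if catch_rate == 0:
        return float("inf")
    return real_errors_caught / catch_rate - real_errors_caught

rate = attentiveness(injected_ids=[3, 17, 42, 88], caught_ids=[3, 42, 88, 105])
print(f"catch rate on injected errors: {rate:.0%}")                      # 75%
print(f"estimated real errors missed: {estimated_misses(9, rate):.1f}")  # ~3.0
```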


Only the cruise control link is an actual comparison of automation+overseer versus just humans. The others are examples of automation+overseer failing, but there are of course examples of just humans failing just as badly. Is there any further evidence of this phenomenon? In particular, is there evidence that the total success rate decreases as the success rate of the automation increases?

Well, if you're willing to extend automation to cover automatic pricing from a specific set of equations, then we have the recent financial crisis...

I wonder if it's possible to bring the success rate back up in QA conditions by requiring the identification of the candidate furthest from ideal within a given period, whether or not that is within tolerances. Of course, in some cases, that would completely negate the purpose of the automatic behavior.

Right, I don't understand what you're saying there. Can you develop it?

So you have a batch of things that need to pass muster. The failure mode presented above is that you'll get bored with just saying 'pass, pass, pass...'

The corrective proposed is to ask for the worst item, whether or not it passes, in addition to asking for rejects.

It would be something to think about while looking at a bunch of good ones, and would keep one in practice... if one tries. If you just fake it and no one can tell because they're all passes anyway, then it doesn't work.
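
As a toy illustration of what the proposed per-batch task amounts to (the tolerance, nominal value and measurements below are all invented): report the rejects as usual, but also single out the item furthest from spec, even when everything passes, so there is always a judgement to make on every batch.

```python
TOLERANCE = 0.5  # assumed maximum acceptable deviation from the nominal value

def review_batch(measurements, nominal=10.0):
    """Return the usual rejects plus the index of the worst item in the batch."""
    deviations = [abs(m - nominal) for m in measurements]
    rejects = [i for i, d in enumerate(deviations) if d > TOLERANCE]
    worst = max(range(len(deviations)), key=lambda i: deviations[i])
    return rejects, worst

rejects, worst = review_batch([10.1, 9.9, 10.4, 10.05])
print(f"rejects: {rejects}, worst item: {worst}")  # rejects: [], worst item: 2
```

The code is the easy part, of course; the question in the thread is whether a human asked to make that call on every batch actually stays engaged rather than faking it.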

It may also be useful to identify the best thing. The difference between the best and worst is probably a useful measure of quality control as well as ensuring the tests are general enough to detect good as well as bad.

If your process is good enough that this is a problem, then 'so good you can't tell it's not perfect' could well be the most common case. In any case, it's most important to concentrate the expertise around the border between OK and not-OK.

Interesting. May be applicable to some of the situations we're studying...

Just watch out that you don't end up picking something that isn't actually the worst and thinking you're still doing a good job.


The failure mode presented above is that you'll get bored with just saying 'pass, pass, pass...'

That looks like an ideal case for automation...

And then you miss the one in ten thousand that was no good.

If you are using humans to mass-test for a failure rate of 1/10,000, you are doing something wrong. Ship ten thousand units, let the end-users test them at the time of use/installation/storage, and ship replacement parts to the user who got a defective part. That way no one human gets bored with testing that part (though they might get bored with inspecting good parts in general).

Sounds great if failure is acceptable. I don't want my parachute manufacturer taking on that method, though.

Don't you demand that your parachute packer inspects it when he packs it? Especially given that more than zero parachutes will be damaged after manufacture but before first use.

I think that you're noticing that automation does not require that the overseer ever develop the skills required to perform the task manually; drivers don't have to learn how to maintain constant speed for hours on end, pilots don't have to develop the endurance to maintain altitude and heading. There is an element of skill atrophy, and of encouraging distractions, and the distractions are likely to result in worse immediate responses to the failure of automation; the skill (responding to emergencies) would have atrophied anyway.