Yeah. I think they make different kinds of errors. We use human judgment to say that AIs should not make errors that humans would not make. But we do not appreciate the times when they avoid errors that we would make.
Humans might be more reliable at driving cars than AIs overall. But human reliability is not a superset of AI reliability. It's more like an intersection. If you look at car crashes caused by AIs, they might not look like something a human would cause. But if you look at car crashes caused by humans, they might also not look like something an AI would cause.