Human beings are all crazy. And if you tap on our brains just a little, we get so crazy that even other humans notice. Anosognosics are one of my favorite examples of this: people with right-hemisphere damage whose left arms become paralyzed, and who deny that their left arms are paralyzed, coming up with excuses whenever they're asked why they can't move their arms.
A truly wonderful form of brain damage - it disables your ability to notice or accept the brain damage. If you're told outright that your arm is paralyzed, you'll deny it. All the marvelous excuse-generating rationalization faculties of the brain will be mobilized to mask the damage from your own sight. As Yvain summarized:
After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".
I find it disturbing that the brain has such a simple macro for absolute denial that it can be invoked as a side effect of paralysis. That a single whack on the brain can both disable a left-side motor function, and disable our ability to recognize or accept the disability. Other forms of brain damage also seem to both cause insanity and disallow recognition of that insanity - for example, when people insist that their friends have been replaced by exact duplicates after damage to face-recognizing areas.
And it really makes you wonder...
...what if we all have some form of brain damage in common, so that none of us notice some simple and obvious fact? As blatant, perhaps, as our left arms being paralyzed? Every time this fact intrudes into our universe, we come up with some ridiculous excuse to dismiss it - as ridiculous as "It's my daughter's arm" - only there's no sane doctor watching to pursue the argument any further. (Would we all come up with the same excuse?)
If the "absolute denial macro" is that simple, and invoked that easily...
Now, suppose you built an AI. You wrote the source code yourself, and so far as you can tell by inspecting the AI's thought processes, it has no equivalent of the "absolute denial macro" - there's no single point of damage that could inflict on it the equivalent of anosognosia. It has redundant, differently-architected systems, defending in depth against cognitive errors. If one system makes a mistake, two others will catch it. The AI has no functionality at all for deliberate rationalization, let alone the doublethink and denial-of-denial that characterize anosognosics or humans thinking about politics. Inspecting the AI's thought processes seems to show that, in accordance with your design, the AI has no intention to deceive you, and an explicit goal of telling you the truth. And in your experience so far, the AI has been, inhumanly, well-calibrated; the AI has assigned 99% certainty on a couple of hundred occasions, and been wrong exactly twice that you know of.
Arguably, you now have far better reason to trust what the AI says to you, than to trust your own thoughts.
And now the AI tells you that it's 99.9% sure - having seen it with its own cameras, and confirmed from a hundred other sources - even though (it thinks) the human brain is built to invoke the absolute denial macro on it - that...
...what?
What's the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?
(Some of my own answers appear in the comments.)
Well, kidding aside, your argument, taken from Pearl, seems elegant. I'll have to read the book, however, before I feel entitled to an opinion on it; I haven't grokked the idea yet, only a faint impression of it and a sense that it sounds healthy.
So at this point, I only have some of my own ideas and intuitions about the problem, and haven't searched for the answers yet.
Some considerations, though:
Our idea of causality is based upon a human intuition. Could it be just as wrong as vitalism, time, little billiard balls bumping around, or the still-confused problem of consciousness? That's what would bug me if I had no good technical explanation - one provably unbiased by my prior intuitive belief about causality (otherwise there's always the risk that I've just been rationalizing my intuition).
Every time we observe "causality", we really only observe correlations, and then deduce that there is something more behind them. But is that a simple explanation? Could we devise a simpler, consistent explanation to account for our observation of correlations? As in, doing away with causality entirely? Or at the very least, redefining causality as something that doesn't quite correspond to our folk notion of it?
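The gap between observed correlation and causation can be made concrete with a toy simulation (my own sketch, not anything from Pearl's book; the variables and probabilities are made up for illustration). Here a hidden confounder Z drives both A and B, so A and B correlate when we merely observe - yet forcing A to a value, in the spirit of Pearl's do-operator, leaves B unchanged, revealing that A does not cause B:

```python
import random

random.seed(0)

def observe(n=100_000):
    # Hypothetical structure: confounder Z causes both A and B;
    # A does not cause B. Each child copies Z with probability 0.9.
    data = []
    for _ in range(n):
        z = random.random() < 0.5
        a = z if random.random() < 0.9 else not z
        b = z if random.random() < 0.9 else not z
        data.append((a, b))
    return data

def intervene(n=100_000, a_value=True):
    # "do(A = a)": set A by fiat, severing its link to Z.
    data = []
    for _ in range(n):
        z = random.random() < 0.5
        a = a_value
        b = z if random.random() < 0.9 else not z
        data.append((a, b))
    return data

def p_b_given_a(data):
    # Estimate P(B | A = True) from the samples.
    b_when_a = [b for (a, b) in data if a]
    return sum(b_when_a) / len(b_when_a)

print(p_b_given_a(observe()))    # ~0.82: A and B are correlated
print(p_b_given_a(intervene()))  # ~0.50: forcing A leaves B at its base rate
```

Analytically, P(B | A) = 0.5 · 0.9² + 0.5 · 0.1², normalized by P(A) = 0.5, giving 0.82 under observation, while under intervention B reverts to its unconditional 0.5 - which is exactly the extra content that "causality" carries beyond correlation.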
Roughly, my intuition, when I hear the word causality, is something along the lines of:
"Take event A and event B, where those events are very small, such that they aren't made of interconnected parts themselves - they are the parts, building blocks that can be used in bigger, complex systems. Place event A anywhere in the universe and at any time; then, provided the rules of physics are the same each time we do that, and nothing interferes, event B will always occur, with probability 1, independently of my observing it or not." Okay, so could (and should?) we say that causality is when a prior event implies a probability of 1 for a certain posterior event to occur? Or is it then not probability 1, just an arbitrarily high probability?
In the latter case, with probability less than 1, that really violates my folk notion of causality - I don't see what's causal about a thing that can capriciously choose to happen or not, even when the conditions are the same.
In the former case, I can see how that would be a very new thing - probability 1 that one event implies another will occur? What better, firmer foundation to build a universe upon? It feels really very comfortable and convenient - all too comfortable, in fact.
Basically, neither of those possibilities strikes me as obviously right. For those reasons and more, the idea I have of causality is confused at best - and yet I'd say it is not an unsophisticated or unconsidered one as it stands. Which makes me wonder how people who have put less thought into it (probably a lot of people) can deservedly feel any more comfortable saying it exists without a second thought (almost everyone), even though they lack any good explanation for it (which is a rare thing to have), such as perhaps the one given by Pearl.