This is the first article in my Bah-Humbug Sequence, a.k.a. "Everything I Don't Like Is A Defect/Defect Equilibrium". Epistemic status: strong opinion weakly held, somewhat exaggerated for dramatic effect; I'm posting this here so that the ensuing discussion might help me clarify my position. Anyway, the time has now come for me to explain my overbearing attitude of cynicism towards all aspects of life. Why now, of all times? I hope to make that clear by the end.
You are asking me to believe a certain claim. There is a simple and easy thing you can do to prove its trustworthiness, and yet you have not done that. I am therefore entitled to [Weak Adversarial Argument] disregard your claim as of no evidentiary value / [Strong Adversarial Argument] believe the negation of your claim purely out of spite.
What's going on here? Are these valid arguments?
It may help to give some examples:
- The Hearsay Objection - In a court of law, if a witness X tries to testify that some other person Y said Z, in trying to establish the truth of Z, the opposing side may object. This objection takes the form: "The opposition has brought in X to prove Z by way of the fact that Y said Z. But X is not the most reliable witness they could have called, because they could have summoned Y instead. If they were genuinely seeking the truth as to Z, they would have done so; and yet we see that they did not. Therefore I insist that X's testimony be stricken from the record."
- The Cynical Cryptographer - My company's HR department emails me a link to an employee satisfaction survey. The email is quick to say "Your responses are anonymous", and yet I notice that the survey link contains a bunch of gobbledegook like `?id=2815ec7e931410a5fb358588ee70ad8b`. I think to myself: If this actually is anonymous, and not a sham to see which employees have attitude problems and should be laid off first, the HR department could have set up a Chaumian blind signature protocol to provably guarantee that my response cannot be linked to my name. But they didn't, and so I conclude that this survey is a sham, and I won't fill it out.
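(For concreteness, here's what that would even look like. A Chaumian blind signature lets a signer authorize a token without ever seeing it, so a survey response bearing the token can't be linked back to the employee who requested it. Below is a toy RSA-based sketch of the idea - insecure parameter sizes, no padding or hashing, purely illustrative, and not anything an actual HR department ships:)

```python
import math
import random

from fractions import Fraction  # not needed here; stdlib only below

# Toy RSA key for the signer (the "HR department"). Real keys are 2048+ bits.
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)  # signer's private exponent (Python 3.8+ modular inverse)

# The employee's survey token. The signer must never learn this value.
m = 424242

# Blind: multiply by r^e so the signer sees only gibberish.
while True:
    r = random.randrange(2, n)
    if math.gcd(r, n) == 1:  # r must be invertible mod n
        break
blinded = (m * pow(r, e, n)) % n

# The signer signs the blinded value without learning m.
blind_sig = pow(blinded, d, n)

# Unblind: divide out r to recover a valid signature on m itself,
# since (m * r^e)^d = m^d * r (mod n).
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify the signature, yet the signer can't match it to the
# blinded value it actually signed.
assert pow(sig, e, n) == m
```

The point of the protocol is the last step: the HR department's signature proves "this token belongs to a real employee" while the department has no way to match the token to the signing session that produced it.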
So, again, are these valid arguments? From a Bayesian perspective, not really:
- X saying that Y said Z is not literally zero evidence of Z. If there is any chance >0 that X and Y are honest, then I must update at least somewhat towards the truth of Z upon hearing X's testimony.
- I'm pretty sure they don't teach cryptography in business school. An honest HR department and a dishonest one have approximately equal likelihood (i.e. ε) of knowing what a "Chaumian blind signature" is and actually implementing it. Therefore, by Bayes' theorem, etc.
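(To put illustrative numbers on the first point - every probability below is an assumption made up for the example, not anything from the court case: as long as the report "Y said Z" is even slightly more likely in worlds where Z is true, the posterior must move toward Z.)

```python
from fractions import Fraction as F

# Assumed numbers for a barely-informative hearsay report.
p_Z = F(1, 2)                  # prior on Z
p_report_if_Z = F(6, 10)       # chance X reports "Y said Z" when Z is true
p_report_if_notZ = F(5, 10)    # chance of the same report when Z is false

# Bayes' theorem: posterior on Z given the report.
posterior = (p_report_if_Z * p_Z) / (
    p_report_if_Z * p_Z + p_report_if_notZ * (1 - p_Z)
)

print(posterior)  # 6/11: a small but nonzero update toward Z
```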
To steelman the Adversarial Argument, we should understand it not as an ordinary passive attempt to "rationally" form an accurate world-model, but rather as a sort of acausal negotiation tactic, akin to one-boxing on Newcomb's Problem. By adopting it, we hope to "influence" the behavior of adversaries (i.e. people who want to convince us of something but don't share our interests) towards providing stronger evidence, and away from trying to deceive us.
Or, to put it another way, the Adversarial Argument may not be valid in general, but by proclaiming it loudly and often, we can make it valid (at least in certain contexts) and thus make distinguishing truth and falsehood easier. Because the Hearsay Objection is enforced in court, lawyers who want to prove Z will either introduce direct witnesses or drop the claim altogether. And perhaps (we can dream!) if the Cynical Cryptographer argument catches on, honest HR departments will find themselves compelled to add Chaumian blind signatures to their surveys in order to get any responses, making the sham surveys easy to spot.
(Aside: Even under this formulation, we might accept the Weak Adversarial Argument but reject the Strong Adversarial Argument - by adopting a rule that I'll believe the opposite of what an untrustworthy-seeming person says, I'm now setting myself up to be deceived into believing P by a clever adversary who asserts ¬P in a deliberately sleazy way - whereupon I'll congratulate myself for seeing through the trick! Is there any way around this?)
Now, returning to the template above, the premise that "there is a simple and easy thing you can do to prove its trustworthiness" is doing a lot of work. Your adversary will always contend that the thing you want them to do (calling witness Y, adding Chaumian signatures, etc.) is too difficult and costly to reasonably expect of them. This may or may not be true, but someone who's trying to deceive you will claim such regardless of its truth, hoping that they can "blend in" among the honest ones.
At that point, the situation reduces to a contest of wills over who gets to grab how much of the surplus value from our interaction. What is my trust worth to you? How much personal cost will you accept in order to gain it?
We on LessWrong - at least, those who wish to communicate the ideas we discuss here with people who don't already agree - should be aware of this dynamic. There may have been a time in history when charismatic authority or essays full of big words were enough to win people over, but that is far from our present reality. In our time, propaganda and misinformation are well-honed arts. People are "accustomed to a haze of plausible-sounding arguments" and are rightly skeptical of all of them. Why should they trust the ideas on LessWrong, of all things? If we think gaining their trust is important and valuable, how much personal cost are we willing to accept to that end?
Or, backing up further: Why should you trust what you read here?
I disagree with the hearsay conclusion that you should update toward the truth of Z.
My first problem is that it's a straw objection. The actual objection is that while X can be further questioned to inspect in detail whether their testimony is credible, Y cannot. This immensely weakens the link between X's testimony and the truth of Z, and admitting such evidence would open up bad incentives outside the courtroom as well.
The next problem is that considering the chance that X and Y are truthful is only part of a Bayesian update procedure. If you have a strong prior that Y's testimony is reliable but not that X's is[1], you should update away from the truth of Z. If both X and Y were correct in their statements, then Y would be a much stronger witness and should have been called by the lawyer. Now you have evidence that Y's testimony would have harmed the case for Z. It is straightforward but tedious to work through a Bayesian update for this. For example: suppose priors are P(X's testimony is truthful) = 1/2, P(Y made a true statement about Z) = 9/10, and P(Z) = 1/2, all independent of each other. Let E be the event "the lawyer only calls X to give the testimony that Y said Z". This event is incompatible with XYZ, since Y should have been called. It is also incompatible with XYZ', XY'Z, and X'YZ, since in these cases X would not testify that Y said Z (where primes ' are used to indicate negation). So
P(ZE) = P(X'Y'ZE) = P(E | X'Y'Z) P(X'Y'Z) < P(X'Y'Z) = 1/40,
P(Z'E) = P(XY'Z'E) + P(X'YZ'E) + P(X'Y'Z'E) > P(E|X'YZ') P(X'YZ') = P(E|X'YZ') 9/40.
If P(E|X'YZ') >= 1/9, then P(Z'|E) > P(Z|E) and you should update away from Z being true. That is, suppose Z was actually false, Y correctly said that it was false, and X is not truthful in their testimony. What is the probability that the lawyer only calls X to testify that Y said Z? It seems to me quite a lot greater than 1/9. The lawyer has an incentive to call X, who will testify in support of Z for their client. They will probably attempt to contact Y, but Y will very likely not testify in support of Z. There are ways E could fail to occur in this world, contributing to P(E'|X'YZ'), but they seem much less likely.
So in this scenario you should update against Z.
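(The arithmetic above can be checked mechanically. This sketch enumerates the eight (X, Y, Z) worlds under the priors from the example, zeroes out the four worlds the argument rules out as incompatible with E, and plugs in an illustrative P(E | world) = 1/3 for the rest - an assumed value; all that matters for the direction of the update is that it is at least 1/9 in the X'YZ' world:)

```python
from fractions import Fraction as F

# Priors from the example: P(X truthful) = 1/2, P(Y spoke truly about Z) = 9/10,
# P(Z) = 1/2, all independent.
pX, pY, pZ = F(1, 2), F(9, 10), F(1, 2)

def prior(x, y, z):
    """Joint prior of a world; x, y, z are 1 (true) or 0 (false/primed)."""
    return ((pX if x else 1 - pX)
            * (pY if y else 1 - pY)
            * (pZ if z else 1 - pZ))

# E = "the lawyer only calls X to testify that Y said Z".
# Per the argument, E is impossible in worlds XYZ, XYZ', XY'Z, and X'YZ.
IMPOSSIBLE = {(1, 1, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1)}

def p_E_given(x, y, z):
    if (x, y, z) in IMPOSSIBLE:
        return F(0)
    return F(1, 3)  # illustrative; only >= 1/9 for the X'YZ' world matters

# P(Z and E) and P(Z' and E), summing over the X, Y worlds.
p_ZE = sum(prior(x, y, 1) * p_E_given(x, y, 1) for x in (0, 1) for y in (0, 1))
p_notZE = sum(prior(x, y, 0) * p_E_given(x, y, 0) for x in (0, 1) for y in (0, 1))

posterior_Z = p_ZE / (p_ZE + p_notZE)
print(posterior_Z)  # 1/12: observing E pushes you away from Z
```

With these numbers, only the X'Y'Z world supports Z given E (prior mass 1/40), while three worlds support Z' (total prior mass 11/40), which is exactly why the update goes against Z.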
It turns out that this part is not necessary. Almost all of the evidential weight comes from the credibility of Y.
This calculation just used the fact that Y would have been able to give stronger testimony than X, and that lawyers have incentives to present a strong case for their client where possible. In this scenario, the fact that Y was not called is evidence that Y's testimony would have weakened the case for Z.
The actual objection against hearsay has nothing to do with this calculation at all, as I mentioned in my comment.
You can apply it in ordinary conversation too (to the extent that you apply Bayesian updates in ordinary conversation at all). It's just that the update is stronger when the equivalent of P(E|XYZ) is more unlikely, and in ordinary conversation it may not be very unlikely, resulting in a weaker update.