Presumably the result of a coproof is a cotheorem. Which I mention only because, as everybody knows, a comathematician is a device for turning cotheorems into ffee.
A piece of writing advice: even if Too Like the Lightning gave you the idea, the questions readers have are "what does this idea mean?", "what are some examples?", and "how can I use it?", not "where did the author come up with this idea?" Too Like the Lightning isn't illuminating on any of the former questions (i.e. you don't use it as a source of vivid examples), yet it takes up nearly half your post.
It's also a very short post. I think it is important to:
Beyond that, there's:
Of these, the first three seem worth including to me, and including the small spoiler warning forces the book-related stuff to be before the main point of the post.
I'm curious where you draw your writing knowledge from that seems to consider "source of inspiration" to be, at best, superfluous information? I can't say I've encountered such a guideline before. I suppose I could see an argument that such information doesn't belong in a particular type of writing (like formal writing or technical writing), but that would then require this piece to be the specified type of writing, which I anticipate it likely is not.
Personally, I enjoy hearing about people's sources of inspiration, because such a source might also be capable of providing inspiration to me. Thus, "Where did the author come up with this idea?" is certainly a question I could be said to have.
Given that, perhaps you are describing the questions you personally have, rather than those of all readers?
As suggested by @Mateusz Bagiński, it is tempting to suggest that the proper reading of a coproof of P should be a proof of the double negation of P, i.e. a coproof of P is a proof of ¬¬P.
The absence of evidence against P is weaker than a proof of ¬¬P, however. The former allows for new evidence to still appear, while the latter categorically rejects this possibility on pain of contradiction.
Coproofs as countermodels
How could absence of evidence be interpreted logically? One way is to interpret it as the provision of a countermodel. That is, a coproof of P would be a countermodel to ¬P: a model M such that M ⊨ P.
Currently, the countermodel M prevents a proof of ¬P, and the more countermodels we find, the more we might hypothesize that P holds in all models and therefore that ¬P is unprovable. On the other hand, we may find new evidence in the future that expands our knowledge database and excludes the would-be countermodel M. This would open the way to a proof of ¬P in our new context.
Coproofs as sets of countermodels
We can go further by defining some natural (probability) distribution on the space of models.
This is generically a tricky business, but given a finite signature of propositional letters over a classical base logic, the space of models is given by "ultrafilters/models/truth assignments", which assign 0 or 1 to the basic propositional letters and are extended to compound propositions in the usual manner (e.g. v(¬p) = 1 − v(p), v(p ∧ q) = min(v(p), v(q)), etc.).
A subset S of models can now be interpreted as a coproof of P if M ⊨ P for all M ∈ S.
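As a minimal sketch of the construction above (all names here are mine, not from the thread), we can enumerate every truth assignment over a small signature and check whether a set of models is a coproof of a proposition, i.e. whether every model in the set satisfies it:

```python
from itertools import product

letters = ["p", "q"]

def all_models(letters):
    """Each model assigns 0 or 1 to every propositional letter."""
    return [dict(zip(letters, bits)) for bits in product([0, 1], repeat=len(letters))]

# Propositions as functions from a model to 0/1, extended in the usual manner.
p = lambda m: m["p"]
def neg(a):     return lambda m: 1 - a(m)                 # v(¬a) = 1 − v(a)
def conj(a, b): return lambda m: min(a(m), b(m))          # v(a ∧ b) = min(v(a), v(b))

models = all_models(letters)

def is_coproof(subset, prop):
    """A set S of models is a coproof of prop if M ⊨ prop for all M in S."""
    return all(prop(m) == 1 for m in subset)

satisfying_p = [m for m in models if p(m) == 1]
print(is_coproof(satisfying_p, p))  # True: every model in the set satisfies p
print(is_coproof(models, p))        # False: some model falsifies p
```

Here each countermodel in the set blocks a proof of ¬p; shrinking the set (as new knowledge excludes models) is what would reopen the possibility of proving ¬p.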
Probability distributions on propositions and distributions on models
We might want to generalize subsets of models to (generalized) distributions over models. Any distribution on the set of models then induces a distribution on the set of propositions.
In the simple case above we could also invoke the principle of indifference to define a natural uniform distribution on the set of models. This would assign a proposition P the ratio |{M : M ⊨ P}| / |Mod|, i.e. the fraction of models satisfying it.
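The uniform-distribution construction can be sketched in a few lines (again my own illustrative names): under the principle of indifference, the probability of a proposition is just the count of satisfying models over the count of all models.

```python
from itertools import product
from fractions import Fraction

letters = ["p", "q", "r"]
# All 2^3 = 8 truth assignments over the signature.
models = [dict(zip(letters, bits)) for bits in product([0, 1], repeat=len(letters))]

def prob(prop):
    """P(prop) = |{M : M satisfies prop}| / |Mod| under the uniform distribution."""
    return Fraction(sum(prop(m) for m in models), len(models))

p = lambda m: m["p"]
p_or_q = lambda m: max(m["p"], m["q"])

print(prob(p))       # 1/2
print(prob(p_or_q))  # 3/4
```

Using exact fractions makes it easy to see that the induced distribution agrees with the classical counting ratio.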
Remark: note that similar ideas appear in the Van Horn–Cox theorem.
I don't like using "proof" to talk about evidence, especially when it's often weak evidence. Why not go with "absence of counterevidence is counterevidence of absence"? OK, that's a confusing pile of negatives. "Absence of negative evidence makes positive evidence more plausible"? Not great either.
Maybe just forego the whole thing: "absence doesn't prove anything".
Two opposing attributes of data that cause some change in belief:
Positive Evidence / Lack of evidence
Probability-increasing / Probability-decreasing
Then cross the two to make:
Positive Evidence + Probability-increase = Proof
Lack of Evidence + Probability-increase = Co-proof
Positive Evidence + Probability-decrease = Disproof/Counterproof (?)
Lack of Evidence + Probability-decrease = Co-disproof? (Probably not)
?
Bayesians can do Popperian falsification -- to falsify a hypothesis H, all you have to do is find a rival hypothesis H′ that makes the observed evidence E far more likely, i.e. P(E | H′) ≫ P(E | H), as discussed in the blog post Closed Worlds and Bayesian Inference.
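A toy calculation (my numbers, not from the linked post) shows how this works: when a rival hypothesis makes the observed evidence far more likely, the posterior on the original hypothesis collapses toward zero.

```python
def posterior(prior_h, lik_e_given_h, lik_e_given_h2):
    """P(H | E) with a single rival H2 carrying the remaining prior mass."""
    prior_h2 = 1 - prior_h
    num = prior_h * lik_e_given_h
    return num / (num + prior_h2 * lik_e_given_h2)

# H assigns the observation probability 0.001; the rival H2 assigns it 0.9.
print(posterior(0.5, 0.001, 0.9))  # ≈ 0.001 — H is effectively falsified
```

The likelihood ratio does all the work here, which is why falsification drops out of ordinary Bayesian updating as a special case.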
Ah, yeah, that's a good way to do it. I like the post you linked.
I wasn't meaning to imply that Bayesians can't do falsification, only that Bayesians see it as a special case of a more general thing, and so may not be as excited about the shorthand "co-proof".
While I agree that "co-proofs" as you've described them are interesting, I'm not sure how useful they are as a concept. While a lack of counter-evidence certainly helps when we want to argue in favor of a hypothesis, if there isn't enough evidence to bring that hypothesis to our attention in the first place, then we're privileging that hypothesis.
To speak to the example you give, while it is true that for any given person, not having an alibi is a co-proof of their involvement in a crime, there are likely vast numbers of people who don't have alibis, so absent additional proof that lets us pick from among those without alibis, the co-proof doesn't really get us anywhere by itself.
I agree that I'd rather not reason in the falsification way at all if I'm putting the effort in, as it can lead to privileging the hypothesis and to subtle forms of confirmation bias. Yet, I do find myself reasoning in the falsification way frequently, as a convenient approximation. So, there's a question: is it better to introduce mental shorthands which streamline falsification-style thinking, on the grounds that they seem frequently useful? Or does that risk falling into the failure modes associated with falsification-style reasoning more often? I'm not sure.
At the recommendation of Jacobian, I've been reading Too Like the Lightning. It is a thoughtful book which has several points of interest to rationalists (imho), but there is one concept which I think is nice enough to pluck out and discuss in itself, rather than being satisfied to suggest that people read the book. I also want to suggest a different name than the one from the book.
If you think discussion of a logical concept which is mentioned in a book is a spoiler, maybe stop here.
At one point, there is a discussion in which one character is explaining how much some other characters must already know. The term "anti-proof" is used to refer to failure to falsify a hypothesis. Having a short term for this concept seems like a really good idea. We have the phrase "absence of evidence is evidence of absence", but we don't have a word for the positive case, where absence of counter-evidence speaks in favor of a hypothesis.
Unfortunately, "anti-proof" sounds more like the former than the latter, even though it is being used for the latter in the book. A more appropriate term would be "co-proof", since it is the absence of a proof of the negation.
For example, an alibi would refute someone's involvement in a crime. The absence of an alibi, then, is a co-proof of their involvement: it does not prove involvement by any means, but it must constitute some supporting evidence, by conservation of expected evidence.
By "proof of H" I mean an observation which would make the probability of H very close to 1. (How close is "very close" depends on standards of proof in a context, with mathematics demanding the highest standards.) By "refutation" I mean a proof of the negation. So, a co-proof is an observation whose negation would have taken the probability of H to very near zero:
E is a co-proof of H  :=  P(H | ¬E) ≈ 0
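The alibi example can be run numerically (the numbers below are invented for illustration): if having an alibi would near-refute involvement, then by conservation of expected evidence, observing the alibi's absence must raise the probability of involvement at least a little.

```python
def posterior_given_no_alibi(prior, p_alibi_given_h, p_alibi_given_not_h):
    """P(H | no alibi) via Bayes, where H = involvement in the crime."""
    p_no_given_h = 1 - p_alibi_given_h          # P(no alibi | H)
    p_no_given_not_h = 1 - p_alibi_given_not_h  # P(no alibi | ¬H)
    num = prior * p_no_given_h
    return num / (num + (1 - prior) * p_no_given_not_h)

# Guilty people almost never have genuine alibis; innocent people often do.
prior = 0.01
post = posterior_given_no_alibi(prior, p_alibi_given_h=0.01, p_alibi_given_not_h=0.6)
print(post > prior)  # True: lacking an alibi is weak evidence of involvement
```

Note how weak the update is: the posterior rises only modestly above the prior, matching the point that a co-proof is supporting evidence but nowhere near a proof.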
Why are co-proofs of interest? Popperian epistemology is the claim that scientific hypotheses can be supported only by co-proofs; we attempt to refute things, and if something has survived enough refutation attempts, it is considered to be a strong hypothesis. Bayesians are not Popperians, but Popper was still mostly right about this; so, having a short name for it seems useful.