I don't think this is a useful model.
Expensive Shareable proof of some desirable quality
Your examples don't seem to actually follow this, as false certificates seem to be available. If you instead say "evidence of", this makes more sense, but is also way less surprising. Signaling is a competitive/adversarial game.
This is true for markets of goods and services, but false for markets of information and trust.
Huh? It's not very true for goods and services, and it's only a little more difficult for information or trust. It applies to all transactions (because all transactions are fundamentally about trust). There are many kinds of transactions, of course, for which strong signals, in the form of binding contracts or clear expectations, haven't evolved (or have been actively prevented from being available).
negative externality of lowering the perceived value of all similar uncertified goods.
And this is where you lose me. Failure to add value is not an externality. Good competition (offering a more attractive transaction) is not a market failure.
If you instead say "evidence of", this makes more sense
Accepted and changed; I'm only claiming some information/entanglement, not absolute proof.
It applies to all transactions (because all transactions are fundamentally about trust)
Would it be clearer to say "markets with perfect information"? The problem I'm trying to describe can only occur with incomplete information / imperfect trust, but doesn't require so little information and trust that transactions become impossible in general. There's a wide middle ground of imperfect trust where all of real life happens, and we still do business anyway.
And this is where you lose me. Failure to add value is not an externality. Good competition (offering a more attractive transaction) is not a market failure.
It sure looks like an externality when generally terrible things can happen as a result. I agree that being able to offer a better product is good, and being able to incentivise that is good if it can lead to more and better products, but it does also have this side problem that can be harmful enough to be worth considering.
Signaling is a competitive/adversarial game.
Yeah, I know this idea isn't completely original / exists inside broader frameworks already, but I wanted to highlight it more specifically and I haven't found anything identical to this before. Thanks for the feedback.
Certificate: Expensive Shareable evidence of some desirable quality, typically of a marketable good. Increases the perceived value of that good to the market and so benefits the owner, but has a negative externality of lowering the perceived value of all similar uncertified goods.
Certificate Hell: The place Civilisation goes if it ignores this negative externality and spends all its money expensively proving how valuable all its goods are.
There's a common assumption in economics that the market can find some stable solution, containing a price for every single good on the market, that balances supply and demand and leaves no opportunities for profit that are not already being fully exploited.
This is true for markets of goods and services, but false for markets of information and trust.
Unpriceable Information
Suppose there are two types of widget: 50% are Good widgets worth $20, and 50% are Bad widgets worth $10. At first there's no way to distinguish them, so the market prices them all at just below $15 and the buyer eats the risk.
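As a quick sanity check, the pooled price is just the expected value of a random draw. A minimal sketch of the arithmetic, using only the numbers above:

```python
# Pooled market: buyers can't tell Good from Bad, so they pay at most
# the expected value of a randomly drawn widget.
p_good, p_bad = 0.5, 0.5      # the 50/50 split between the two types
v_good, v_bad = 20.0, 10.0    # value of each type to the buyer

pooled_price = p_good * v_good + p_bad * v_bad
print(pooled_price)           # 15.0 -- bearing the risk pushes actual bids just below this
```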
Now enters the Certificate Salesman, who sells certificates that sellers can show to customers. They cost $1 for Good widgets and $4 for Bad, so willingness to attach a certificate gives a buyer information about quality. If that cost model sounds crazy, imagine it represents the cost of proving something in a shareable way.
Because the market now has two distinct observable categories of widget it can buy, Certified and Uncertified, it can have two prices. Sellers of Good widgets buy certificates and point out that Certified widgets are all Good, so the Certified price rises towards $20. At the same time the Uncertified price falls until the difference reaches $4, at which point sellers of Bad widgets start buying certificates as well. This pushes the Certified price back down, and continues until all widgets are Certified and the price returns to $15. At this point, if a buyer offers to purchase randomly selected widgets for $14.50 without demanding a certificate, the seller will prefer this offer whether Good or Bad. This discontinuity teleports the Uncertified price back from $10 to around $15, and the cycle restarts.
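To make the cycle concrete, here's a stage-by-stage sketch using the numbers above. The "pools" just track who holds a certificate at each stage; this is an illustration of the article's narrative, not a calibrated model:

```python
def ev(pool):
    """Expected value of a pool of widget values, or None if the pool is empty."""
    return sum(pool) / len(pool) if pool else None

GOOD, BAD = 20.0, 10.0

# Stage 1: only Good sellers certify (the $1 cost is clearly worth it),
# so certification perfectly separates the two pools.
print(ev([GOOD]), ev([BAD]))      # 20.0 10.0 -- the $10 gap now exceeds the $4 Bad cert cost

# Stage 2: Bad sellers certify too, and the certified pool is mixed again.
print(ev([GOOD, BAD]), ev([]))    # 15.0 None -- the uncertified pool is empty

# Stage 3: a buyer offers $14.50 for uncertified widgets. Both types
# prefer it (they keep the cert fee), the uncertified pool refills at
# the original 50/50 mix, and the cycle restarts.
print(ev([GOOD, BAD]))            # 15.0 -- back where we started, minus the cert fees paid
```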
Markets can't actually fail to produce prices for things. Instead they waste resources on transaction costs, lossy competition, and random chaos, until the post-chaos world contains only priceable goods again. Whenever the natural prices of the best certificate options land in the appropriate region of price-space (cheap and discerning enough to be useful, but not sufficiently costly-to-fake to defend the information-differences produced), you get chaotic behaviour, and if nobody coordinates to stop it then you descend into outcomes terrible for everyone involved.
The fashion industry seems the clearest example of what happens when people are trying to signal things but none of the available methods have enough cost-difference to produce an equilibrium: the whole system chaotically oscillates, and you get mountains of worthless expired signalling tools that were once very high status. Romantic love seems the clearest example of the other case, where people try to expensively prove that they have a quality, but everyone still realises it could be cheaply faked, so none of the supposed certificates are held in high evidentiary regard. At least that market seems stable, in that it doesn't produce new ways to prove love and then expire them every week.
Certificate Hell
Suppose the prices aren't naturally occurring, but are instead selected by someone with a monopoly on a source of private information. They realise that selling certificates for Bad widgets undermines the value of the product, so they won't do that. Instead they offer them exclusively to sellers of Good widgets, for an initially low price that later moves up to $5. The Certified price becomes $20 (and Good widget sellers take home $15), while the Uncertified price becomes $10. The buyers and Good sellers are no better off, but the Bad sellers lose money, all of which goes to the certifier. If the sellers can't organise to produce provably untested widgets with the prior 50/50 split and an expected value of $15, the certifier monopolist can further raise the price to $9.99, capturing the entire value of the uncertainty for itself and leaving every seller with about the worst-case value of $10, without the buyers being any better off either.
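The monopolist's arithmetic, spelled out with the same numbers (a sketch of the paragraph above, nothing more):

```python
# The monopolist certifier's take, using the article's numbers.
GOOD, BAD = 20.0, 10.0

# Certificates sold exclusively to Good widget sellers at $5:
fee = 5.0
print(GOOD - fee, BAD)    # 15.0 10.0 -- Good sellers net the old pooled price,
                          # Bad sellers lose the pooling subsidy to the certifier

# If sellers can't credibly re-pool, the fee can rise towards the full
# $10 quality spread:
fee = 9.99
print(GOOD - fee, BAD)    # ~10.01 10.0 -- every seller leaves with roughly the
                          # worst-case value, and the certifier keeps the rest
```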
Example 1: Trial Clothing
A man is accused of armed robbery. It is known that poor people commit armed robberies more often than the rich, but unfortunately the defense is not allowed to introduce their client's bank balance as evidence. The court does not, however, stop the defendant from wearing an expensive suit, which the jury of ideal Bayesians can't resist updating on. Rich people typically wear nice clothes by default, so a defendant in jeans and a t-shirt is more likely to be guilty. Since the defendant knows this, they're incentivised to show up in the best suit they can afford. Since the jury knows that, they'll consider "the best suit a poor person can afford" proof of poverty, and "the best suit a rich person can afford" proof of wealth. Any mechanism that makes suits easier for defendants to obtain (rentals, public assistance, etc.) will just move the standards up a bit when the jury adjusts their expectations accordingly.
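A minimal sketch of the jury's equilibrium inference. All the numbers here are hypothetical assumptions for illustration; the article only claims the direction of the correlation:

```python
# A toy version of the jury's update. Guilt rates are invented.
p_poor = 0.5                              # prior that the defendant is poor
p_guilty = {"poor": 0.3, "rich": 0.1}     # assumed guilt rates by wealth

# In equilibrium everyone wears the best suit they can afford, so the
# suit perfectly reveals wealth and the jury's posterior splits:
print(p_guilty["poor"])   # 0.3 -- posterior after "best suit a poor person can afford"
print(p_guilty["rich"])   # 0.1 -- posterior after "best suit a rich person can afford"

# If suits carried no information at all, everyone would face the blended rate:
blended = p_poor * p_guilty["poor"] + (1 - p_poor) * p_guilty["rich"]
print(blended)            # ~0.2 -- so the signalling race leaves poor defendants worse off
```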
If it's otherwise impossible to make a visibly expensive suit, fashion labels will put their logo on otherwise cheap clothing, publicly commit never to sell anything below a certain price, and aggressively sue anyone making copies. Even with nobody explicitly scheming in any way, we have an outcome where everyone must spend the absolute most they can justify to prove a claim, or else be treated as though the claim is false based on their unwillingness to spend more.
Example 2: Universities
In the distant before-time, a degree proved you were in the top 5% of educational attainment, while the absence of one didn't particularly prove the opposite. As educated people realised it was worth it, more signed up, until the absence of a degree became evidence of the absence of education. This widens the gap in what others will expect of you, which more strongly justifies getting a degree. As students respond to these incentives, the degree providers notice they can make more profit by raising prices, lowering standards, and increasing admissions. In the future, given current trends, 95% of people will be getting PhDs, a degree will be interpreted as "proof you're not mentally disabled", and it will be priced at slightly less than the value of not being assumed to be mentally disabled by everyone you meet.
Worst-Case Behaviour
In Certificate Hell, everyone except the single absolute-worst person in the world has a signed certificate saying they're not literally the worst, bought from the monopolist at slightly below the value, to them, of everyone they meet not assuming they're the worst person in the world. This extracts almost all the social value contained in social trustworthiness, while adding almost zero social value in terms of aiding informed choices. Because the system requires trust in the certifier, and nobody can outcompete the one everybody uses, anyone presenting a non-monopoly certificate will still be assumed to be the worst person, just employing a clever scheme to pretend otherwise, so the monopoly never breaks.
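The scale of that extraction is easy to put on the back of an envelope. Both numbers below are hypothetical, chosen purely for illustration:

```python
# Back-of-envelope for the worst case (all numbers invented).
N = 1_000_000        # people in the society
v = 100.0            # value to each person of not being presumed the worst

fee = v - 0.01       # the monopolist prices just below that value
extracted = (N - 1) * fee
print(extracted)     # ~99.99 million -- nearly all the social value of baseline
                     # trust flows to the certifier, and no decisions improve
```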
Conclusions
This has seemed grim so far, so I should acknowledge that more information is generally an improvement. We're better off with qualified pilots over unqualified ones, since they crash planes less often, and we do need a mechanism to distinguish skill. We're better off with society possessing the information needed to form better plans, or to distinguish widget quality in situations where it's decision-relevant. However, information that is valuable to participants in a market is not necessarily valuable to the community as a group. If it can only be used to more optimally calculate the prices of trades that are still net-positive, will still be executed, and will still have the same average value either way, then the community does not benefit from buying the price information on net, even though it is still worth paying for to an individual trader who can gain an advantage over the competition.
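A worked version of that claim, with hypothetical production and information costs added to the widget numbers from earlier (assumptions, not anything the argument depends on):

```python
# A trade that happens either way, where information only moves the price.
GOOD, BAD = 20.0, 10.0
SELLER_COST = 5.0     # assumed production cost of either widget type
INFO_COST = 2.0       # assumed cost of the certificate

# Total surplus of a trade is buyer value minus seller cost; the price
# only decides how that surplus is split between the two parties.
surplus_pooled = (0.5 * GOOD + 0.5 * BAD) - SELLER_COST                    # 10.0
surplus_informed = 0.5 * (GOOD - SELLER_COST) + 0.5 * (BAD - SELLER_COST)  # 10.0

# An individual Good seller gains $5 from being certified, but the
# community's total surplus is unchanged, and it paid for the information:
print(surplus_pooled, surplus_informed - INFO_COST)   # 10.0 8.0
```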
There is often an ambiguity where the same information is partially wanted for decision reasons and partially wanted only for price reasons, with its cost set by external factors, and it's not clear whether that cost is worth it for the social benefit alone or only when including the price benefit to individual traders. I'm not exactly sure how to tell when a social policy of banning the certificate would be a net improvement, but it is clear that the price individuals are willing to pay for evidence that increases their perceived value massively overstates the value to society of that same information.
I don't think humans enter this failure state that easily. At some point, long before the full value represented in the uncertainty has been extracted, people start individually refusing to participate, or trying whatever cheaper mechanisms of proof they can, or just accepting the uncertainty even though they'd personally be slightly better off demanding expensive proof, as an impromptu coordination effort against the certificate merchants. I mostly want this article to exist as an explanation of why that sort of behaviour is justifiable in terms of theory, and of why this is one of the corners where ideal Bayesianism and sharp economic theory will produce bad outcomes if you don't smell the sulfur in advance and do something else instead. Doing something else will either look like saying "I'm going to ignore this evidence because it's costly and I don't want everyone in my community wasting their money buying it", which sounds like improper epistemology, or "I'm going to refuse to buy this evidence as a timeless trade between the versions of myself where the underlying fact was either true or false", which sounds like a conspiracy to deceive, but is necessary to avoid a worse equilibrium.