I've been trying to make sense of what a 50% prediction means. Does it convey different meanings depending on the question asked, and is it really as uninformative as it's made out to be? As part of that discussion, I believe there is a possible solution to two big problems prediction markets currently face:
- How to extract useful signal from 50% predictions
- How to incentivize those with low certainty to participate
Predictions tend to involve two steps:
- What do I believe is going to happen?
- How certain am I?
There are many cases where even if one has a sense of what is going to happen, there is little to no certainty that it will. The options here are skipping the question (not placing a bet), or putting in 50% if you're forced to choose.
I think it's generally understood that using 50% to represent "I don't know" is problematic. Picking 50% also holds an entirely different meaning for the question "Will this coin toss result in heads?" vs. "Will China invade Taiwan this year?".
Similarly, as an observer of the market, does seeing the market at 50% really represent 1:1 odds? Or is it an indication that the market is extremely uncertain? There seems to be a useful signal missing here.
Finally, given that some of the most interesting questions in prediction markets are also about the most uncertain events, many people avoid participating or betting altogether. This is the opposite of what we want if these markets are to be useful.
A Possible Solution
There are two types of uncertainty, or "I don't know":
- I don't know enough about this, so I won't bet
- I've studied and researched this subject, and I still don't know, so I won't bet
Incentivizing the first type to place bets would add noisy signal. But capturing the second type of "I don't know" seems to be an important signal about the studied uncertainty of the event.
There have been some ideas about how to encourage more participation, like providing interest-free loans as an incentive. But I think that suffers from a few problems. First, it doesn't distinguish between noisy and studied signals. Second, it incentivizes picking the option with more upside. And third, it encourages exactly the most horrible and awful kind of punditry: picking an opinion with certainty even though you have no idea.
So, is there a way to capture studied uncertainty as useful signal, AND incentivize the participation of those who are highly uncertain?
I think this could be accomplished with an Uncertainty Index coupled with every question, on which one can place bets. The index moves up and down based on what percentage of people/money interacting with the question place a bet on the price/outcome vs the Uncertainty Index itself.
If more people are placing bets on uncertainty than on making a prediction, my bet on the index pays off.
In some ways, it would represent a kind of volatility index, but not exactly. As someone only cursorily familiar with derivatives, this seems like it would only partially be tied to the existing price and the direction the price takes. And to whatever extent it is tied, it would present a way to hedge bets on the original question.
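To make the mechanics concrete, here is a minimal sketch of one way such an index could be priced. Everything here is an illustrative assumption, not an existing market design: the function name and the rule (index level = share of money staked on uncertainty rather than on an outcome) are hypothetical.

```python
# Hypothetical sketch: an Uncertainty Index priced as the fraction of
# money staked on "uncertain" versus on an outcome (YES/NO).
# The pricing rule and names are illustrative assumptions only.

def uncertainty_index(outcome_stake: float, uncertainty_stake: float) -> float:
    """Index level in [0, 1]: the share of all money betting on uncertainty."""
    total = outcome_stake + uncertainty_stake
    if total == 0:
        return 0.5  # no bets yet; treat as maximally uninformative
    return uncertainty_stake / total

# If $800 is bet on outcomes and $200 on uncertainty itself,
# the index sits at 0.2.
print(uncertainty_index(800, 200))  # 0.2
```

A real implementation would presumably need a market-maker mechanism so the index is itself tradeable, but even this naive ratio separates "the crowd thinks 50/50" from "the crowd thinks no prediction is possible."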
There is at least some evidence that this would work, based on a few of the meta questions on Manifold. Here is an example related to Russia-Ukraine:
- Will Russia invade Ukraine before the end of February?
- Will there be an edge case where it is hard to determine if Russia has invaded Ukraine before March?
To implement this, Manifold could make their options 'YES', 'NO', and 'UNCERTAIN', which would make it quite intuitive to place the different types of bets.
The ability to place bets on an Uncertainty Index, or something similar that captures the core concept behind it, has the potential to encourage a lot more participation while also capturing an important signal of predictions: doubt that a meaningful prediction is even possible.
What is being lost is related to your intuition in the earlier comment:
Without knowing how many people of the "I've studied this subject, and still don't think a reasonable prediction is possible" variety didn't participate in the market, it's very hard to place any trust in it being the "right" price.
This is similar to the "pundit" problem, where you only hear from the most opinionated people. If 60 nutritionists are on TV and writing papers saying eating fats is bad, you may draw the "wrong" conclusion from that, because unknown to you, 40 nutritionists believe "we just don't know yet". And those 40 are given no incentive to say so.
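The selection effect in the nutritionist example can be shown in a few lines. The numbers (60 opinionated, 40 uncertain) are just the invented ones from above:

```python
# Toy illustration of the pundit selection effect: 100 experts,
# but only the opinionated ones speak publicly. Numbers are the
# invented 60/40 split from the example above.

experts = ["fats are bad"] * 60 + ["we just don't know yet"] * 40

# An observer only sees experts with a firm opinion.
visible = [e for e in experts if e != "we just don't know yet"]

# What the observer concludes: everyone who speaks agrees.
apparent_consensus = len(visible) / len(visible)   # 1.0 — looks unanimous

# What is actually true across all experts:
true_share = experts.count("fats are bad") / len(experts)  # 0.6

print(apparent_consensus, true_share)
```

The gap between the apparent consensus (1.0) and the true share (0.6) is exactly the signal an Uncertainty Index would surface.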
Take the Russia-Kiev question on Metaculus, which had a large number of participants. It hovered at 8% for a long time. If prediction markets are to be useful beyond pure speculation, that market didn't tell me how many knowledgeable people thought a meaningful prediction was simply not possible.
The ontological skepticism signal is missing - people saying there is no right or wrong that "exists" - we just don't know. So be skeptical of what this market says.
As for KBC - most markets allow you to change or sell your bet before the event happens, especially for longer-term events. So my guess is that this is already happening. In fact, the uncertainty index would separate out much of the "What do other people think?" element into its own question.
For locked in markets like ACX where the suggestion is to leave your prediction blank if you don't know, imagine every question being paired with "What percentage of people will leave this prediction blank?"