In a recent edition of The Diff ($), Byrne Hobart pointed out a talk given by Robin Hanson at Manifest. The talk is well worth listening to. For example, Robin says:
> I've always thought [Prediction Markets] was the best idea I've had in my life [...] My biggest contribution
The general thrust of the talk is...
> Our world is full of big organizations that just make a lot of bad decisions because they find it hard to aggregate information [...], so prediction markets are a proven method for doing exactly that
... but that applying prediction markets takes real work.
Paraphrasing an entire section: You need to prove a technology works to get it accepted. It's insufficient to supply the technology. Robin gives the analogy of the motor - a motor on its own has no value. A motor hooked up to a pump in a coal mine… that's another story. In his analogy, the prediction market is the motor, and someone needs to find the "pump in the coal mine" to hook it up to.
> What are the [...] high initial value things? The thing that most often comes to my mind is new hires.
Here is where I start to disagree with Robin.
The first question is:
- Are decisions made during hiring poor?
My intuition here is "actually fairly good." Firms typically spend a decent amount on their hiring processes - they run screening tests, conduct interviews, look at CVs, and ask for references. It's fair to say that companies have collected a reasonable amount of data by the time they make a hiring decision, and generally, the people involved are incentivized to hire well.
The second question is:
- Would a prediction market work well?
Judged against the tests I mentioned, a prediction market here looks unappealing. We'd expect no cross-subsidies, no mis-weighted demand, and no noise traders (other than that most participants won't be very good traders). There's also little reason for the information to be dispersed - the company already asks for the data and gets it.
There are further issues - individual traders are unlikely to make many trades, so the mechanism by which better traders accumulate capital and make the market more efficient is absent.
But every part of that reasoning is false. Companies don't collect a fair amount of data during the hiring process, and the data they do collect is often irrelevant or biased. How much do you really learn about a candidate by watching them demonstrate whether they've memorized the tricks for solving programming puzzles on a whiteboard?
The people involved are not incentivized to hire well, either. They're often engineers or managers dragged away from the tasks they are actually incentivized to perform, in order to check a box that they participated in the minimum number of interviews necessary to stay out of trouble with their managers. If they take hiring seriously, it's out of altruism, not because it benefits their own career.
Furthermore, no company actually goes back and determines whether its hires worked out. If a new hire doesn't work out and is let go after a year, does anyone go back through the hiring packet to check whether red flags were missed? No, of course not. And yet I would argue that this is the minimum necessary to improve hiring practices.
The point of a prediction market in hiring is to enforce that last practice. Fixed-term contracts with definite criteria, and payouts tied to those criteria, force people to go back, look at their interview feedback, and ask themselves, "Was I actually correct in my decision that this person would or would not be a good fit at this company?"
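To make that mechanism concrete, here's a minimal sketch of what such a contract could look like, using Hanson's own Logarithmic Market Scoring Rule (LMSR) as the market maker. The contract wording, liquidity parameter, and class names are my illustrative assumptions, not anything Robin or Triplebyte proposed.

```python
import math

class LMSRMarket:
    """Binary market under Hanson's Logarithmic Market Scoring Rule.
    Each YES share pays $1 if the contract's criterion is met
    (e.g. "this hire is still employed and rated satisfactory
    after 12 months" - a hypothetical criterion for illustration)."""

    def __init__(self, b=100.0):
        self.b = b            # liquidity parameter: larger b = deeper market
        self.q = [0.0, 0.0]   # outstanding shares: [YES, NO]

    def _cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Instantaneous price of an outcome, interpretable as the
        market's current probability estimate."""
        denom = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        """Buy `shares` of `outcome`; returns the dollar cost, which is
        the difference in the cost function before and after the trade."""
        new_q = list(self.q)
        new_q[outcome] += shares
        fee = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return fee

market = LMSRMarket(b=100.0)
print(market.price(0))       # empty market starts at probability 0.5
fee = market.buy(0, 50)      # an interviewer backs the hire with 50 YES shares
print(market.price(0))       # the YES price rises above 0.5
```

The key property for this argument is settlement: when the 12-month criterion resolves, shareholders are paid, so everyone who traded has a direct financial reason to revisit their original judgment - exactly the look-back step that firms skip today.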
That's what Triplebyte was trying to do for programming jobs. It didn't seem to work out very well for them. Last I heard, they'd been acquired by Karat after running out of funding.