My intuition here is “actually fairly good.” Firms typically spend a decent amount on hiring processes—they run screening tests, conduct interviews, look at CVs, and ask for references. It’s fair to say that companies have a reasonable amount of data collected when they make hiring decisions, and generally, the people involved are incentivized to hire well.
Every part of this is false. Companies don't collect a fair amount of data during the hiring process, and the data they do collect is often irrelevant or biased. How much do you really learn about a candidate by having them demonstrate whether they've managed to memorize the tricks to solving programming puzzles on a whiteboard?
The people involved are not incentivized to hire well, either. They're often engineers or managers dragged away from the tasks they are actually incentivized to perform, in order to check a box showing they participated in the minimum number of interviews necessary to avoid trouble with their managers. If they take hiring seriously, it's out of altruistic motivation, not because it benefits their own careers.
Furthermore, no company actually goes back and determines whether its hires worked out. If a new hire doesn't work out, and is let go after a year's time, does anyone actually go back through their hiring packet and determine if there were any red flags that were missed? No, of course not. And yet, I would argue that that is the minimum necessary to ensure improvement in hiring practices.
The point of a prediction market in hiring is to enforce that last practice. The existence of fixed term contracts with definite criteria and payouts for those criteria forces people to go back and look at their interview feedback and ask themselves, "Was I actually correct in my decision that this person would or would not be a good fit at this company?"
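The forced look-back can be made concrete with a toy sketch. This is purely illustrative - the class, the criterion string, and the payout rule are all hypothetical, not anything Hanson actually specified:

```python
# Toy sketch of a fixed-term hiring contract in a prediction market.
# All names, criteria, and payout rules here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class HiringContract:
    """Binary contract: YES holders are paid if the hire meets the
    stated criterion at the end of the term, NO holders otherwise."""
    criterion: str                                   # what must be checked at term end
    yes_shares: dict = field(default_factory=dict)   # trader -> shares
    no_shares: dict = field(default_factory=dict)

    def buy(self, trader: str, side: str, shares: int) -> None:
        book = self.yes_shares if side == "YES" else self.no_shares
        book[trader] = book.get(trader, 0) + shares

    def resolve(self, criterion_met: bool) -> dict:
        """Resolution forces someone to actually evaluate the criterion -
        that mandatory retrospective is the point of the mechanism."""
        winners = self.yes_shares if criterion_met else self.no_shares
        return dict(winners)

market = HiringContract("still employed and rated >= 3/5 after 12 months")
market.buy("interviewer_a", "YES", 10)
market.buy("interviewer_b", "NO", 10)
payouts = market.resolve(criterion_met=False)  # the hire did not work out
```

When the contract resolves, interviewer_b's NO position pays out and interviewer_a's misjudgment is on the record - exactly the "was I actually correct?" accounting that hiring processes currently skip.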
On top of all that, this whole process is totally unaccountable and for some market failure reason, every company repeats it.
Unaccountable: the reason a candidate wasn't hired isn't disclosed, which means in many cases the real reason may be factually false, illegal discrimination, a job that was never real, or immigration fraud. Or just "they failed to get lucky on an arbitrary test that measures nothing".
Repeats it: each company wastes at least a full day of each candidate's time, and for each candidate it considers, it wastes even more of its own time, plus plane tickets and other expenses. And then, just to be noticed by a company at all, it's apparently considered the candidate's responsibility to spam every company in the country with a copy of their resume.
Why isn't there a standardized test given by a third party for job-relevant skills? Why isn't there a central database where all candidates put their resumes, so that spamming every company makes no difference to a particular candidate's chance of being hired? (To an extent, having a LinkedIn profile and just waiting for a recruiter to ping you is this.)
I know it's the result of some series of competing incentives that produce this market failure (a.k.a. Moloch), but still.
Why isn't there a standardized test given by a third party for job-relevant skills?
That's what Triplebyte was trying to do for programming jobs. It didn't seem to work out very well for them. Last I heard, they'd been acquired by Karat after running out of funding.
I'm a big fan of Robin, but I don't give much weight to his diagnosis of problematic group decision mechanisms. Prediction markets are truly an impressive idea, and there may be cases where they really do aggregate and weigh opinions more accurately than existing mechanisms (in the cases where that is the primary goal).
For employment decisions, it's not clear that there is usable (legally and socially tolerated) information which a market can provide. It's even worse for employment-continuation decisions (a market of coworkers betting on who they'll compete/cooperate with next cycle) than for initial-employment decisions, and both are game-able in ways that aren't going to be very good PR for the company.
I don't give much weight to his diagnosis of problematic group decision mechanisms
I have quite a lot of time for it personally.
The world is dominated by large organizations that have a lot of dysfunction. Anybody over the age of 40 will just agree with me on this; I think it's pretty hard to find anybody who's been around the world who would disagree. Our world is full of big organizations that just make a lot of bad decisions because they find it hard to aggregate information from all the different people.
This is roughly Hanson's reasoning, and you can spell out the details a bit more (poor communication between high-level decision-makers and shop-floor workers, incentives at every level discouraging truth-telling, etc.). Fundamentally, though, I find it hard to make the case that this isn't true of /any/ large organization. Maybe the big tech companies can make a case for being an exception, but I doubt it. Office politics and self-interest are powerful forces.
For employment decisions, it's not clear that there is usable (legally and socially tolerated) information which a market can provide
I roughly agree - this is the point I was trying to make. All the information is already there in the interview evaluations. I don't think Robin is expecting new information, though - he's expecting to combine the existing information more effectively. I just don't expect that to make much difference in this case.
big organizations that just make a lot of bad decisions because they find it hard to aggregate information from all the different people.
I don't disagree with the first part, but the "because" clause is somewhere between over-simple and just plain wrong. The dysfunction in large organizations (corporations and governments as primary examples) is analogous to dysfunction in individual humans, which is ALSO rampant, and it seems to stem more from misalignment among components than from a single powerful executive's information-gathering and decision-making.
In a recent edition of The Diff ($), Byrne Hobart pointed out a talk given by Robin Hanson at Manifest. The talk is well worth listening to. For example, Robin says:
The general thrust of the talk is...
... but that it's work to apply prediction markets.
Paraphrasing an entire section: You need to prove a technology works to get it accepted. It's insufficient to supply the technology. Robin gives the analogy of the motor - a motor on its own has no value. A motor hooked up to a pump in a coal mine… that's another story. In his analogy, the prediction market is the motor, and someone needs to find the "pump in the coal mine" to hook it up to.
Here is where I start to disagree with Robin.
The first question is:
My intuition here is "actually fairly good." Firms typically spend a decent amount on hiring processes - they run screening tests, conduct interviews, look at CVs, and ask for references. It's fair to say that companies have a reasonable amount of data collected when they make hiring decisions, and generally, the people involved are incentivized to hire well.
The second question is:
If I look at the tests I mentioned, this prediction market is unappealing. We'd expect no cross-subsidies, no mis-weighted demand, and no noise traders (other than the fact that most participants won't be very good traders). There's little reason for the information to be dispersed - the company currently asks for and gets the data.
There are further issues - the individual traders are unlikely to make many trades, so the mechanism by which better traders accumulate more capital and thereby make the market more efficient is absent.
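One way to see why repeated trading matters: a quick simulation, with entirely made-up accuracy numbers and a simplistic even-odds betting rule, of how capital only concentrates with accurate traders over many rounds:

```python
# Illustrative simulation: capital flows toward accurate traders only
# through repeated trades. Accuracies and stake sizes are made up.
import random

def simulate(num_rounds: int, seed: int = 0) -> dict:
    """Two traders each bet 10% of their capital at even odds per round;
    each trader's bet wins with their own (hypothetical) accuracy p."""
    rng = random.Random(seed)
    traders = {"accurate": {"p": 0.80, "capital": 100.0},
               "inaccurate": {"p": 0.55, "capital": 100.0}}
    for _ in range(num_rounds):
        for t in traders.values():
            stake = 0.1 * t["capital"]
            t["capital"] += stake if rng.random() < t["p"] else -stake
    return {name: round(t["capital"], 2) for name, t in traders.items()}

one_shot = simulate(num_rounds=1)    # after one trade, almost no separation
repeated = simulate(num_rounds=200)  # accurate trader's capital dominates
```

After one round the two traders' capital differs by at most one stake, so the market has learned nothing about who to trust; only over many rounds does the accurate trader's capital (and thus price influence) come to dominate. A hiring market where each interviewer trades on a handful of candidates per year never gets that far.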