I thought about this a bit more, and I'm worried that this is going to be a long-running problem for the reliability of prediction markets for low-probability events.
Most of the problems we currently observe seem like "teething issues" that can be solved with higher liquidity, lower transaction costs, and better design (for example, by denominating bets in the S&P 500 or other stock portfolios rather than dollars). But if "yes" positions in many of those markets should be understood as implicit bets on how the time value of money will vary in the future, it might be hard to construct a design that gets around these issues and lets the markets reflect true probabilities, especially for low-probability events.
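To make that concrete, here's a minimal sketch in Python (the numbers and function name are mine, and it assumes collateral locked in the market earns nothing) of why a long-horizon, low-probability market can't trade near the true probability:

```python
def min_yes_price(p_true, r, years):
    """Lowest YES price at which buying NO still matches the risk-free
    rate, assuming collateral locked in the bet earns nothing.
    Break-even condition: (1 - p_true) / (1 - q) = (1 + r) ** years."""
    return 1 - (1 - p_true) * (1 + r) ** (-years)

# A 1%-probability event resolving in 5 years, with a 5% risk-free rate:
print(min_yes_price(0.01, 0.05, 5))  # ~0.22, far above the true 1%
```

Denominating bets in the S&P 500 shrinks the forgone return, but only for traders whose outside option is that same portfolio, which is exactly the design difficulty above.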
(Unlike with some other issues, I'm optimistic that it's solvable, but this one seems thornier than most.)
I agree that Tracy does this at a level sufficient to count as "actually care about meritocracy" from my perspective. I would also consider Lee Kuan Yew to actually care a lot about meritocracy, for a more mainstream example.
> You could apply it to all endeavours and conclude that "very few people are serious about <anything>."
Yeah, it's a matter of degree, not kind. But I do think many human endeavors pass my bar. I'm not saying people should devote 100% of their efforts to doing the optimal thing; 1-5% of effort, even spent non-optimally, seems like enough for me, and many people clear that bar for other activities.
For example, many people care about making (risk-adjusted) returns on their money, and take significant steps towards doing so. For a less facetious example, I think global poverty EAs who earn-to-give or work to make mobile money more accessible count as "actually caring about poverty."
Similarly, many people say they care about climate change. What do you expect people to do if they care a lot about climate change? Maybe something like:

1. Reduce their personal carbon footprint (fly less, eat less meat, etc.)
2. Donate to climate charities or carbon offsets
3. Vote and advocate for climate-friendly policies
4. Switch careers to work directly on clean energy or climate policy
We basically see all of these in practice, in significant numbers. Sure, most people who say they care about climate change don't do any of the above (and (4) is rare, relatively speaking). But the ratio isn't nearly as dismal as a complete skeptic about human nature would predict.
I thought about this for more than 10 minutes, though on a micro rather than macro level (scoped as "how can more competent people work on X" or "how can you hire talented people"). But yeah more like days rather than years.
Makes sense! I agree that this is a valuable place to look. Though I am thinking about tests/assessments in a broader way than you're framing it here. Eg things that go into this meta-analysis, and improvements/refinements/new ideas, and not just narrow psychometric evaluations.
Without assigning my own normative judgment, isn't this just standard trader behavior/professional ethics? It seems simple enough to justify thus:
Two parties want to make a bet (trade). I create a platform to facilitate such a bet (trade). Both parties are better off by their own lights after such a trade. I helped them do something that makes them each happier, and make a healthy profit doing so. As long as I'm not doing something otherwise underhanded/unethical, what's the problem here?
I don't think it's conceptually any different from e.g. offering memecoins on your crypto exchange, or (an atheist) selling religious texts on Amazon.
Shower thought I had a while ago:
Everybody loves a meritocracy until people realize that they're the ones without merit. I mean you never hear someone say things like:
> I think America should be a meritocracy, ruled by skill rather than personal characteristics or family connections. I mean, I love my son, and he has a great personality. But let's be real: if we lived in a meritocracy, he'd be stuck in an entry-level job.
(I framed the hypothetical this way because I want to exclude senior people very secure in their position who are performatively pushing for meritocracy by saying poor kids are excluded from corporate law or whatever).
In my opinion, if you are serious about meritocracy, you figure out and promote objective tests of competency that (a) have high test-retest reliability, so you know they're measuring something real; (b) have high predictive validity for the outcome you're interested in; and (c) are reasonably accessible, so you know you're drawing from a wide pool of talent.
For the selection of government officials, the classic Chinese imperial service exam has high (a), low (b), medium (c). For selecting good actors, "Whether your parents are good actors" has maximally high (a), medium-high (b), very low (c). "Whether your startup exited successfully" has low (a), medium-high (b), low (c). The SATs have high (a), medium-low (b), very high (c).
If you're trying to make society more meritocratic, your number 1 priority should be the design and validation of tests of skill that push the Pareto frontier for various aspects of society, and your number 2 priority should be trying to push for greater incorporation of such tests.
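As a toy illustration of what "pushing the Pareto frontier" means here, this sketch (the numeric scores are invented stand-ins for the qualitative ratings above, not measured values) computes which tests are non-dominated on the three criteria:

```python
# Toy 0-1 scores for ((a) reliability, (b) predictive validity,
# (c) accessibility); the numbers are illustrative, not measured.
tests = {
    "imperial exam":      (0.9, 0.2, 0.5),
    "parents are actors": (1.0, 0.7, 0.1),
    "startup exited":     (0.2, 0.7, 0.2),
    "SAT":                (0.9, 0.4, 1.0),
}

def dominates(y, x):
    """True if y is at least as good as x on every criterion and
    strictly better on at least one."""
    return all(b >= a for a, b in zip(x, y)) and any(b > a for a, b in zip(x, y))

frontier = [name for name, score in tests.items()
            if not any(dominates(other, score)
                       for other_name, other in tests.items()
                       if other_name != name)]
# Under these toy scores the imperial exam drops out, since the SAT
# matches or beats it on all three criteria; the rest survive.
print(frontier)
```

A new test "pushes the frontier" if nothing on the current list dominates it, and it's a clear win if it dominates an incumbent.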
Given that ~ no one really does this, I conclude that very few people are serious about moving towards a meritocracy.
(X-posted)^2
I agree that being high-integrity and not lying is a good strategy in many real-world dealings. It's also better for your soul. However, I wouldn't frame it as "being a bad liar" so much as "being honest." Being high-integrity is often valuable, and of course you accrue more benefits from actually being high-integrity when you're also known to be high-integrity. But those benefits mostly come from actually not lying, rather than from lying and being bad at it.
I've enjoyed playing social deduction games (Mafia, Werewolf, Among Us, Avalon, Blood on the Clocktower, etc.) for most of my adult life. I've become decent, but never great, at any of them. A couple of years ago, I wrote some comments on what I thought the biggest similarities and differences between social deduction games and real-life deception were. But recently, I decided that what I wrote before isn't that important relative to what I now think of as the biggest difference:
> If you are known as a good liar, is it generally advantageous or disadvantageous for you?
In social deduction games, the answer is almost always "disadvantageous." Being a good liar is often advantageous, but being known as a good liar is almost always bad for you: people (rightfully) don't trust what you say, you're seen as an unreliable ally, etc. In games with more than two sides (e.g. Diplomacy), being a good liar is seen as a structural advantage, so other players are more likely to gang up on you early.
Put another way, if you have the choice of being a good liar who's seen as a great liar, or being a great liar who's seen as a good liar, it's almost always advantageous to be the latter. Indeed, in many games it's actually better to be a good liar who's seen as a bad liar than to be a great liar who's seen as a great liar.
In real life, the answer is much more mixed. Sometimes, part of being a good liar means never seeming like a good liar ("the best salesman never makes you feel like he's a salesman").
But frequently, being seen as a good liar is more an asset than a liability. I'm thinking of people like Musk and Altman here, and also of the more mundane examples of sociopaths and con men ("he's a bastard, but he's our bastard"). It's often more advantageous to be seen as a good liar than to actually be one.
This is (partially) because real life has many more repeated games of coordination, and people want allies (and don't want enemies) who are capable. In comparison, individual board games are much more isolated, and their players start on objectively more even playing fields.
Generalizing further from direct deception, a history blog post once posed the following question:
> Q: Is it better to have a mediocre army and a great reputation for fighting, or a great army with a mediocre reputation?
>
> A: The former is better, pretty much every time.
I think in an ideal world we'd have prediction markets structured around several different levels of investment risk, so that people with different risk tolerances can make bets (and we might also observe fascinating divergences in the odds, e.g. if the implied probability of AGI is massively different between S&P 500-denominated bets and T-bill-denominated bets).
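As a small sketch of that divergence (illustrative parameters, same simplified break-even logic as the snippet above, ignoring risk premia), the same 1% event can trade at wildly different prices depending on what return the locked collateral gives up:

```python
def implied_yes_price(p_true, forgone_return, years):
    # Break-even YES price for a NO buyer whose locked collateral
    # gives up `forgone_return` per year versus their outside option.
    return 1 - (1 - p_true) * (1 + forgone_return) ** (-years)

p_true, years = 0.01, 5
# S&P-denominated bet: collateral forgoes ~nothing vs holding stocks.
print(implied_yes_price(p_true, 0.00, years))  # ~0.01, near the truth
# Cash bet whose collateral forgoes ~7%/yr of equity returns:
print(implied_yes_price(p_true, 0.07, years))  # ~0.29
```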