On Manifold, you'll see lots of markets like "Will AI wipe out humanity by 2040?". These prices aren't very informative; they trade more like opinion polls. Anthropic bias ruins the incentives, because YES holders can't collect mana without being dead. Even for NO holders, the timeframe and opportunity cost make it unappealing (higher annualized returns are available elsewhere). I can't do much about that obstacle yet.
But I wanted to try capturing AI risk from a more falsifiable angle.
Other disasters follow a familiar power-law curve. The worst novel plagues, for example, are disproportionately worse than mid-sized ones, which are disproportionately worse than small ones, and so on. The same pattern shows up in lots of domains, including financial disasters and deaths from human conflicts.
Though "power law curve" doesn't necessarily mean "small numbers". And it's plausible it could be "spikier" than a vanilla power law curve, as I suspect things start happening much faster. But whatever your prior is for AI death events, I suggest some sort of power law curve.
Some people say the upcoming AI transformations will be so violent that this "curve" is extremely spiky, almost binary. So maybe a few people die in self-driving cars, maybe a bunch of people get hacked and lose money. And then, within a few short years, it skips to "8 billion+ fall over dead during the same second, in a coordinated global nanotech strike."
Do they have enough information to justify updating so far away from the prior? That doesn't seem credible. I think their pre-paradigmatic thought experiments and fuzzy terms are much weaker evidence than some of them assert.
I've heard Tyler Cowen say he doesn't find doomers sincere. He claims they ought to take out debt, etc. I think I might've even once heard him say credit card debt [pending enough attention to find the clip].
This seems false. In general, the existing instruments are heavily contaminated "measures" of AI risk. Maybe there's an argument for holding semiconductor stocks (I think some of them do). But taking out massive debt based on this belief is generally ill-advised, even for people with short timelines. It's Kelly-suboptimal, and it's badly disadvantaged in the long run whenever the position isn't erased by observer selection effects. [1]
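A rough sketch of the Kelly point, with entirely hypothetical numbers: the Kelly criterion caps the fraction of bankroll a bettor should stake, and financing a doom bet with debt typically means staking more than 100%.

```python
# Kelly fraction for a binary bet: f* = (p*b - (1 - p)) / b, where p is the
# subjective win probability and b is the net odds received on a win.
# All numbers here are hypothetical, purely to illustrate the argument.

def kelly_fraction(p: float, b: float) -> float:
    """Bankroll fraction that maximizes long-run log wealth."""
    return (p * b - (1 - p)) / b

# Even a confident doomer (p = 0.8 that the debt never comes due, ~even odds)
# should stake at most 60% of bankroll -- not >100% via borrowed money.
print(kelly_fraction(0.8, 1.0))  # 0.6
print(kelly_fraction(0.5, 1.0))  # 0.0 -> at a coin flip, stake nothing
```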
Ideally, there would be efficient markets on this. In the ideal world, the slope of AI death risk over time would read like the bond market's yield curve. And we wouldn't have to say "See? The bond market says no AI death anytime soon"; we'd have instruments directly about the thing we mean. But people more influential than me would rather "wing it" on getting reliable estimates about lots of things than let compulsive people bankrupt themselves gambling.
Manifold it is.
I am offering markets to illustrate this curve. They have much less anthropic bias than extinction-based ones. The slope of the curve seems worth trying to understand, so here's my contribution.
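As a sketch of how a series of these markets could be read like a yield curve (all prices below are invented, not real Manifold quotes): cumulative probabilities at several horizons imply an annualized hazard rate, much like bootstrapping yields from bond prices.

```python
import math

# Hypothetical prices for "AI causes >= N deaths by year Y" markets.
# These probabilities are invented, not real Manifold quotes.
markets = {2030: 0.05, 2035: 0.12, 2040: 0.20}
base_year = 2025

for year, p in sorted(markets.items()):
    t = year - base_year
    # Constant-hazard model: p = 1 - exp(-h*t)  =>  h = -ln(1 - p) / t
    h = -math.log(1 - p) / t
    print(f"by {year}: cumulative {p:.0%} -> implied hazard ~{h:.2%}/yr")
```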
There are gray areas I haven't disambiguated yet. E.g. Jonathan Ray asks:
> If WIV uses AI to design its next bioweapon, and that starts another pandemic like covid-19 killing >10M people, does that count?
I'm afraid I don't have a fleshed-out taxonomy yet. But here's my current attitude, recorded for future improvement.
Great question, I haven't actually decided this yet. It will partly depend on what we find out about the event. Some dimensions I'm considering:
- The more easily we can point to a causal blunder by the human workers/managers -> the less I'd count it as an AI disaster.
- The worse a lab's general track record on safety -> the less I'd attribute the event to the AI.
- The more the infectiousness and virulence seem "explained by" the AI's contribution to the pathogen's development -> the more I'd count it.
I know this doesn't answer the question; it's a hard one! Open to suggestions (a toy sketch of the weighting follows).
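To make the framework slightly more concrete, here's a toy sketch of how those dimensions might be weighted. Every weight and score below is invented; this is a thinking aid, not an actual resolution rule.

```python
# Toy weighting of the dimensions above. Every weight and score is invented;
# this is a thinking aid, not my actual resolution rule.

def ai_attribution(human_blunder: float, lab_safety_record: float,
                   ai_explains_severity: float) -> float:
    """Inputs in [0, 1]; returns a rough 'count it as AI' score in [0, 1].

    human_blunder: how clearly a human causal blunder explains the event.
    lab_safety_record: how good the lab's general safety record was.
    ai_explains_severity: how much infectiousness/virulence traces to the AI.
    """
    return (0.4 * (1 - human_blunder)      # clear human blunder -> count less
            + 0.2 * lab_safety_record      # sloppy lab -> attribute less to AI
            + 0.4 * ai_explains_severity)  # AI-driven severity -> count more

# Hypothetical WIV-style scenario: obvious human blunder, poor safety record,
# but the pathogen's lethality is largely AI-designed.
print(ai_attribution(0.9, 0.2, 0.8))  # ~0.4 -> partial credit as an AI disaster
```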
Some of these gray areas are just difficult to taxonomize. Still, I feel it's shameful for a market maker to leave big ambiguities (and I think synthetic plagues are a massive risk). For context, my capacity to work on this is limited and divided. But I'll try to refine the criteria over time, roughly along the lines of the framework above.
Trust me bro
> not that realistically we had any control over SBF's actions or identity as an EA
Agree that little could be done then. But since then, I've noticed the community has an attitude of "Well, I'll just keep an eye out next time" or "I'll be less trusting next time". This is inadequate; we can do better.
I'm offering decision markets that should make it harder for frauds to go unnoticed, prioritizing crypto (still experimenting with criteria). But when I show EAs these, I'm kind of stunned by the lack of interest -- as if their personal judgment were somehow less corruptible at detecting fraud than a prediction market. This has been very alarming to see.
But who knows -- riffing off the post, maybe that just means prediction markets haven't built up enough reputation for LW/EA to trust them.
Part of the decision "should I go on semaglutide/Wegovy/Ozempic?" hinges on whether it improves lifespan. Weight loss is generally good for that, but here's a decision market specifically about lifespan.
Since I've been getting downvoted for what have felt like genuine attempts to create less-corruptible information, please try to keep an open mind or explain why you downvote.
Shortform it is! Thank you.
Also, I probably assume more readers accept the framework than actually do: that prediction markets are worth using, that a market's accuracy rises in a vaguely lognormal way with trading activity, and that a random reader usually can't beat a market unless it has few traders. I could try including a link to Scott Alexander making similar points for me.
You're probably right that a more directly relevant criterion could be tried. So here is a prototype series, starting with the 3 biggest exchanges.
Ah, I didn't even notice that clickbait aspect of the title; I'm so used to thinking of "whistleblower markets" as a thing. I've edited the title to just say Manifold market.
Thank you for the response. I actually do have a whole series where comparisons could be made between crypto enterprises. You're right that a single market is less informative. In the future, I'll assume I probably won't have the energy to write up a detailed comparison, and just won't bother trying to communicate my markets on LessWrong. Not meant to sound bitter -- this is useful and will avoid wasted time. (EDIT: Unless there's some way to format low-quality attempts as prototypes for feedback; perhaps that could be desirable.)
Thank you for describing this. My reaction in point form:
- I can understand that general prior against financial advice. After FTX, I had assumed many people here would want to know about such a high risk at the largest crypto exchange. I can certainly skip posting about such risk outliers here in the future.
- I'm not sure "not scholarly enough" is generally that predictive. I've seen many highly upvoted posts that didn't seem very scholarly. I understand the site is trying to maintain more scholarly habits, though. If a Manifold market doesn't tick the right boxes, then thank you for letting me know.
- On manipulation: such a market is probably harder to manipulate than the alternative methods I could be using to examine this. What I like about my post is that it's not me saying "I've talked to a few people, and here's my impression of the risks." It's using a novel epistemic tool, one I have reason to expect is better than my own judgment. I suspect that after FTX, everyone in EA is basically still relying on their personal judgment about fraud risks, and I expect that to underperform prediction markets in the long term. I can imagine people disagreeing with me about that, but I think it's very unfortunate. I still appreciate you letting me know.
I see the first 2 votes were downvotes. Consider me interested in understanding why.
Wanted: has anyone on LessWrong said they moved money off an exchange after seeing my markets? I made a meta-market asking whether literally anyone would be influenced to move money off any exchange mentioned in any of my crypto-related markets.
If you've been influenced by any of my markets on the relative exchange risks, please let me know so I can reward predictors.