Research coordinator of the Stop/Pause area at AI Safety Camp.
See this explainer on why AGI could not be controlled enough to stay safe:
lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
Apr: Californian civil society nonprofits
This petition has the most rigorous legal arguments in my opinion.
Others I know also back a block (#JusticeForSuchir, Ed Zitron, Stop AI, creatives for copyright). What’s cool is how diverse the backers are, from skeptics to doomers, and from tech whistleblowers to creatives.
It's about word size, not computation, and has a startling effect that probably won't occur again with future chips.
Thanks, I’ve got to say I’m a total amateur when it comes to GPU performance, so I’ll take the time to read your linked comment to understand it better.
Thanks, I might be underestimating the impact of new Blackwell chips with improved computation.
I’m skeptical whether offering “chain-of-thought” bots to more customers will make a significant difference. But I might be wrong, especially if new model architectures come out as well.
And if corporations throw enough cheap compute behind it, plus widespread personal data collection, they can get to commercially very useful model functionality. My hope is that there will be a market crash before that can happen, and that we can enable other concerned communities to restrict the development and release of dangerously unscoped models.
But even then, OpenAI might get to ~$25bn annualized revenue that won't be going away.
What is this revenue estimate assuming?
If your bet is that something special about the economics of AI will cause it to crash, maybe your bet should be changed to this?
What's relevant for me is that there is an AI market crash, such that AI corporations are weakened and we in turn have more leeway to restrict their reckless activities. Practically, I don't mind if that's actually the result of a wider failing economy – I mentioned a US recession as a causal factor here.
Having said that, it would be easier to restrict AI corporations' activities when there is not a general market crash at the same time (since the latter would make it harder to fund organisers, and harder for working citizens to mobilise).
PS: I don't exactly have $25k to bet, and I've said elsewhere I do believe there's a big chance that AI spending will decrease.
Understood! And I appreciate you discussing thoughts with me here.
Another thought is that changes in the amount of investment may swing further than changes in the value...?
Interesting point! That feels right, but I lack experience/clarity about how investments work here.
That's a good distinction.
I want to take you up on measuring actual inflows of capital into the large-AI-model development companies, rather than e.g. the prices of stocks of companies leading on development, where declines may not much reflect an actual reduction in investment and spending on AI products.
Consumers and enterprises cutting back on their subscriptions, and private investors scaling back their investment offers and/or cancelling previous ones – those seem like reliable indicators of an actual crash.
It's plausible that a general market crash feeds into, and is reflective of, worsening economics for the AI companies. So it seems hard to decouple causation there. And I'd still call it an AI market crash even if investments/valuations are going down to a similar extent in other industries. So I would not try to control for other market declines happening around the same time, but your suggested indicators make sense!
For sure! Proceeds go to organisers who can act to legitimately restrict the weakened AI companies.
(Note that with a crash I don’t just mean some large reduction in the stock prices of tech companies that have been ‘leading’ on AI. I mean a broad-based reduction in the investments and/or customer spending going into the AI companies.)
Maybe I'm banking too much on some people in the AI Safety community continuing to think that AI "progress" will follow a rapid upward curve :)
Elsewhere I posted a guess of a 40% chance of an AI market crash this year, though I did not have precise crash criteria in mind there, and would lower the percentage once it's judged by a few concrete measures rather than by my sense of "that looks like a crash".
Nice! Cool to see that turned into a prediction market.
~
BTW, I adjusted my guesstimate of winning down to a quarter.
I now guess it's more like a 1/8 chance (meaning that, from my perspective, Marcus wins this bet in expectation). It is pretty hard to imagine so many paying customers going away, particularly as revenues have been growing over the last year.
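To spell out "in expectation": a minimal sketch, assuming symmetric stakes of $S$ on each side (my assumption; the actual odds of the bet aren't restated here). With a 1/8 chance that I win:

$$\mathbb{E}[\text{my payoff}] = \tfrac{1}{8}\,S - \tfrac{7}{8}\,S = -\tfrac{3}{4}\,S < 0$$

So at even stakes, any win probability below 1/2 puts the bet in Marcus's favour in expectation; asymmetric odds would shift that break-even point.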
Marcus has thought this one through carefully, and I'm naturally sticking to the commitment. If we end up seeing a crash down the line, I invite all of you to consider with me how to make maximum use of that opportunity.
I still think a crash is fairly likely, but also that, even if there is a large slump in investment across the industry, most customers could end up continuing to pay for their subscriptions.
The main problem I see is that OpenAI and Anthropic are losing money on the products they sell, which are facing commodification (i.e. downward pressure on prices). But unless investments run dry soon, they can continue for some years and eventually find ways to lock in customers (e.g. through personalisation) and improve monetisation (e.g. through personalised ads).