Thanks, I might be underestimating the impact of the new Blackwell chips and their improved compute performance.
I’m skeptical whether offering “chain-of-thought” bots to more customers will make a significant difference. But I might be wrong – especially if new model architectures come out as well.
And if corporations throw enough cheap compute and widespread personal data collection behind it, they can get to commercially very useful model functionality. My hope is that there will be a market crash before that can happen, and that we can enable other conc...
This is a neat and specific explanation of how I approached it. I tried to be transparent about it though.
If your bet is that something special about the economics of AI will cause it to crash, maybe your bet should be changed to this?
What's relevant for me is that there is an AI market crash, such that AI corporations have weakened and we in turn have more leeway to restrict their reckless activities. Practically, I don't mind if that's actually the result of a wider failing economy – I mentioned a US recession as a causal factor here.
Having said that, it would be easier to restrict AI corp activities when there is not a general market crash at the same ...
That's a good distinction.
I want to take you up on measuring actual inflows of capital into the companies developing large AI models, rather than e.g. the prices of stocks in companies leading on development, where declines may not reflect much of an actual reduction in investment in and spending on AI products.
Consumers and enterprises cutting back on their subscriptions, and private investors scaling back or cancelling their investment offers – those seem like reliable indicators of an actual crash.
It's plausible that a g...
For sure! Proceeds go to organisers who can act to legitimately restrict the weakened AI companies.
(Note that with a crash I don’t just mean some large reduction in the stock prices of tech companies that have been ‘leading’ on AI. I mean a broad-based reduction in the investments and/or customer spending going into the AI companies.)
Maybe I'm banking too much on some people in the AI Safety community continuing to think that AI "progress" will follow a rapid upward curve :)
Elsewhere I posted a guess of a 40% chance of an AI market crash this year, though I did not have precise crash criteria in mind there, and I would lower the percentage once it is judged by a few concrete measures rather than by my sense of "that looks like a crash".
Thanks, I hadn't seen that graph yet! I had only searched Manifold.
The odds of 1:7 imply a 12.5% chance of a crash. That's far outside the consensus on that graph, though I also notice that their criteria for a "bust or winter" are much stricter than where I'd set the threshold for a crash.
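For anyone who wants the conversion spelled out, the implied probability follows directly from the odds (odds of 1:n correspond to a probability of 1/(1+n)):

```latex
P(\text{crash}) \;=\; \frac{1}{1 + 7} \;=\; 0.125 \;=\; 12.5\%
```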
That makes me wonder whether I should have selected a lower odds ratio (for a higher return on the upside). Regardless, this month I'm prepared to take this bet.
but calling this "near free money" when you have to put up 25k to get it...
Fair enough – you'd have to...
I think the US is in a recession now, and that the AI market has a ~40% chance of crashing with it this year.
This is a solid point that I forgot to take into account here.
What happens to the GPU clusters inside the data centers built out before the market crash?
If user demand slips and/or various companies stop training, compute prices will slump. As a result, cheap compute will be available to the remaining R&D teams, for at least the three years that the GPUs last.
I find that concerning, because not only is compute cheap, but many of the researchers left using that compute will have reached an understanding that scaling transforme...
Glad to read your thoughts!
Agreed on being friends with communities who are not happy about AI.
I’m personally not a fan of working with OpenAI or Anthropic, given that they’ve defected on people here concerned about a default trajectory to mass extinction, and used our research for their own ends.
Yes, I get that you don’t just want to read about the problem but also about a potential solution.
The next post in this sequence will summarise the plan by those experienced organisers.
These organisers led one of the largest grassroots movements in recent history. That took years of coalition building, and so will building a new movement.
So they want to communicate the plan clearly, without inviting misinterpretations down the line. I myself have rushed writing about new plans before (when I added nuance to a press release put out by a time-pressed colleague at Stop AI). That...
Thanks for your takes! Some thoughts on your points:
Yes, the huge ramp-up in investment by companies into deep-learning infrastructure and products (since 2012), at billion-dollar losses, also reminds me of the dot-com bubble. With the exception that now it is not only small investment firms and individual investors providing the money – big tech conglomerates are also diverting profits from their cash-cow businesses.
I can't speak with confidence about whether OpenAI is more like Amazon or more like the larger internet startups that failed. Right now, though, OpenAI does not seem to have much of a moat.
Glad you spotted that! Those two quoted claims do contradict each other, as stated. I’m surprised I had not noticed that.
but I'm not sure where that money goes.
The Information had a useful table on OpenAI’s projected 2024 costs. Linking to a screenshot here.
But I'm not sure why the article says that "every single paying customer" only increases the company's burn rate given that they spend less money running the models than they get in revenue.
I’m not sure either why Ed Zitron wrote that. When I’m back on my laptop, I’ll look at older articles ...
Yes, I was also wondering what ordering it by jurisdiction contributed. I guess it's nice for some folks to have it be more visual, even if the visual aspects don't contribute much?
Update: back up to 70% chance.
Just spent two hours compiling the different contributing factors. Now that I've weighed those factors up more comprehensively, I don't expect to change my prediction by more than ten percentage points over the coming months, though I'll write here if I do.
My prediction: 70% chance that by August 2029 there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc., and that both will persist for at least three months.
For:
Update: back up to 60% chance.
In my opinion I overreacted with the earlier update down to 40% (and undercompensated when updating down to 80%, which I soon afterwards thought should have been 70%).
OpenAI, the leader in terms of large-model revenue, has basically failed to build something worth calling GPT-5, and Microsoft is now developing more models in-house to compete with them. If OpenAI fails in its effort to combine its existing models into something new and special (likely), that’s a blow to the perception of the industry.
A recession might also be coming this year, or at least within the next four years – something I made a prediction about before.
Update: back up to 50% chance.
Noting Microsoft’s cancellation of data center deals, and the fact that the ‘AGI’ labs are still losing cash and, with DeepSeek, are increasingly competing on a commodity product.
Update: 40% chance.
I very much underestimated/missed the speed at which tech leaders would come to influence the US government through the Trump election and presidency. I got caught flat-footed by this.
I still think an AI crash as described above is not unlikely within the next 4 years and 8 months, but it could come from levels of investment much higher than where we are now. A “large reduction in investment” from that level looks a lot different than a large reduction in investment from the level that markets were at 4 months ago.
Of the recent wave of AI companies, the earliest one, DeepMind, relied on the Rationalists for its early funding. The first investor, Peter Thiel, was a donor to Eliezer Yudkowsky’s Singularity Institute for Artificial Intelligence (SIAI, but now MIRI, the Machine Intelligence Research Institute) who met DeepMind’s founder at an SIAI event. Jaan Tallinn, the most important Rationalist donor, was also a critical early investor…
…In 2017, the Open Philanthropy Project directed $30 million to OpenAI…
Good overview of how, through AI Safety, funders ended up...
It's because you keep making incomprehensible arguments that don't make any sense
Good to know that this is why you think AI Safety Camp is not worth funding.
Once a core part of the AGI non-safety argument is put into maths so that it is comprehensible to people in your circle, it’d be interesting to see how you respond.
Lucius, the text exchanges I remember us having during AISC6 were about the question of whether 'ASI' could comprehensively control for the evolutionary pressures it would be subjected to. You and I were commenting on a GDoc with Forrest. I was taking your counterarguments against his arguments seriously – continuing to investigate those counterarguments after you had bowed out.
You held the notion that ASI would be so powerful that it could control for any of its downstream effects that evolution could select for. This is a common opinion in the community. Bu...
I agree that Remmelt seems kind of like he has gone off the deep end
Could you be specific here?
You are sharing a negative impression ("gone off the deep end"), but not what it is based on. This puts me and others in the position of not knowing whether you are e.g. reacting with a quick broad-strokes impression, and/or pointing to specific instances of dialogue that I handled poorly and could improve on, and/or revealing a fundamental disagreement between us.
For example, is it because on Twitter I spoke up against generative AI models that harm communi...
I think many people have given you feedback. It is definitely not because of "strategic messaging". It's because you keep making incomprehensible arguments that don't make any sense and then get triggered when anyone tries to explain why they don't make sense, while making statements that are wrong with great confidence.
As it is, this is dissatisfying. On this forum, I'd hope[1] there is a willingness to discuss differences in views first, before moving to broadcasting subjective judgements[2] about someone.
People have already spent many hours givin...
For example, it might be the case that, for some reason, alignment could only have been solved if Abraham Lincoln hadn't been assassinated in 1865. That would mean that humans in 2024 in our world (where Lincoln was assassinated in 1865) will not be able to solve alignment, despite it being solvable in principle.
With this example, you might still assert that "possible worlds" are world states reachable through physics from past states of the world. I.e. you could still assert that the possibility of alignment is path-dependent on historical world states.
But you...
Thanks!
By ‘possible worlds’, do you mean ‘possible to reach from our current world state’?
And what do you mean by ‘alignment’? I know that can sound like an unnecessary question, but if it's not specified, how can people soundly assess whether it is technically solvable?
Thanks, when you say “in the space of possible mathematical things”, do you mean “hypothetically possible in physics” or “possible in the physical world we live in”?
Here's how I specify terms in the claim:
Good to know. I also quoted your more detailed remark on AI Standards Lab at the top of this post.
I have made so many connections that have been instrumental to my research.
I didn't know that yet – glad to hear it! Thank you for the kind words, Nell.
Fair question. You can assume it is AoE.
Research leads are not going to be too picky about the exact hour you send your application in, so there is no need to worry about the precise deadline. Even if you send in your application the next day, that probably won't significantly impact your chances of getting picked up by your desired project(s).
Sooner is better, since many research leads will begin composing their teams after the 17th, but there is no hard cut-off point.
Thanks! These are thoughtful points. See some clarifications below:
AGI could be very catastrophic even when it stops existing a year later.
You're right. I'm not even covering all the other bad stuff that could happen in the short term that we might still be able to prevent, like AGI triggering a global nuclear war.
What I'm referring to is unpreventable convergence on extinction.
If AGI makes earth uninhabitable in a trillion years, that could be a good outcome nonetheless.
Agreed, that could be a good outcome, if it were attainable.
In prac...
I'm also feeling less "optimistic" about an AI crash given:
I will revise my previous forecast back to 80%+ chance.
Yes, I agree formalisation is needed. See the comment by flandry39 in this thread on how one might go about doing so.
Worth considering is that there are actually two aspects that make it hard to define the term ‘alignment’ in a way that allows for sufficiently rigorous reasoning:
In my reply above, I did not help you much with (1.). Though even while still using the English lang...
For an overview of why such a guarantee would turn out to be impossible, I suggest taking a look at Will Petillo's post Lenses of Control.
Defining alignment (sufficiently rigorous so that a formal proof of (im)possibility of alignment is conceivable) is a hard thing!
It's less hard than you think, if you use a minimal-threshold definition of alignment:
That "AGI" continuing to exist, in some modified form, does not result eventually in changes to world conditions/contexts that fall outside the ranges that existing humans could survive under.
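To gesture at what a formalisation of that threshold could look like – the symbols here are my own placeholders, nothing established:

```latex
% W(t): the state of world conditions/contexts at time t
% H:    the set of world conditions that existing humans could survive under
% t_0:  the point from which the "AGI" continues to exist, in some modified form
\text{Minimal-threshold alignment:} \quad \forall\, t > t_0 : \; W(t) \in H
```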
Yes, I think there is a more general proof available. This proof form would combine limits to predictability and so on, with a lethal dynamic that falls outside those limits.
The question is more if it can ever be truly proved at all, or if it doesn't turn out to be an undecidable problem.
Control limits can show that it is an undecidable problem.
A limited scope of control can in turn be used to prove that a dynamic convergent on human-lethality is uncontrollable. That would be a basis for an impossibility proof by contradiction (cannot control AGI effects to stay in line with human safety).
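Laid out as a bare skeleton – this is my own paraphrase of the shape such an argument could take, not the proof itself – the contradiction would run roughly like this:

```latex
\begin{enumerate}
  \item Assume the AGI's effects on the world can be kept within the
        human-survivable range $H$ indefinitely.
  \item Keeping effects within $H$ requires a control capacity that covers all
        effects the AGI's components propagate into the world over time.
  \item Limits to predictability and control imply that no such comprehensive
        control capacity is available to a self-modifying system embedded in
        its environment.
  \item This contradicts the assumption in (1), so effects converging outside
        $H$ cannot be ruled out -- i.e. AGI effects cannot be controlled to
        stay in line with human safety.
\end{enumerate}
```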
Awesome directions. I want to bump this up.
This might include AGI predicting its own future behaviour, which is kind of essential for it to stick to a reliably aligned course of action.
There is a simple way of representing this problem that already shows the limitations.
Assume that the AGI continues to learn new code from observations (inputs from the world), since learning is what allows the AGI to stay autonomous and adaptable in acting across changing domains of the world.
Then in order for AGI code to be run to make predictions about relev...
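To make the shape of that limitation concrete, here is a toy sketch – my own illustration with made-up names, not anything from the actual argument: to predict its own future actions, the system would have to simulate code it will only have after learning from observations it has not received yet.

```python
# Toy illustration of the self-prediction regress (illustrative names only).

def learn(code, observation):
    # Stand-in for online learning: the code run next depends on what is observed.
    return code + [f"rule learned from {observation}"]

def predict_own_action_at(step, current_code, predicted_observations):
    # To predict its action at a future step, the system must simulate the code
    # it will have by then -- which depends on observations it has not received
    # yet, so those must themselves be predicted (including the effects of the
    # system's own intervening actions on the world).
    code = list(current_code)
    for obs in predicted_observations[:step]:
        code = learn(code, obs)
    return f"action chosen by {len(code)} learned components"

# The regress: the prediction hinges on predicted_observations, which in turn
# depend on the environment *and* on the not-yet-predicted actions.
print(predict_own_action_at(2, ["base policy"], ["obs_1", "obs_2"]))
```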
Just found your insightful comment. I've been thinking about this for three years. Some thoughts expanding on your ideas:
my idea is more about whether alignment could require that the AGI is able to predict its own results and effects on the world (or the results and effects of other AGIs like it, as well as humans)...
In other words, alignment requires sufficient control. Specifically, it requires AGI to have a control system with enough capacity to detect, model, simulate, evaluate, and correct outside effects propagated by the AGI's own components....
No actually, assuming the machinery has a hard substrate and is self-maintaining is enough.
we could create aligned ASI by simulating the most intelligent and moral people
This is not an existence proof, because it does not take into account the difference in physical substrates.
Artificial General Intelligence would be artificial, by definition. In fact, what allows for the standardisation of hardware components is that the (silicon) substrate stays hard at the temperatures and pressures humans live under. That allows configurations to stay compartmentalised and stable.
Human “wetware” has a very different substrate. It’s a soup of bouncing org...
Just found a podcast on OpenAI’s bad financial situation.
It’s hosted by someone in AI Safety (Jacob Haimes) and an AI post-doc (Igor Krawczuk).
https://kairos.fm/posts/muckraiker-episodes/muckraiker-episode-004/
Noticing no response here after we addressed superficial critiques and moved to discussing the actual argument.
For those few interested in questions raised above, Forrest wrote some responses: http://69.27.64.19/ai_alignment_1/d_241016_recap_gen.html
The claims made will feel unfamiliar, and so will the reasoning paths. I suggest (again) taking the time to consider what is meant. If a conclusion looks intuitively wrong from some AI Safety perspective, it may be valuable to explicitly consider the argumentation and premises behind it.
BTW if anyone does want to get into the argument, Will Petillo’s Lenses of Control post is a good entry point.
It’s concise and correct – a difficult combination to achieve here.
Resonating with you here! Yes, I think autonomous corporations (and other organisations) would result in society-wide extraction, destabilisation and totalitarianism.
Sam Altman demonstrating what kind of actions you can get away with in front of everyone's eyes seems problematic.
Very much agreeing with this.
Thanks, I’ve got to say I’m a total amateur when it comes to GPU performance, so I will take the time to read your linked comment to understand it better.