All of Remmelt's Comments + Replies

Remmelt10

It's about world size, not computation, and has a startling effect that probably won't occur again with future chips

 

Thanks, I’ve got to say I’m a total amateur when it comes to GPU performance. So I’ll take the time to read your linked-to comment to understand it better. 

Thanks, I might be underestimating the impact of new Blackwell chips with improved computation. 

I’m skeptical whether offering “chain-of-thought” bots to more customers will make a significant difference. But I might be wrong – especially if new model architectures come out as well. 

And if corporations throw enough cheap compute behind it plus widespread personal data collection, they can get to commercially very useful model functionality. My hope is that there will be a market crash before that can happen, and we can enable other conc... (read more)

6Vladimir_Nesov
It's about world size, not computation, and has a startling effect that probably won't occur again with future chips, since Blackwell sufficiently catches up to models at the current scale. The projection for 2025 is $12bn at 3x/year growth (1.1x per month, so $1.7bn per month at the end of 2025, $3bn per month in mid-2026), and my pessimistic timeline above assumes that this continues up to either end of 2025 or mid-2026 and then stops growing after the hypothetical "crash", which gives $20-36bn per year.
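A minimal sketch of the arithmetic behind these figures (not the commenter's own calculation): it assumes smooth 3x/year growth and backs the end-of-2025 monthly run rate out of the $12bn total, which roughly reproduces the quoted monthly and annualised numbers.

```python
# Sketch of the projection arithmetic under an assumed smooth 3x/year growth curve.
growth_per_month = 3 ** (1 / 12)      # 3x/year ~= 1.1x per month

total_2025 = 12.0                     # projected 2025 spend, $bn

# Back out the December 2025 monthly run rate from the yearly total:
# total_2025 = m_dec * (1 + 1/g + ... + 1/g^11)
discount_sum = sum(growth_per_month ** -k for k in range(12))
m_dec_2025 = total_2025 / discount_sum              # ~$1.6-1.7bn per month
m_mid_2026 = m_dec_2025 * growth_per_month ** 6     # ~$3bn per month

# If growth then stops, the flat annual spend is 12x the monthly rate,
# which is roughly the $20-36bn per year range quoted above.
print(f"end of 2025: ~${m_dec_2025:.1f}bn/month -> ~${12 * m_dec_2025:.0f}bn/year")
print(f"mid-2026:    ~${m_mid_2026:.1f}bn/month -> ~${12 * m_mid_2026:.0f}bn/year")
```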

This is a neat and specific explanation of how I approached it. I tried to be transparent about it though.

If your bet is that something special about the economics of AI will cause it to crash, maybe your bet should be changed to this?

What's relevant for me is that there is an AI market crash, such that AI corporations are weakened and we in turn have more leeway to restrict their reckless activities. Practically, I don't mind if that's actually the result of a wider failing economy – I mentioned a US recession as a causal factor here.

Having said that, it would be easier to restrict AI corp activities when there is not a general market crash at the same ... (read more)

2Knight Lee
Oh yeah I forgot about that, the bet is about the strategic implications of an AI market crash, not proving your opinion on AI economics. Oops.

That's a good distinction.

I want to take you up on measuring actual inflows of capital into the large-AI-model development companies, rather than e.g. measuring the prices of stocks in companies leading on development – where declines may not much reflect an actual reduction in investment and spending on AI products. 

Consumers and enterprises cutting back on their subscriptions and private investors cutting back on their investment offers and/or cancelling previous offers – those seem reliable indicators of an actual crash.

It's plausible that a g... (read more)

2Knight Lee
I disagree that it's hard to decouple causation: if the AI market and general market crash by the same amount next year, I'll feel confident that it's the general market causing the AI market to crash, and not the other way around. Yearly AI spending has been estimated at at least $200 billion and maybe $600+ billion, but world GDP is $100,000 billion ($25,000 billion in the US). AI is still a very small player in the economy (even if you estimate it by expenditures rather than revenue).

That said, if the AI market crashes much more than the general market, it could be the economics of AI causing them to crash, or it could be the general market slowing a little bit triggering AI to crash by a lot. But either way, you deserve to win the bet.

If your bet is that something special about the economics of AI will cause it to crash, maybe your bet should be changed to this?

  • If AI crashes but the general market does not, you win money
  • If AI doesn't crash, you lose money
  • If both AI and the general market crash, the bet resolves as N/A

PS: I don't exactly have $25k to bet, and I've said elsewhere I do believe there's a big chance that AI spending will decrease.

Edit: Another thought is that changes in the amount of investment may swing further than changes in the value...? I'm no economist, but from my experience, when the value of housing goes down a little, housing sales drop by a ton. (This could be a bad analogy since homebuyers aren't all investors)[1]

1. ^ Though Google Deep Research agrees that this also occurs for AI companies

For sure!  Proceeds go to organisers who can act to legitimately restrict the weakened AI companies.

(Note that with a crash I don’t just mean some large reduction in the stock prices of tech companies that have been ‘leading’ on AI. I mean a broad-based reduction in the investments and/or customer spending going into the AI companies.)

Maybe I'm banking too much on some people in the AI Safety community continuing to think that AI "progress" will follow a rapid upward curve :)

Elsewhere I posted a guess of a 40% chance of an AI market crash this year, though I did not have precise crash criteria in mind there, and I would lower the percentage if it were judged by a few measures rather than by my sense of "that looks like a crash". 


 

2Knight Lee
Maybe you should try to define an AI market crash in such a way that it's mostly limited to AI market crashes caused by the economics of AI (rather than a general market crash). E.g. compare the spending/valuations/investments in AI with spending/valuations/investments elsewhere.
5DAL
If you think there's a 40% chance of a crash, then that's quite the vig you're allocating yourself on this bet at 1:7.  

Thanks, I hadn't seen that graph yet! I had only searched Manifold.

The odds of 1:7 imply a 12.5% chance of a crash. That's far outside of the consensus on that graph. Though I also notice that their criteria for a "bust or winter" are much stricter than where I'd set the threshold for a crash.
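For reference, the implied probability from 1:7 odds is just:

$$p = \frac{1}{1 + 7} = 0.125 = 12.5\%$$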

That makes me wonder whether I should have selected lower odds (for a higher return on the upside). Regardless, this month I'm prepared to take this bet.

 

but calling this "near free money" when you have to put up 25k to get it...

Fair enough – you'd have to... (read more)

Remmelt1-1

I think the US is in a recession now, and that the AI market has a ~40% chance of crashing with it this year.

Remmelt32

This is a solid point that I forgot to take into account here. 

What happens to GPU clusters inside the data centers built out before the market crash? 

If user demand slips and/or various companies stop training, compute prices will slump. As a result, cheap compute will be available to the remaining R&D teams, for at least the three or so years that the GPUs last. 

I find that concerning, because not only is compute cheap, but many of the researchers left using that compute will have reached an understanding that scaling transforme... (read more)

3eggsyntax
VC money, in my experience, doesn't typically mean that the VC writes a check and then the startup has it to do with as they want; it's typically given out in chunks and often there are provisions for the VC to change their mind if they don't think it's going well. This may be different for loans, and it's possible that a sufficiently hot startup can get the money irrevocably; I don't know.
Remmelt20

Glad to read your thoughts!

Agreed on being friends with communities who are not happy about AI. 

I’m personally not a fan of working with OpenAI or Anthropic, given that they’ve defected on people here concerned about a default trajectory to mass extinction, and used our research for their own ends.

Remmelt9-2

Yes, I get that you don’t just want to read about the problem but also about a potential solution. 

The next post in this sequence will summarise the plan by those experienced organisers.

These organisers led one of the largest grassroots movements in recent history. That took years of coalition building, and so will building a new movement. 

So they want to communicate the plan clearly, without inviting misinterpretations down the line. I myself have rushed writing about new plans before (when I added nuance to a press release put out by a time-pressed colleague at Stop AI). That... (read more)

Remmelt*100

Thanks for your takes!  Some thoughts on your points:

  • Yes, OpenAI has useful infrastructure and brands. It's hard to imagine a scenario where they wouldn't just downsize and/or be acquired by e.g. Microsoft.
  • If OpenAI or Anthropic goes down like that, I'd be surprised if some other AI companies don't go down with them. This is an industry that very much relies on stories convincing people to buy into the promise of future returns, given that most companies are losing money on developing and releasing large models. When those stories fail to play out wit
... (read more)
2Knight Lee
:) thank you for saying thanks and replying.

You're right, $600 billion/year sounds pretty unsustainable. That's like 60 OpenAIs, and more than half the US military budget. Maybe the investors pouring in that money will eventually run out of money that they're willing to invest, and it will shrink. I think there is a 50% chance that at some point before we build AGI/ASI, the amount of spending on AI research will be halved (compared to where it is now).

It's also a good point how the failure might cascade. I'm reminded of people discussing whether something like the "dot-com bubble" will happen to AI, which I somehow didn't think of when writing my comment. Right now my opinion is a 25% chance that there will be a cascading market crash when OpenAI et al. finally run out of money. A lot of seemingly stable things have unexpectedly crashed, and AI companies don't look more stable than them. It's one possible future. I still think the possible future where this doesn't happen is more likely, because one company failing does not dramatically reduce the expected value of future profits from AI, it just moves it elsewhere.

I agree that "AI Notkilleveryoneism" should be friends with these other communities who aren't happy about AI. I still think the movement should work with AI companies and lobby the government. Even if AI companies go bankrupt, AI researchers will move elsewhere and continue to have influence.
Remmelt10

Yes, the huge ramp-up in investment by companies into deep learning infrastructure & products (since 2012) at billion-dollar losses also reminds me of the dot-com bubble. The exception is that now it's not only small investment firms and individual investors providing the money – big tech conglomerates are also diverting profits from their cash-cow businesses.

I can't speak with confidence about whether OpenAI is more like Amazon or more like the large internet startups that failed. Right now, though, OpenAI does not seem to have much of a moat.

Remmelt20

Yes, good point. There is a discussion of that here.

Remmelt*30

Glad you spotted that! Those two quoted claims do contradict each other, as stated. I’m surprised I had not noticed that.

 

but I'm not sure where that money goes.

The Information had a useful table on OpenAI’s projected 2024 costs. Linking to a screenshot here.

 

But I'm not sure why the article says that "every single paying customer" only increases the company's burn rate given that they spend less money running the models than they get in revenue.

I’m not sure either why Ed Zitron wrote that. When I’m back on my laptop, I’ll look at older articles ... (read more)

Remmelt10

Yes, I was also wondering what ordering it by jurisdiction contributed. I guess it's nice for some folks to have it be more visual, even if the visual aspects don't contribute much?

Remmelt*18-6

Update: back up to 70% chance.

Just spent two hours compiling different contributing factors. Now that I've weighed those factors up more comprehensively, I don't expect to change my prediction by more than ten percentage points over the coming months. Though I'll write here if I do.

My prediction: 70% chance that by August 2029 there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc., and that both will persist for at least three months.

 

For:

  • Large model labs losing money
    • OpenAI made loss of ~$5 billio
... (read more)
Remmelt*20

Update: back up to 60% chance. 

I overreacted before, IMO, with the update down to 40% (and undercompensated with the update down to 80%, which I soon after thought should have been 70%).

The leader in terms of large model revenue, OpenAI, has basically failed to build something worth calling GPT-5, and Microsoft is now developing more models in-house to compete with them. If OpenAI fails in its effort to combine its existing models into something new and special (likely), that’s a blow to the perception of the industry.

A recession might also be coming this year, or at least in the next four years, which I made a prediction about before.

Remmelt1-1

Update: back up to 50% chance. 

Noting Microsoft’s cancelling of data center deals, and the fact that the ‘AGI’ labs are still losing cash and, with DeepSeek, are increasingly competing on a commodity product. 

2Remmelt
Update: back up to 60% chance. I overreacted before, IMO, with the update down to 40% (and undercompensated with the update down to 80%, which I soon after thought should have been 70%). The leader in terms of large model revenue, OpenAI, has basically failed to build something worth calling GPT-5, and Microsoft is now developing more models in-house to compete with them. If OpenAI fails in its effort to combine its existing models into something new and special (likely), that’s a blow to the perception of the industry. A recession might also be coming this year, or at least in the next four years, which I made a prediction about before.
Remmelt10

Update: 40% chance. 

I very much underestimated/missed the speed of tech leaders influencing the US government through the Trump election/presidency. Got caught flat-footed by this. 

I still think it’s not unlikely for there to be an AI crash as described above within the next 4 years and 8 months, but it could be from levels of investment much higher than where we are now. A “large reduction in investment” at that level looks a lot different from a large reduction in investment from the level that markets were at 4 months ago. 

1Remmelt
Update: back up to 50% chance. Noting Microsoft’s cancelling of data center deals, and the fact that the ‘AGI’ labs are still losing cash and, with DeepSeek, are increasingly competing on a commodity product. 
Remmelt30

Of the recent wave of AI companies, the earliest one, DeepMind, relied on the Rationalists for its early funding. The first investor, Peter Thiel, was a donor to Eliezer Yudkowsky’s Singularity Institute for Artificial Intelligence (SIAI, but now MIRI, the Machine Intelligence Research Institute) who met DeepMind’s founder at an SIAI event. Jaan Tallinn, the most important Rationalist donor, was also a critical early investor…

…In 2017, the Open Philanthropy Project directed $30 million to OpenAI…

 

Good overview of how, through AI Safety, funders ended up... (read more)

Remmelt2-4

It's because you keep making incomprehensible arguments that don't make any sense

Good to know that this is why you think AI Safety Camp is not worth funding. 

Once a core part of the AGI non-safety argument is put into maths, so that it is comprehensible to people in your circle, it’d be interesting to see how you respond.

Remmelt1-1

Lucius, the text exchanges I remember us having during AISC6 were about the question of whether 'ASI' could control comprehensively for the evolutionary pressures it would be subjected to. You and I were commenting on a GDoc with Forrest. I was taking your counterarguments against his arguments seriously – continuing to investigate those counterarguments after you had bowed out.

You held the notion that ASI would be so powerful that it could control for any of its downstream effects that evolution could select for. This is a common opinion held in the community. Bu... (read more)

5Lucius Bushnaq
I think it is very fair that you are disappointed. But I don't think I can take it back. I probably wouldn’t have introduced the word crank myself here. But I do think there’s a sense in which Oliver’s use of it was accurate, if maybe needlessly harsh. It does vaguely point at the right sort of cluster in thing-space.

It is true that we discussed this and you engaged with a lot of energy and in good faith. But I did not think Forrest’s arguments were convincing at all, and I couldn’t seem to manage to communicate to you why I thought that. Eventually, I felt like I wasn’t getting through to you, Quintin Pope also wasn’t getting through to you, and continuing started to feel draining and pointless to me.

I emerged from this still liking you and respecting you, but thinking that you are wrong about this particular technical matter in a way that does seem like the kind of thing people imagine when they hear ‘crank’.
Remmelt*-10

I agree that Remmelt seems kind of like he has gone off the deep end


Could you be specific here?  

You are sharing a negative impression ("gone off the deep end"), but not what it is based on. This puts me and others in a position of not knowing whether you are e.g. reacting with a quick broad strokes impression, and/or pointing to specific instances of dialogue that I handled poorly and could improve on, and/or revealing a fundamental disagreement between us.

For example, is it because on Twitter I spoke up against generative AI models that harm communi... (read more)

habryka191

I think many people have given you feedback. It is definitely not because of "strategic messaging". It's because you keep making incomprehensible arguments that don't make any sense and then get triggered when anyone tries to explain why they don't make sense, while making statements that are wrong with great confidence.

As it is, this is dissatisfying. On this forum, I'd hope[1] there is a willingness to discuss differences in views first, before moving to broadcasting subjective judgements[2] about someone.

People have already spent many hours givin... (read more)

Remmelt20

For example, it might be the case that, for some reason, alignment would only have been solved if and only if Abraham Lincoln wasn't assassinated in 1865. That means that humans in 2024 in our world (where Lincoln was assassinated in 1865) will not be able to solve alignment, despite it being solvable in principle.


With this example, you might still assert that "possible worlds" are world states reachable through physics from past states of the world. I.e., you could still assert that alignment possibility is path-dependent on historical world states.

But you... (read more)

1Satron
Yup, that's roughly what I meant. However, one caveat would be that I would change "physically possible" to "metaphysically/logically possible" because I don't know if worlds with different physics could exist, whereas I am pretty sure that worlds with different metaphysical/logical laws couldn't exist. By that, I mean stuff like the law of non-contradiction and "if a = b, then b = a."

I think the main antidote against this is to ask the person you are speaking with to define the term if they are making claims in which equivocation is especially likely.

Yeah, that's reasonable.
Remmelt10

Thanks!

With ‘possible worlds’, do you mean ‘possible to be reached from our current world state’?

And what do you mean by ‘alignment’? I know that can sound like an unnecessary question. But if it’s not specified, how can people soundly assess whether it is technically solvable?

4Satron
By "possible worlds," I mean all worlds that are consistent with laws of logic, such as the law of non-contradiction. For example, it might be the case that, for some reason, alignment would only have been solved if and only if Abraham Lincoln wasn't assassinated in 1865. That means that humans in 2024 in our world (where Lincoln was assasinated in 1865) will not be able to solve alignment, despite it being solvable in principle. My answer is kind of similar to @quila's. I think that he means roughly the same thing by "space of possible mathematical things." I don't think that my definition of alignment is particularly important here because I was mostly clarifying how I would interpret the sentence if a stranger said it. Alignment is a broad word, and I don't really have the authority to interpret stranger's words in a specific way without accidentally misrepresenting them. For example, one article managed to find six distinct interpretations of the word:
Remmelt10

Thanks, when you say “in the space of possible mathematical things”, do you mean “hypothetically possible in physics” or “possible in the physical world we live in”?

2[anonymous]
Possible to be run on a computer in the actual physical world
Answer by Remmelt*30

Here's how I specify terms in the claim:

  • AGI is a set of artificial components, connected physically and/or by information signals over time, that in aggregate sense and act autonomously over many domains.
    • 'artificial' as configured out of a (hard) substrate that can be standardised to process inputs into outputs consistently (vs. what our organic parts can do).
    • 'autonomously' as continuing to operate without needing humans (or any other species that share a common ancestor with humans).
  • Alignment is at the minimum the control of the AGI's components (as modifie
... (read more)
Remmelt10

Good to know. I also quoted your more detailed remark on AI Standards Lab at the top of this post.

Remmelt10

I have made so many connections that have been instrumental to my research. 


I didn't know this yet, and I'm glad to hear it!  Thank you for the kind words, Nell.

Remmelt10

Fair question. You can assume it is AoE.

Research leads are not going to be too picky in terms of what hour you send the application in, so there is no need to worry about the exact deadline. Even if you send in your application on the next day, that probably won't significantly impact your chances of getting picked up by your desired project(s).

Sooner is better, since many research leads will begin composing their teams after the 17th, but there is no hard cut-off point.

Remmelt10

Thanks!  These are thoughtful points. See some clarifications below:
 

AGI could be very catastrophic even when it stops existing a year later.

You're right. I'm not even covering all the other bad stuff that could happen in the short term that we might still be able to prevent, like AGI triggering global nuclear war.

What I'm referring to is unpreventable convergence on extinction.
 

If AGI makes earth uninhabitable in a trillion years, that could be a good outcome nonetheless.

Agreed, that could be a good outcome if it were attainable.

In prac... (read more)

Remmelt10

Update: reverting my forecast back to 80% likelihood for these reasons.

1Remmelt
Update: 40% chance.  I very much underestimated/missed the speed of tech leaders influencing the US government through the Trump election/presidency. Got caught flat-footed by this.  I still think it’s not unlikely for there to be an AI crash as described above within the next 4 years and 8 months but it could be from levels of investment much higher than where we are now. A “large reduction in investment” at that level looks a lot different than a large reduction in investment from the level that markets were at 4 months ago. 
Remmelt70

I'm also feeling less "optimistic" about an AI crash given:

  1. The election result involving a bunch of tech investors and execs pushing for influence through Trump's campaign (with a stated intention to deregulate tech).
  2. A military veteran saying that the military could be holding up the AI industry like "Atlas holding the globe", and an AI PhD saying that hyperscaled data centers, deep learning, etc, could be super useful for war.

I will revise my previous forecast back to 80%+ chance.

Remmelt10

Yes, I agree formalisation is needed. See comment by flandry39 in this thread on how one might go about doing so. 

Worth considering is that there are actually two aspects that make it hard to define the term ‘alignment’ so as to allow for sufficiently rigorous reasoning:

  1. It must allow for logically valid reasoning (therefore requiring formalisation).
  2. It must allow for empirically sound reasoning (ie. the premises correspond with how the world works). 

In my reply above, I did not help you much with (1.). Though even while still using the English lang... (read more)

4harfe
This is maybe not the central point, but I note that your definition of "alignment" doesn't precisely capture what I understand "alignment" or a good outcome from AI to be: AGI could be very catastrophic even when it stops existing a year later. If AGI makes earth uninhabitable in a trillion years, that could be a good outcome nonetheless. I don't know whether that covers "humans can survive on mars with a space-suit", but even then, if humans evolve/change to handle situations that they currently do not survive under, that could be part of an acceptable outcome.
Remmelt10

For an overview of why such a guarantee would turn out impossible, I suggest taking a look at Will Petillo's post Lenses of Control.

Remmelt1-2

Defining alignment (sufficiently rigorous so that a formal proof of (im)possibility of alignment is conceivable) is a hard thing!

It's less hard than you think, if you use a minimal-threshold definition of alignment: 

That "AGI" continuing to exist, in some modified form, does not result eventually in changes to world conditions/contexts that fall outside the ranges that existing humans could survive under. 

1harfe
This is not a formal definition. Your English sentence has no apparent connection to mathematical objects, which would be necessary for a rigorous and formal definition.
Remmelt10

Yes, I think there is a more general proof available. This proof form would combine limits to predictability and so on, with a lethal dynamic that falls outside those limits.

Remmelt10

The question is more if it can ever be truly proved at all, or if it doesn't turn out to be an undecidable problem.

Control limits can show that it is an undecidable problem. 

A limited scope of control can in turn be used to prove that a dynamic convergent on human-lethality is uncontrollable. That would be a basis for an impossibility proof by contradiction (cannot control AGI effects to stay in line with human safety).

Remmelt30

Awesome directions. I want to bump this up.
 

This might include AGI predicting its own future behaviour, which is kind of essential for it to stick to a reliably aligned course of action.

There is a simple way of representing this problem that already shows the limitations. 

Assume that AGI continues to learn new code from observations (inputs from the world) – since learning is what allows the AGI to stay autonomous and adaptable in acting across changing domains of the world.

Then in order for AGI code to be run to make predictions about relev... (read more)

Remmelt10

Just found your insightful comment. I've been thinking about this for three years. Some thoughts expanding on your ideas:
 

my idea is more about whether alignment could require that the AGI is able to predict its own results and effects on the world (or the results and effects of other AGIs like it, as well as humans)...

In other words, alignment requires sufficient control. Specifically, it requires AGI to have a control system with enough capacity to detect, model, simulate, evaluate, and correct outside effects propagated by the AGI's own components.... (read more)

Remmelt10

No actually, assuming the machinery has a hard substrate and is self-maintaining is enough. 

Remmelt10

we could create aligned ASI by simulating the most intelligent and moral people

This is not an existence proof, because it does not take into account the difference in physical substrates.

Artificial General Intelligence would be artificial, by definition. In fact, what allows for the standardisation of hardware components is the fact that the (silicon) substrate is hard under human living temperatures and pressures. That allows for configurations to stay compartmentalised and stable.

Human “wetware” has a very different substrate. It’s a soup of bouncing org... (read more)

Remmelt10

Just found a podcast on OpenAI’s bad financial situation.

It’s hosted by someone in AI Safety (Jacob Haimes) and an AI post-doc (Igor Krawzcuk).

https://kairos.fm/posts/muckraiker-episodes/muckraiker-episode-004/

Remmelt10

Noticing no response here after we addressed superficial critiques and moved to discussing the actual argument.

For those few interested in questions raised above, Forrest wrote some responses: http://69.27.64.19/ai_alignment_1/d_241016_recap_gen.html

The claims made will feel unfamiliar, and so will the reasoning paths. I suggest (again) taking the time to consider what is meant. If a conclusion looks intuitively wrong from some AI Safety perspective, it may be valuable to explicitly consider the argumentation and premises behind that. 

Remmelt10

BTW if anyone does want to get into the argument, Will Petillo’s Lenses of Control post is a good entry point. 

It’s concise and correct – a difficult combination to achieve here. 

Remmelt21

Resonating with you here!  Yes, I think autonomous corporations (and other organisations) would result in society-wide extraction, destabilisation and totalitarianism.

2[anonymous]
Thanks! I should have been more clear that the trajectory toward level 5 (with all human virtue/trust being hackable for instrumental gains) itself is concerning, not just the eventual leap when it gets there.
Remmelt126

Sam Altman demonstrating what kind of actions you can get away with in front of everyone's eyes seems problematic.


Very much agreeing with this.
