tl;dr:
From my current understanding, one of the following two things should be happening, and I would like to understand why neither does:
Either
1. Everyone in AI Safety who thinks slowing down AI is currently broadly a good idea should publicly support PauseAI.
Or
2. If pausing AI is much more popular than the organization PauseAI, that is a problem that should be addressed in some way.
Pausing AI
There does not seem to be a legible path to prevent possible existential risks from AI without slowing down its current progress.
I am aware that many people interested in AI Safety do not want to prevent AGI from ever being built, mostly based on transhumanist or longtermist reasoning.
Many people in AI Safety seem to be on board with the goal of “pausing AI”, including, for example, Eliezer Yudkowsky and the Future of Life Institute. Neither of them is saying “support PauseAI!”. Why is that?
One possibility I could imagine: Could it be advantageous to hide “maybe we should slow down on AI” in the depths of your writing instead of shouting “Pause AI! Refer to [organization] to learn more!”?
Another possibility is that the majority opinion is actually something like “AI progress shouldn’t be slowed down” or “we can do better than lobbying for a pause” or something else I am missing. This would explain why people neither support PauseAI nor see this as a problem to be addressed.
Even if you believe there is a better, more complicated way out of AI existential risk, pausing AI is still a useful baseline: whatever your plan is, it should be better than pausing AI, and it should not have bigger downsides than pausing AI has. There should be legible arguments and a broad consensus that your plan is better than pausing AI. Developing the ability to pause AI is also an important fallback option in case other approaches fail. PauseAI calls this “Building the Pause Button”:
Some argue that it’s too early to press the Pause Button (we don’t), but most experts seem to agree that it may be good to pause if developments go too fast. But as of now we do not have a Pause Button. So we should start thinking about how this would work, and how we can implement it.
Some info about myself: I'm a computer science student familiar with the main arguments of AI Safety. I have read a lot of Eliezer Yudkowsky, done the AISF course readings and exercises, and watched Robert Miles's videos.
My conclusion is that either
1. Everyone in AI Safety who thinks slowing down AI is currently broadly a good idea should publicly support PauseAI.
Or
2. If pausing AI is much more popular than the organization PauseAI, that is a problem that should be addressed in some way.
Why is (1) not happening and (2) not being worked on?
How much of a consensus is there on pausing AI?
Let's look at the two horns of the dilemma, as you put it:
Well, here are some reasons someone who wants to pause AI might not want to support the organization PauseAI:
So, if you think the specific measures they propose would limit AI systems that even many pessimists would consider totally ok and almost risk-free, then you might not want to push for these proposals but for more lenient ones that, precisely because they are more lenient, might actually get implemented: stop asking for the sky and actually get something concrete.
So, this is why people who want to pause AI might not want to support PauseAI.
And, well, why wouldn't PauseAI want to change?
Well -- I'm gonna speak broadly -- if you look at the history of PauseAI, they are marked by the belief that the measures proposed by others are insufficient for Actually Stopping AI: for instance, that the kind of policy measures proposed by people working at AI companies aren't enough; that the measures proposed by people funded by OpenPhil are often not enough; and so on. Similarly, they often believe that people who dispute these claims are nitpicking. (Citation needed.)
I don't think this dynamic is rare. Many movements have "radical wings" that more moderate organizations in the movement would characterize as having impracticable, maximalist policy goals and careless epistemics. And the radical wings would of course criticize back that the "moderate wings" have insufficient or cowardly policy goals and epistemics optimized for respectability rather than truth. The conflicts between them are intractable because people cannot move away from these prior beliefs about their interlocutors; in this respect the discourse around PauseAI seems unexceptional and rather predictable.
Their website is probably outdated. I read their proposals as “keep the current level of AI, regulate stronger AI”. Banning current LLaMA models seems silly from an x-risk perspective, in hindsight. I think PauseAI is perfectly fine with pausing “too early”, which I personally don't object to.
PauseAI is clearly focused on x-risk. The risks page seems like an at…