tl;dr:
From my current understanding, one of the following two things should be happening, and I would like to understand why neither is:
Either
1. Everyone in AI Safety who thinks slowing down AI is currently broadly a good idea should publicly support PauseAI.
Or
2. If pausing AI is much more popular than the organization PauseAI, that is a problem that should be addressed in some way.
Pausing AI
There does not seem to be a legible path to prevent possible existential risks from AI without slowing down its current progress.
I am aware that many people interested in AI Safety do not want to prevent AGI from ever being built, mostly based on transhumanist or longtermist reasoning.
Many people in AI Safety seem to be on board with the goal of “pausing AI”, including, for example, Eliezer Yudkowsky and the Future of Life Institute. Neither of them is saying “support PauseAI!”. Why is that?
One possibility I could imagine: could it be advantageous to bury “maybe we should slow down on AI” in the depths of your writing instead of shouting “Pause AI! Refer to [organization] to learn more!”?
Another possibility is that the majority opinion is actually something like “AI progress shouldn’t be slowed down” or “we can do better than lobbying for a pause” or something else I am missing. This would explain why people neither support PauseAI nor see this as a problem to be addressed.
Even if you believe there is a better, more complicated way out of AI existential risk, pausing AI is still a useful baseline: whatever your plan is, it should be better than pausing AI, and it should not have bigger downsides. There should be legible arguments and a broad consensus that your plan clears that baseline. Developing the ability to pause AI is also an important fallback option in case other approaches fail. PauseAI calls this “Building the Pause Button”:
> Some argue that it’s too early to press the Pause Button (we don’t), but most experts seem to agree that it may be good to pause if developments go too fast. But as of now we do not have a Pause Button. So we should start thinking about how this would work, and how we can implement it.
Some info about myself: I'm a computer science student familiar with the main arguments of AI Safety. I have read a lot of Eliezer Yudkowsky, done the AISF course readings and exercises, and watched Robert Miles's videos.
My conclusion is that either
1. Everyone in AI Safety who thinks slowing down AI is currently broadly a good idea should publicly support PauseAI.
Or
2. If pausing AI is much more popular than the organization PauseAI, that is a problem that should be addressed in some way.
Why is (1) not happening and (2) not being worked on?
How much of a consensus is there on pausing AI?
Their website is probably outdated. I read their proposals as “keep the current level of AI, regulate stronger AI”. In hindsight, banning current LLaMA models seems silly from an x-risk perspective. I think PauseAI is perfectly fine with pausing “too early”, which I personally don't object to.
PauseAI is clearly focused on x-risk. The risks page reads like an attempt to guide the general public from the intuitively plausible “Present dangers” slowly towards the (exotic-sounding) x-risks. You can disagree with that approach, of course. I would disagree that mixing AI Safety and AI Ethics is being “very careless about truth”.
Thank you for answering my question! I wanted to know what people here think about PauseAI, so this fits well.
Yes. I hope we can get better at coordination... I would frame PauseAI as “the reasonable [aspiring] mass movement”. I like that it is easy to support or join PauseAI even without an ML PhD. StopAI is a more radical organization than PauseAI.