tl;dr:

From my current understanding, one of the following two things should be happening and I would like to understand why it doesn’t:

Either

  1. Everyone in AI Safety who thinks slowing down AI is currently broadly a good idea should publicly support PauseAI.

    Or

  2. If pausing AI is much more popular than the organization PauseAI, that is a problem that should be addressed in some way.

 

Pausing AI

There does not seem to be a legible path to prevent possible existential risks from AI without slowing down its current progress.

 

I am aware that many people interested in AI Safety do not want to prevent AGI from being built EVER, mostly based on transhumanist or longtermist reasoning.

Many people in AI Safety seem to be on board with the goal of “pausing AI”, including, for example, Eliezer Yudkowsky and the Future of Life Institute. Neither of them is saying “support PauseAI!”. Why is that?

One possibility I could imagine: Could it be advantageous to hide “maybe we should slow down on AI” in the depths of your writing instead of shouting “Pause AI! Refer to [organization] to learn more!”?

 

Another possibility is that the majority opinion is actually something like “AI progress shouldn’t be slowed down” or “we can do better than lobbying for a pause” or something else I am missing. This would explain why people neither support PauseAI nor see this as a problem to be addressed.

Even if you believe there is a better, more complicated way out of AI existential risk, the pausing AI approach is still a useful baseline: Whatever your plan is, it should be better than pausing AI and it should not have bigger downsides than pausing AI has. There should be legible arguments and a broad consensus that your plan is better than pausing AI. Developing the ability to pause AI is also an important fallback option in case other approaches fail. PauseAI calls this “Building the Pause Button”:

Some argue that it’s too early to press the Pause Button (we don’t), but most experts seem to agree that it may be good to pause if developments go too fast. But as of now we do not have a Pause Button. So we should start thinking about how this would work, and how we can implement it.

 

Some info about myself: I'm a computer science student and familiar with the main arguments of AI Safety: I have read a lot of Eliezer Yudkowsky, did the AISF course reading and exercises, and have watched Robert Miles's videos.

 

My conclusion is that either

  1. Everyone in AI Safety who thinks slowing down AI is currently broadly a good idea should publicly support PauseAI.

    Or

  2. If pausing AI is much more popular than the organization PauseAI, that is a problem that should be addressed in some way.

 

Why is (1) not happening and (2) not being worked on?

How much of a consensus is there on pausing AI?

10 Answers

1a3orn


4725

Let's look at the two horns of the dilemma, as you put it:

  • Why do many people who want to pause AI not support the organization "PauseAI"?
  • Why would the organization "PauseAI" not change itself so that people who want to pause AI can support it?

Well, here are some reasons someone who wants to pause AI might not want to support the organization PauseAI:

  • When you visit the website for PauseAI, you might find some very steep proposals for Pausing AI -- such as requiring the "Granting [of] approval for new training runs of AI models above a certain size (e.g. 1 billion parameters)" or "Banning the publication of such algorithms" that improve AI performance or prohibiting the training of models that "are expected to exceed a score of 86% on the MMLU benchmark" unless their safety can be guaranteed. Implementing these measures would be really hard -- a one-billion parameter model is quite small (I could train one); banning the publication of information on this stuff would be considered by many an infringement on freedom of speech; and there are tons of models now that do better than 86% on the MMLU and have done no harm.

So, if you think the specific measures proposed by them would limit an AI that even many pessimists would think is totally ok and almost risk-free, then you might not want to push for these proposals but for more lenient proposals that, because they are more lenient, might actually get implemented. To stop asking for the sky and actually get something concrete.

  • If you look at the kind of claims that PauseAI makes in their risks page, you might believe that some of them seem exaggerated, or that PauseAI is simply throwing all the negative things they can find about AI into a big list to make it seem bad. If you think that credibility is important to the effort to pause AI, then PauseAI might seem very careless about truth in a way that could backfire.

So, this is why people who want to pause AI might not want to support PauseAI.

And, well, why wouldn't PauseAI want to change?

Well -- I'm gonna speak broadly -- if you look at the history of PauseAI, they are marked by belief that the measures proposed by others are insufficient for Actually Stopping AI -- for instance the kind of policy measures proposed by people working at AI companies isn't enough; that the kind of measures proposed by people funded by OpenPhil are often not enough; and so on. Similarly, they often believe that people who push back on these claims are nitpicking, and so on. (Citation needed.)

I don't think this dynamic is rare. Many movements have "radical wings" that more moderate organizations in the movement would characterize as having impracticable maximalist policy goals and careless epistemics. And the radical wings would of course criticize back that the "moderate wings" have insufficient or cowardly policy goals and epistemics optimized for respectability rather than truth. And the conflicts between them are intractable because people cannot move away from these prior beliefs about their interlocutors; in this respect the discourse around PauseAI seems unexceptional and rather predictable.

Well -- I'm gonna speak broadly -- if you look at the history of PauseAI, they are marked by belief that the measures proposed by others are insufficient for Actually Stopping AI -- for instance the kind of policy measures proposed by people working at AI companies isn't enough; that the kind of measures proposed by people funded by OpenPhil are often not enough; and so on.

They are correct as far as I can tell. Can you identify a policy measure proposed by an AI company or an OpenPhil-funded org that you think would be sufficient to stop unsafe AI devel... (read more)

5Davidmanheim
"sufficient to stop unsafe AI development? I think there is indeed exactly one such policy measure, which is SB 1047"

I think it's obviously untrue that this would stop unsafe AI - it is as close as any measure I've seen, and would provide some material reduction in risk in the very near term, but (even if applied universally, and no one tried to circumvent it) it would not stop future unsafe AI.
4MichaelDickens
Yeah I actually agree with that, I don't think it was sufficient, I just think it was pretty good. I wrote the comment too quickly without thinking about my wording.
0Tao Lin
EU AI Code of Practice is better, a little closer to stopping AI development
3Davidmanheim
Disagree that it could stop dangerous work, and doubly disagree given the way things are headed, especially with removing whistleblower protections and the lack of useful metrics for compliance. I don't think it would even be as good as SB-1047, even in the amended weaker form. I was previously more hopeful that if the EU COP was a strong enough code, then when things inevitably went poorly anyways we could say "look, doing pretty good isn't enough, we need to actually regulate specific parts of this dangerous technology," but I worry that it's not even going to be strong enough to make that argument.

If you look at the kind of claims that PauseAI makes in their risks page, you might believe that some of them seem exaggerated, or that PauseAI is simply throwing all the negative things they can find about AI into a big list to make it seem bad. If you think that credibility is important to the effort to pause AI, then PauseAI might seem very careless about truth in a way that could backfire.

A couple notes on this:

  • AFAICT PauseAI US does not do the thing you describe.
  • I've looked at a good amount of research on protest effectiveness. There are many obser
... (read more)
51a3orn
I'm not trying to get into the object level here. But people could both:

  • Believe that making such hard-to-defend claims could backfire, disagreeing with those experiments that you point out, or
  • Believe that making such claims violates virtue-ethics-adjacent commitments to truth, or
  • Just not want to be associated, in an instinctive yuck kinda way, with people who make these kinds of dubious-to-them claims.

Of course people could be wrong about the above points. But if you believed these things, then they'd be intelligible reasons not to be associated with someone, and I think a lot of the claims PauseAI makes are such that a large number of people would have these reactions.

When you visit the website for PauseAI, you might find some very steep proposals for Pausing AI [...] (I could train one)

Their website is probably outdated. I read their proposals as “keep the current level of AI, regulate stronger AI”. Banning current LLaMA models seems silly from an x-risk perspective, in hindsight. I think PauseAI is perfectly fine with pausing “too early”, which I personally don't object to.

 

If you look at the kind of claims that PauseAI makes in their risks page

PauseAI is clearly focused on x-risk. The risks page seems like an at... (read more)

MichaelDickens

243

I feel kind of silly about supporting PauseAI. Doing ML research, or writing long fancy policy reports feels high status. Public protests feel low status. I would rather not be seen publicly advocating for doing something low-status. I suspect a good number of other people feel the same way.

(I do in fact support PauseAI US, and I have defended it publicly because I think it's important to do so, but it makes me feel silly whenever I do.)

That's not the only reason why people don't endorse PauseAI, but I think it's an important reason that should be mentioned.

I notice they have a "Why do you protest" section in their FAQ. I hadn't heard of these studies before.

Regardless, I still think there's room to make protests cooler and more fun and less alienating, and when I mentioned this to them they seemed very open to it.

mako yass

204

Personally, because I don't believe the policy in the organization's name is viable or helpful.

As to why I don't think it's viable, it would require the Trump-Vance administration to organise a strong global treaty to stop developing a technology that is currently the US's only clear economic lead over the rest of the world.

If you attempted a pause, I think it wouldn't work very well and it would rupture and leave the world in a worse place: Some AI research is already happening in a defence context. This is easy to ignore while defence isn't the frontier. The current apparent absence of frontier AI research in a military context is miraculous, strange, and fragile. If you pause in the private context (which is probably all anyone could do), defence AI will become the frontier in about three years, and after that I don't think any further pause is possible, because it would require a treaty against secret military technology R&D. Military secrecy is pretty strong right now: hundreds of billions yearly are known to be spent on mostly secret military R&D, and probably more is actually spent.
(to be interested in a real pause, you have to be interested in secret military R&D. So I am interested in that, and my position right now is that it's got hands you can't imagine)

To put it another way, after thinking about what pausing would mean, it dawned on me that pausing means moving AI underground, and from what I can tell that would make it much harder to do safety research or to approach the development of AI with a humanitarian perspective. It seems to me like the movement has already ossified a slogan that makes no sense in light of the complex and profane reality that we live in, which is par for the course when it comes to protest activism movements.

pausing means moving AI underground, and from what I can tell that would make it much harder to do safety research

I would be overjoyed if all AI research were driven underground! The main source of danger is the fact that there are thousands of AI researchers, most of whom are free to communicate and collaborate with each other. Lone researchers or small underground cells of researchers who cannot publish their results would be vastly less dangerous than the current AI research community even if there are many lone researchers and many small underground ... (read more)

The Trump-Vance administration's support base is suspicious of academia, and has been willing to defund scientific research on the grounds of it being too left-wing. There is a schism emerging between multiple factions of the right wing: the right-wingers that are more tech-oriented and the ones that are nation/race-oriented (the H1B visa argument being an example). This could lead to a decrease in support for AI in the future.

Another possibility is that the United States could lose global relevance due to economic and social pressures from the outside world, and from organizational mismanagement and unrest from within. Then the AI industry could move to the UK/EU, making the main players in AI the UK/EU and China.

A relevant FAQ entry: AI development might go underground

I think I disagree here:

By tracking GPU sales, we can detect large-scale AI development. Since frontier model GPU clusters require immense amounts of energy and custom buildings, the physical infrastructure required to train a large model is hard to hide.

This will change, and it is only the case for frontier development anyway. I also think we're probably in a hardware overhang. I don't think there is anything inherently difficult to hide about AI; that's likely just a fact about the present iteration of AI.

But I... (read more)

Julian Bradshaw

17-2

I think the concept of Pausing AI just feels unrealistic at this point.

  1. Previous AI safety pause efforts (GPT-2 release delay, 2023 Open Letter calling for a 6 month pause) have come to be seen as false alarms and overreactions
  2. Both industry and government are now strongly committed to an AI arms race
  3. A lot of the non-AI-Safety opponents of AI want a permanent stop/ban in the fields they care about, not a pause, so it lacks for allies
  4. It's not clear that meaningful technical AI safety work on today's frontier AI models could have been done before they were invented; therefore a lot of technical AI safety researchers believe we still need to push capabilities further before a pause would truly be useful 

PauseAI could gain substantial support if there's a major AI-caused disaster, so it's good that some people are keeping the torch lit for that possibility, but supporting it now means burning political capital for little reason. We'd get enough credit for "being right all along" just by having pointed out the risks ahead of time, and we want to influence regulation/industry now, so we shouldn't make Pause demands that get you thrown out of the room. In an ideal world we'd spend more time understanding current models, though.

supporting it now means burning political capital for little reason


I think this is wrong - the cost in political capital for saying that it's the best solution seems relatively low, especially if coupled with an admission that it's not politically viable. What I see instead is people dismissing it as a useful idea even in theory, saying it would be bad if it were taken seriously by anyone, and moving on from there. And if nothing else, that's acting as a way to narrow the Overton window for other proposals!

3Julian Bradshaw
I'm generally pretty receptive to "adjust the Overton window" arguments, which is why I think it's good PauseAI exists, but I do think there's a cost in political capital to saying "I want a Pause, but I am willing to negotiate". It's easy for your opponents to cite your public Pause support and then say, "look, they want to destroy America's main technological advantage over its rivals" or "look, they want to bomb datacenters, they're unserious". (Yes, Pause as typically imagined requires international treaties, but the attack lines would probably still work; there was tons of lying in the California SB 1047 fight and we lost in the end.)

The political position AI safety has mostly taken instead on US regulation is "we just want some basic reporting and transparency", which is much harder to argue against, achievable, and still pretty valuable.

I can't say I know for sure this is the right approach to public policy. There's a reason politics is a dark art; there's a lot of triangulating between "real" and "public" stances, and it's not costless to compromise your dedication to the truth like that. But I think it's part of why there isn't as much support for PauseAI as you might expect. (The other main part being what 1a3orn says: that PauseAI is on the radical end of opinions in AI safety, and it's natural there'd be a gap between moderates and them.)
2Davidmanheim
Very briefly, the fact that "The political position AI safety has mostly taken" is a single stance is evidence that there's no room for even other creative solutions, so we've failed hard at expanding that Overton window. And unless you are strongly confident in that as the only possibly useful strategy, that is a horribly bad position for the world to be in as AI continues to accelerate and likely eliminate other potential policy options.

Zach Stein-Perlman


175

A. Many AI safety people don't support relatively responsible companies unilaterally pausing, which PauseAI advocates. (Many do support governments slowing AI progress, or preparing to do so at a critical point in the future. And many of those don't see that as tractable for them to work on.)

B. "Pausing AI" is indeed more popular than PauseAI, but it's not clearly possible to make a more popular version of PauseAI that actually does anything; any such organization will have strategy/priorities/asks/comms that alienate many of the people who think "yeah I support pausing AI."

C. 

There does not seem to be a legible path to prevent possible existential risks from AI without slowing down its current progress.

This seems confused. Obviously P(doom | no slowdown) < 1. Many people's work reduces risk in both slowdown and no-slowdown worlds, and it seems pretty clear to me that most of them shouldn't switch to working on increasing P(slowdown).

B. "Pausing AI" is indeed more popular than PauseAI, but it's not clearly possible to make a more popular version of PauseAI that actually does anything; any such organization will have strategy/priorities/asks/comms that alienate many of the people who think "yeah I support pausing AI."

This strikes me as a very strange claim. You're essentially saying, even if a general policy is widely supported, it's practically impossible to implement any specific version of that policy? Why would that be true?

For example I think a better alternative to "nobody fund... (read more)

2Davidmanheim
Banning nuclear weapons is exactly like this. If it could be done universally and effectively, it would be great, but any specific version seems likely to tilt the balance of power without accomplishing the goal. That's kind of what happened with the anti-nuclear movement, and it ended up doing lots of harm, because the things that could be stopped were the good ones!
1MichaelDickens
The global stockpile of nuclear weapons is down 6x since its peak in 1986. Hard to attribute causality but if the anti-nuclear movement played a part in that, then I'd say it was net positive. (My guess is it's more attributable to the collapse of the Soviet Union than to anything else, but the anti-nuclear movement probably still played some nonzero role)
2Davidmanheim
I'm sure it played some nonzero role, but is it anything like enough of an impact, and enough of a role to compensate for all the marginal harms of global warming because of stopping deployment of nuclear power (which they are definitely largely responsible for)?

Obviously P(doom | no slowdown) < 1.


You think it's obviously materially less? Because there is a faction, including Eliezer and many others, that think it's epsilon, and claim that the reduction in risk from any technical work is less than the acceleration it causes. (I think you're probably right about some of that work, but I think it's not at all obviously true!)

Thank you for responding!

A: Yeah. I'm mostly positive about their goal to work towards "building the Pause button". I think protesting against "relatively responsible companies" makes a lot of sense when these companies seem to use their lobbying power more against AI-Safety-aligned Governance than in favor of it. You're obviously very aware of the details here.

B: I asked my question because I'm frustrated with that. Is there a way for AI Safety to coordinate a better reaction?

C:

There does not seem to be a legible path to prevent possible existential risks

... (read more)

Buck

94

Some quick takes:

  • "Pause AI" could refer to many different possible policies.
  • I think that if humanity avoided building superintelligent AI, we'd massively reduce the risk of AI takeover and other catastrophic outcomes.
  • I suspect that at some point in the future, AI companies will face a choice between proceeding more slowly with AI development than they're incentivized to, and proceeding more quickly while imposing huge risks. In particular, I suspect it's going to be very dangerous to develop ASI.
  • I don't think that it would be clearly good to pause AI development now. This is mostly because I don't think that the models being developed literally right now pose existential risk.
  • Maybe it would be better to pause AI development right now because this will improve the situation later (e.g. maybe we should pause until frontier labs implement good enough security that we can be sure their Slack won't be hacked again, leaking algorithmic secrets). But this is unclear and I don't think it immediately follows from "we could stop AI takeover risk by pausing AI development before the AIs are able to take over".
  • Many of the plausible "pause now" actions seem to overall increase risk. For example, I think it would be bad for relatively responsible AI developers to unilaterally pause, and I think it would probably be bad for the US to unilaterally force all US AI developers to pause if they didn't simultaneously somehow slow down non-US development.
    • (They could slow down non-US development with actions like export controls.)
  • Even in the cases where I support something like pausing, it's not clear that I want to spend effort on the margin actively supporting it; maybe there are other things I could push on instead that have better ROI.
  • I'm not super enthusiastic about PauseAI the organization; they sometimes seem to not be very well-informed, they sometimes argue for conclusions that I think are wrong, and I find Holly pretty unpleasant to interact with, because she seems uninformed and prone to IMO unfair accusations that I'm conspiring with AI companies. My guess is that there could be an organization with similar goals to PauseAI that I felt much more excited for.

I think it would probably be bad for the US to unilaterally force all US AI developers to pause if they didn't simultaneously somehow slow down non-US development.

It seems to me that to believe this, you have to believe all of these four things are true:

  1. Solving AI alignment is basically easy
  2. Non-US frontier AI developers are not interested in safety
  3. Non-US frontier AI developers will quickly catch up to the US
  4. If US developers slow down, then non-US developers are very unlikely to also slow down—either voluntarily, or because the US strong-arms them i
... (read more)
5Buck
I disagree that you have to believe those four things in order to believe what I said. I believe some of those and find others too ambiguously phrased to evaluate.

Re your model: I think your model is basically just: if we race, we go from a 70% chance that the US "wins" to a 75% chance the US wins, and we go from a 50% chance of "solving alignment" to a 25% chance? Idk how to apply that here: isn't your squiggle model talking about whether racing is good, rather than whether unilaterally pausing is good? Maybe you're using "race" to mean "not pause" and "not race" to mean "pause"; if so, that's super confusing terminology. If we unilaterally paused indefinitely, surely we'd have less than a 70% chance of winning.

In general, I think you're modeling this extremely superficially in your comments on the topic. I wish you'd try modeling this with more granularity than "is alignment hard" or whatever. I think that if you try to actually make such a model, you'll likely end up with a much better sense of where other people are coming from. If you're trying to do this, I recommend reading posts where people explain strategies for passing safely through the singularity, e.g. like this.
1MichaelDickens
Yes, the model is more about racing than about pausing, but I thought it was applicable here. My thinking was that there is a spectrum of development speed with "completely pause" on one end and "race as fast as possible" on the other. Pushing more toward the "pause" side of the spectrum has the ~opposite effect as pushing toward the "race" side.

  1. I've never seen anyone else try to quantitatively model it. As far as I know, my model is the most granular quantitative model ever made. Which isn't to say it's particularly granular (I spent less than an hour on it), but this feels like an unfair criticism.
  2. In general I am not a fan of criticisms of the form "this model is too simple". All models are too simple. What, specifically, is wrong with it?

I had a quick look at the linked post and it seems to be making some implicit assumptions, such as:

  1. the plan of "use AI to make AI safe" has a ~100% chance of working (the post explicitly says this is false, but then proceeds as if it's true)
  2. there is a ~100% chance of slow takeoff
  3. if you unilaterally pause, this doesn't increase the probability that anyone else pauses, doesn't make it easier to get regulations passed, etc.

I would like to see some quantification of the form "we think there is a 30% chance that we can bootstrap AI alignment using AI; a unilateral pause will only increase the probability of a global pause by 3 percentage points; and there's only a 50% chance that the 2nd-leading company will attempt to align AI in a way we'd find satisfactory; therefore we think the least-risky plan is to stay at the front of the race and then bootstrap AI alignment." (Or a more detailed version of that.)
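The crude race-vs-pause arithmetic being debated in this exchange can be sketched in a few lines. The numbers below are illustrative assumptions taken from the comments above (racing raises P(US "wins") from 70% to 75% while cutting P(alignment solved) from 50% to 25%), not endorsed estimates, and `p_good_outcome` is a hypothetical helper, not anyone's actual model:

```python
def p_good_outcome(p_us_wins: float, p_alignment_solved: float) -> float:
    """A deliberately crude model: a good outcome requires both that the
    US 'wins' the race and that alignment is solved in time, treated as
    independent for illustration."""
    return p_us_wins * p_alignment_solved

# Assumed numbers from the thread above.
no_race = p_good_outcome(0.70, 0.50)  # 0.35
race = p_good_outcome(0.75, 0.25)     # 0.1875

print(f"P(good | no race) = {no_race:.4f}")
print(f"P(good | race)    = {race:.4f}")
```

Under these particular assumptions the small gain in win probability is swamped by the loss in alignment probability, which is the intuition the model is meant to convey; the disagreement in the thread is over whether the input numbers (and the independence assumption) are anywhere near right.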

I think we basically agree, but I think the Overton window needs to be expanded, and Pause is (unfortunately) already outside that window. So I differentiate between the overall direction, which I support strongly, and the concrete proposals and the organizations involved.

Mis-Understandings

3-4

How much of a consensus is there on pausing AI

Not much, compared to the push to get the stuff that already exists out to full deployment (for various institutions this has a meaningful impact on profit margins).

People don't want to fight that, even if they think that further capabilities are a bad price/risk/benefit tradeoff.

There is a coordination problem: if you ask for a pause and people say no, you can't make other asks.

Third, they might just not mesh with or trust that particular movement and the consolidation of platform it represents, and so want to make points on their own instead of joining a bigger organization's demands.

Noosphere89

20

One particular reason I haven't seen addressed very much for why I don't support/endorse PauseAI, beyond the usual objections, is that there probably aren't going to be that many warning shots that can actually affect policy, at least conditional on misalignment being a serious problem (which doesn't translate to >50% probability of doom). The most likely takeover plan (at least assuming no foom/software intelligence explosion) fundamentally relies not on killing people, but on launching internal rogue deployments to sabotage alignment work and figuring out a way to control the AI company's compute, since causing a catastrophe/existential risk is much harder than launching an internal rogue deployment (without defenses).

So PauseAI's theory of change fundamentally requires that we live in worlds where both alignment is hard and effective warning shots exist, and these conditions are quite unlikely to be true, especially given that pausing is likely not the most effective action you could be doing from a comparative advantage perspective.

I'm not going to say that PauseAI is net-negative; it has positive expected value, but IMO far less than a lot of pause advocates claim:

https://www.lesswrong.com/posts/rZcyemEpBHgb2hqLP/ai-control-may-increase-existential-risk#jChY95BeDeptDpnZK

Important part of the comment:

I think most of the effective strategies for AIs seeking power don't involve escalating to something which is much more likely to trigger a strong response than "the AI company caught the AI trying to escape". I think the best strategies are things like:

  • Launch a rogue internal deployment.
  • Sabotage a bunch of work done at the AI company. Or possibly some work done externally. This includes stuff like sabotaging alignment work, backdooring robot armies, backdooring future training runs, etc.
  • Escape and then directly try to take over once your chances are sufficiently good that this is better than biding your time.
  • Generally try to manipulate and persuade such that AI takeover is easier and more likely.

Of these, I think only escape could trigger a much stronger response if we catch it after it escalates some rather than before. I don't see how "we caught the AI trying to launch an unmonitored version of itself" is going to play that differently from "we caught that the AI did launch an unmonitored version of itself". Most of these don't escalate in some way which would trigger a response such that catching it after the fact is similar to catching an attempt. (In some cases where reversion is possible like work sabotage, there might be no meaningful distinction.) Further, without some effort on control, we might be much less likely to catch either! And, in some cases, control measures I'm interested in focus on after-the-fact detection.

Adam Kaufman

10

I think AI safety has very limited political capital at the moment. Pausing AI just isn’t going to happen, so advocating for it makes you sound unreasonable and allows people to comfortably ignore your other opinions. I prefer trying to push for interventions which make a difference with much less political capital, like convincing frontier labs to work on and implement control measures.

Prometheus

0-1

I don't think survivable worlds, at our point in time, involve something like PauseAI. I don't condemn them, and welcome people to try. But it's feeling more and more like Hiroo Onoda, continuing to fight guerilla warfare in the Philippines for decades, refusing to believe the war was over.

4 comments

Quick list of reasons for me:

  • I'm averse to attending mass protests myself because they make it harder to think clearly and I usually don't agree with everything any given movement stands for.
  • Under my worldview, an unconditional pause is a much harder ask than is required to save most worlds if p(doom) is 14% (the number stated on the website). It seems highly impractical to implement compared to more common regulatory frameworks and is also super unaesthetic because I am generally pro-progress.
  • The economic and political landscape around AI is complicated enough that agreeing with their stated goals is not enough; you need to agree with their theory of change.
    • Broad public movements require making alliances which can be harmful in the long term. Environmentalism turned anti-nuclear, a decades-long mistake which has accelerated climate change by years. PauseAI wants to include people who oppose AI on its present dangers, which makes me uneasy. What if the landscape changes such that the best course of action is contrary to PauseAI's current goals?
  • I think PauseAI's theory of change is weak
    • From reading the website, they want to leverage protests, volunteer lobbying, and informing the public into an international treaty banning superhuman AI and a unilateral supply-chain pause. It seems hard for the general public to have significant influence over this kind of issue unless AI rises to the top issue for most Americans, since the current top issue is improving the economy, which directly conflicts with a pause.
  • There are better theories of change
    • Strengthening RSPs into industry standards, then regulations.
    • Directly informing elites about the dangers of AI, rather than the general public.
  • History (e.g. civil rights movement) shows that moderates not publicly endorsing radicals can result in a positive radical flank effect making moderates' goals easier to achieve.

One frustration I have about people on LessWrong and elsewhere is that they love criticizing every piece of advice or strategy, while never truly supporting any alternatives.

Most upvoted comments here argue against PauseAI, or even claim that asking for a pause overall is a waste of political capital...!

Yet I remember when I proposed an open letter arguing for government funding for AI alignment, the Statement on AI Inconsistency. After writing emails and private messages, the only reply was "sorry, this strategy isn't good, because we should just focus on pausing AI."

I feel my open letter is more likely to succeed than pausing AI (I'm demanding that the AI alignment budget be "belief-consistent" with the military budget).

When politicians reject pausing AI, they just need the easy belief of "China must not win," or "if we don't do it someone else will." But for politicians to reject my open letter, they need the difficult belief of being 99.999% sure of no AI catastrophe, thus 99.95% sure most experts are wrong.

Regardless, where are the people who favour the middle ground? Who neither argue that "asking for a pause is a waste of political capital because it's hopeless," nor argue that "asking for government funding is a waste of time, because we should just focus on pausing AI?"

It's like the status game of criticism that Wei Dai pointed out.

I think it's plausible that a system which is smarter than humans/humanity (and distinct and separate from humans/humanity) should just never be created, and I'm inside-view almost certain it'd be profoundly bad if such a system were created any time soon. But I think I'll disagree with basically anyone on a lot of important stuff around this matter, so it seems really difficult for anyone to be someone I'd feel like really endorsing on this matter?[1] That said, my guess is that PauseAI is net positive, though I haven't thought about this that much :)


  1. https://youtu.be/Q3_7HTruMfg ↩︎

Supporting PauseAI makes sense only if you think it might succeed; if you think the chances are roughly zero, then it carries real costs (reputation, etc.) without any real benefit.
