In the olden days, Yudkowsky and Bostrom warned people about the risks associated with developing powerful AI. Many people listened and went "woah, AI is dangerous, we better not build it". A few people went "woah, AI is powerful, I better be the one to build it". And we've got the AI race we have today, where a few organizations (bootstrapped with EA funding) are functionally trying to kill literally everyone, but at least we also have a bunch of alignment researchers trying to save the world before they do.

I don't think that first phase of advocacy was net harmful, compared to inaction. We have a field of alignment at all, with (by my vague estimate) maybe a dozen or so researchers actually focused on the parts of the problem that matter; plausibly, that's a better chance than the median human-civilization-timeline gets.

But now, we're trying to make politicians take AI risks seriously. Politicians who don't have even very basic rationalist training against cognitive biases, who come from a highly conflict-theoretic perspective full of political pressures, and who haven't read the important LessWrong literature. And this is a topic contentious enough that even many EAs/rationalists who have been around for a while and read many of those important posts still feel very confused about the whole thing.

What do we think is going to happen?

I expect that some governments will go "woah, AI is dangerous, we better not build it". And some governments will go "woah, AI is powerful, we better be the ones to build it". And this time, there's a good chance it'll be net harmful, because most governments in fact have a lot more power to do harm than good here. Governments could make things a lot worse.

(Pause AI advocacy plausibly also puts the attention of a lot of private actors on how dangerous (and thus powerful!) AI can be, which is also bad (maybe worse!). I'm focusing on politicians here because they're the more obvious failure mode.)

Now, the upside of Pause AI advocacy (and other governance efforts) is possibly great! Maybe Pause AI manages to slow down the labs enough to buy us a few years (I currently expect AI to kill literally everyone sometime this decade), which would be really good for increasing the chances of solving alignment before one of the big AI organizations launches an AI that kills literally everyone. I'm currently about 50:50 on whether Pause AI advocacy is net good or net bad.

Being in favor of pausing AI is great (I'm definitely in favor of pausing AI!), but it's good to keep in mind that the ways you go about advocating for it can have harmful side-effects, and you have to consider the possibility that those side-effects outweigh your expected gain (what you might gain, multiplied by how likely you are to gain it).

Again, I'm not saying they are worse! I'm saying we should be thinking about whether they are worse.

joepio:

If we could travel back in time and prevent information hazards such as "AI can be very powerful" from ever going mainstream, that would probably be a good thing to do. But we live in a world where ChatGPT is the fastest-growing app to have ever existed, and where the company behind it publicly states that it wants to build AGI because it can transform all of our lives. Billions are invested in this domain. This meme is already mainstream.

The meme "AI might be disastrous" is luckily also already mainstream. Over 80% of people worry that AI might cause catastrophic outcomes. The meme "Slowing down progress is good" is mainstream, too. Over 70% of people are in favor of slowing down AI development. Over 60% would support a ban on AI smarter than humans.

So we're actually on the right track. Advocacy is the thing that got us here - not just talking to the in-crowd about the risks. Geoffrey Hinton quitting, the FLI pause letter, quotes from Elon - these are the things that ended up going mainstream and got people to worry more about all of this. It's not just a small group of LW folks now.

But we're still not there. We need the next memes to become mainstream, too:

  • There is not just one tiny risk, there is a large number of risks, some of which could be default outcomes. Even if you don't believe in AI takeover, you should still consider all the other ways in which AGI could end up horribly.
  • Pausing is possible. It's not easy, but there is nothing inevitable about a small number of AI labs racing towards AGI. It's not as if molecules automatically assemble themselves into GPUs. We can and should regulate strictly, on an international level, and it needs to happen fast. Polls show that normal people already agree that this should happen, but our politicians will not act unless they are thoroughly pushed.
  • Act. We should not wait for things to go wrong, we need to act. Speak up, be honest, and make people understand what needs to happen in order for us all to be safe. Most people here (that includes me) are biased towards thinking, doubting and discussing things. This is what got us to consider these risks in the first place, but it also means that we're very prone to not do anything about it. IMO the largest risk we're facing right now is dying to a lack of sensible action.

However, there are some forms of advocacy that are net-harmful. Violent protests, for example, have been shown to diminish support for a cause. This is why we're strictly organising peaceful protests, which have been shown to have positive effects on public support.

So if you ask me, PauseAI advocacy can be a great way to be productive and mitigate the very worst outcomes, but we'll always need to consider the specific actions themselves.

Disclaimer: I'm the guy who founded PauseAI

governments know now, though. there's no changing that.

I don't think it's a binary; they could still pay less attention!

(plausibly there's a bazillion things constantly trying to grab their attention, so they won't "lock on" if we avoid bringing AI to their attention too much)

to clarify: governments have already put some of their agentic capability towards figuring out the most powerful ways to use ai, and there is plenty of documentation already as to what those are. the documentation is the fuel, and it has already caught "being used to design war devices" fire.

the question is how do they respond. it's not likely they'll respond well, regardless, of course. I'm more worried about pause regulation itself changing the landscape in a way that causes net acceleration, rather than advocacy for it independent of the enactment of the regulation, which I expect to do relatively little. individual human words mean little next to the might of "hey chatgpt," suddenly being a thing that exists.

I don't think governments have yet committed to trying to train their own state of the art foundation models for military purposes, probably partly because they (sensibly) guess that they would not be able to keep up with the private sector. That means that government interest/involvement has relatively little effect on the pace of advancement of the bleeding edge.

I don't think that that first phase of advocacy was net harm, compared to inaction.

It directly contributed to the founding and initial funding of DeepMind, OpenAI and Anthropic.

I think it was net harmful.

I think posts like this are net harmful, by discouraging people from joining those doing good things without providing an alternative and so wasting energy on meaningless ruminating that doesn't culminate in any useful action.

Tamsin -- interesting points. 

I think it's important for the 'Pause AI' movement (which I support) to help politicians, voters, and policy wonks understand that 'power to do good' is not necessarily correlated with 'power to deter harm' or the 'power to do indiscriminate harm'. So, advocating for caution ('OMG AI is really dangerous!') should not be read as claiming 'power to do good' or 'power to deter harm' -- a reading which could incentivize gov'ts to pursue AI despite the risks.

For example, nuclear weapons can't really do much good (except maybe for blasting incoming asteroids), but have some power to deter use of nuclear weapons by others, but also have a lot of power to do indiscriminate harm (e.g. global thermonuclear war).

Whereas engineered pandemic viruses would have virtually no power to do good and no power to deter harm, and would only offer power to do indiscriminate harm (e.g. global pandemic).

Arguably, ASI might have a LOT more power to do indiscriminate harm than power to deter harm or power to do good.

If we can convince policy-makers that this is a reasonable viewpoint (ASI offers mostly indiscriminate harm, not good or deterrence), then it might be easier to achieve a helpful pause, and also to reduce the chance of an AI arms race.

that that first phase of advocacy was net harm

typo

I'm interested in what people think are the best ways of doing advocacy in a way that gives more weight to the risks than to the (supposed) benefits.

Talking about all the risks? Focusing on the expert polls instead of the arguments?