I sometimes wonder about this. This post does pose the question, but I don't think it gives an analysis that could make me change my mind on anything; it's too shallow and not adversarial.
I am concerned about AI safety, but I am also not impressed with the Pause AI movement. To me, focusing on AI companies and training FLOPs is not the best way to do things; caps on data center sizes and worldwide GPU production caps would make more sense. Pausing software but not hardware gives more time for alignment but creates a worse hardware overhang, which I don't think is helpful. They also focus too much on OpenAI from what I've seen; xAI will soon have the largest training center, for a start.
I don't think this is right or workable: https://pauseai.info/proposal. Figure out how biological intelligence learns and you don't need a large training run. There's no guarantee at all that a pause at this stage can help align super AI. I think we need greater capabilities to know what we are dealing with. Even with a 50-year pause to study GPT-4-type models, I wouldn't be confident we could learn enough from them. They have no realistic way to lift the pause, so it's a desire to stop AI indefinitely.
"There will come a point where potentially superintelligent AI models can be trained for a few thousand dollars or less, perhaps even on consumer hardware. We need to be prepared for this."
You can't prepare for this without first having superintelligent models running on the most capable facilities and having already gone through a positive Singularity. They have no workable plan for achieving a positive Singularity, just "try to stop and hope."
I’m on the side of AI Safety. AI has a good chance of ending the human species if we don’t do it right, and Not Doing It Right is the default outcome unless we’re very cautious.
PauseAI is an activist group that advocates for AI Safety. They adopt the language, aesthetics, and tactics of “activism.” The splash page of their website demands “DON’T LET AI COMPANIES GAMBLE WITH OUR FUTURE”. They encourage people to add a Pause symbol to their online handles. They can be seen protesting outside AI company HQs.
I agree with their goal. I've met some of them and they are good, thoughtful people. In most ways we are aligned. However, I think they are harmful to the cause of AI Safety, and they should drastically change everything they're doing.
Increasingly, activists are seen as defect-bots. Activists are frequently found to:
Destroy (or attempt to destroy) irreplaceable artistic works or cultural artifacts.
Shut down public services and thoroughfares.
Suppress scientific findings considered hostile to their message.
Blatantly lie about the words and actions of their opponents.
Mandate educational programs that denigrate and insult their participants.
In extreme cases, they'll actually celebrate the destruction of significant parts of cities and neighborhoods, or celebrate summary executions carried out by lunatics.
Activists come with an aura of low-key evil. They cannot be trusted with things you value, because everything is treated as a tool to further the cause. You cannot believe what they say, because they care far more about their cause than about truth.
To be clear, PauseAI does not do these things. But PauseAI embraces the aesthetics of groups that do, and thereby associates the cause of AI Safety with activism. This will soon repel more people than it draws, if it doesn't already. Resources spent on PauseAI make the public less trusting of AI Safety arguments and should be diverted to almost anything else.