The reason this is a difficult question is that we don't know how hard alignment will be. Opinions from different people with best-in-class expertise and time-on-task disagree wildly.
Therefore I'd argue that we should throw effort and funding into resolving that question by putting the reasoning processes of the relevant experts to wider scrutiny, and do a more systematic job of evaluating them.
Funding comes from a different resource pool than regulation, so you might mean to ask which one should get your advocacy efforts. The same arguments apply to both of them, and to the meta-alignment question.
We recently wrote A better “Statement on AI Risk?”, an open letter we hope AI experts can sign. One commenter objected, saying that stopping the development of apocalyptic AI is a better focus than asking for AI alignment funding.
Our boring answer was that we think there is little conflict between these goals, and the community can afford to focus on both.
This answer is boring, and it won't convince everyone: some people may think AI regulation/pausing is so much more important that a focus on AI alignment funding distracts from it, and is therefore counterproductive.
The Question
So how should we weigh the relative importances of AI alignment funding and AI regulation/pausing?
For humanity to survive, we either need to survive ASI by making it aligned/controlled, or avoid building ASI forever (millions of years).
Surviving ASI
To make ASI aligned/controlled, we either need to be lucky, or we need to get alignment/control right before we build ASI. In order to get alignment/control right, we need many trained experts working on alignment, multiplied by a long enough time spent working on alignment.
Which is more important? In terms of raw numbers, we believe that a longer time is more important than the number of trained experts:
Alignment work parallelizes a bit better than having babies: adding more people might make it go faster. There is an innovative component to it, and sometimes twice as many innovative people are roughly twice as likely to stumble across a new idea. Our very rough estimate is this:
A Spherical Cow Approximation
The total alignment progress A can be very roughly approximated as
$$A = \sqrt{N} \int_0^T f_T(t)\,dt$$
where T is the duration, N is the number of trained experts working on alignment, and f_T(t) is how productive alignment work is, given the level of AI capabilities at time t.
If you don't like integrals, we can further approximate it as A = T√N.
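As a minimal sketch of this spherical cow (the function names and the constant-productivity assumption below are ours, purely for illustration), the model fits in a few lines of Python:

```python
import math

def alignment_progress(num_experts, duration_years, productivity=lambda t: 1.0, steps=1000):
    """Toy model: A = sqrt(N) * integral_0^T f_T(t) dt, via a simple Riemann sum."""
    dt = duration_years / steps
    integral = sum(productivity(i * dt) * dt for i in range(steps))
    return math.sqrt(num_experts) * integral

# With constant productivity f_T(t) = 1, the model collapses to A = T * sqrt(N):
print(alignment_progress(num_experts=400, duration_years=10))  # ~200
print(10 * math.sqrt(400))                                     # 200.0
```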
Regulation
Regulating and pausing AI increases T, and will also increase N, because new people working on alignment can become trained experts. If regulating and pausing AI manages to delay ASI to take twice as long, both T and N might double, making alignment progress A = (2T)√(2N) = 2√2 · T√N, i.e. 2√2 times higher. Regulating and pausing AI may slow down capabilities progress more near the beginning than near the end.[1] This means f_T(t) might be lower on average, and A might increase by less than 2√2.
Funding
If asking for funding manages to double AI alignment funding, we might have twice as many trained experts working on alignment, making A only √2 times higher, and maybe a bit less.
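To make the comparison concrete, here is the same toy model applied to both interventions (a sketch under our A = T√N simplification, not a forecast):

```python
import math

def progress_multiplier(time_factor, experts_factor):
    """How much A = T * sqrt(N) grows when T and N are scaled by the given factors."""
    return time_factor * math.sqrt(experts_factor)

print(progress_multiplier(2, 2))  # regulation/pausing: T and N double -> 2*sqrt(2) ~ 2.83
print(progress_multiplier(1, 2))  # doubled funding: only N doubles    -> sqrt(2)   ~ 1.41
```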
That sounds like we should focus more on AI regulation/pausing, right? Not necessarily! Current AI safety spending is between $0.1 and $0.2 billion/year. Current AI capabilities spending is far larger: four big tech companies alone are spending $235 billion/year on infrastructure that's mostly for AI.[2] Our rough guess is that the US spends $300 billion/year in total on AI. This spending is increasing rapidly.[3]
Regulating/pausing AI to give us twice as much time may require delaying these companies' progress by 10 years, costing them $5000 billion in expected value. Of course the survival of humanity is worth far more than that, but these companies do not believe in AI risk enough to accept this level of sacrifice. They are fighting regulation, and so far they are winning. Getting this 2√2 increase in A (alignment progress) by regulating/pausing AI is not easy: it requires yanking $5000 billion away from some very powerful stakeholders. It further requires both the US and China to let go of the AI race. Americans who cannot tolerate the other party winning the election might never be convinced to tolerate the other country winning the race to ASI. China's handling of territorial disputes and protests does not paint a picture of compromise and wistful acceptance any better than the US election does.
What about getting a 2√2 increase in A by increasing AI alignment spending? This requires increasing the current $0.2 billion/year by 8 times, to $1.6 billion/year. Given that the US military budget is $800 billion/year, we feel this isn't an impossibly big ask. This is what our open letter was about.
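Spelling out that arithmetic under the toy model (holding T fixed and assuming, roughly, that the number of trained experts scales with spending):

$$A \propto \sqrt{N} \;\Rightarrow\; 2\sqrt{2}\cdot A \text{ requires } \left(2\sqrt{2}\right)^2 N = 8N, \qquad 8 \times \$0.2\ \text{billion/year} = \$1.6\ \text{billion/year}.$$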
One might argue that AI alignment spending will be higher anyway near the end, when f_T(t) is highest. However, increasing it now may raise the Overton window for AI alignment spending, such that near the end it will still be higher. It also builds expertise now which will be available near the end.
See also: AI alignment researchers don't (seem to) stack by So8res.
Avoid building ASI forever
Surviving without AI alignment requires luck, or the indefinite prevention of ASI.
To truly avoid ASI forever, we'll need a lot more progress in world peace. As technology continues to develop, even impoverished countries like North Korea become capable of building things that only the most technologically and economically powerful countries could build a century ago. Many of the cheap electronics in a thrift store's dumpster are far more powerful than the world's largest supercomputers were not too long ago. Preventing ASI forever may require all world leaders, even the ones in theocracies, to believe the risk of building ASI is greater than the risk of not building ASI (which depends on their individual circumstances). It seems very hard to convince all world leaders of this, since we have not convinced even one world leader to make serious sacrifices over AI risk.
It may be possible, but we should not focus all our efforts on this outcome.
Conclusion
Of course the AI alignment community can afford to argue for both funding and time.
The AI alignment community hasn't yet tried open letters like our Statement on AI Inconsistency, which argue for nontrivial amounts of funding relative to the military budget. It doesn't hurt to try this approach at the same time.
[1] We speculate that when AI race pressures heat up near the end, there may be some speed-up. “Springy” AI regulations might theoretically break and unleash sudden capability jumps.
[2] https://io-fund.com/artificial-intelligence/ai-platforms/big-tech-battles-on-ai-heres-the-winner and https://www.forbes.com/sites/bethkindig/2024/11/14/ai-spending-to-exceed-a-quarter-trillion-next-year/ forecast $235 billion and $240 billion for 2024, respectively.
[3] See the graph, again in https://www.forbes.com/sites/bethkindig/2024/11/14/ai-spending-to-exceed-a-quarter-trillion-next-year/