The taskforce represents a startup government mindset that makes me optimistic
I would say it's not just a startup government mindset in the abstract, but rather an attempt to repeat a specific, preexisting, highly successful example of startup government: the UK's COVID vaccine taskforce, which was name-checked in the original Foundation Model Taskforce announcement.
That was also a fast-moving attempt to solve a novel problem that regular scientific institutions were handling badly. It substantially beat expectations, and it was run under an administration with a lot of overlap with the current one, the major exceptions being a more stable and reasonable PM at the top (Sunak rather than Johnson) and no Dominic Cummings involved.
We likely only get one shot at this. If the taskforce fails, there will probably not be another such effort.
Can you elaborate more on your intuition here? This isn't that obvious to me.
gw mostly has it right here. If this effort is seen as a failure, it will be used as an example by the many opponents of such moves to defeat any future such efforts, and those who spent their political capital to make this version happen will not be in a position to fight back. There are very few examples of large efforts like this getting second chances.
Also, this is simply an unusually great opportunity: a lot of things have already gone right that we would have expected to go wrong. A second effort, even if it happened, would likely end up fake.
As someone who is definitely not a political expert (and neither from nor super familiar with the UK), my guess would be that you just can't muster enough political capital or will to try again. Taxpayer money (in the US at least) is highly scrutinized; you typically can't fail with a lot of money and have no one say anything about it.
So if the first try does fail, it requires more political capital to push for allocating a bunch of money again, and failing again looks really bad for anyone who led or supported the effort. Politicians seem to care about career risk, and all of this makes the risk associated with a second shot higher than the first.
I'd agree that this makes a second shot unlikely (including from other governments, if it fails spectacularly enough), if circumstances stay about the same. But circumstances will probably change, so IMO we might eventually get more such taskforces, just not soon.
Question: what level of experience are you looking for here? I assume not just people with the relevant competences who could learn on the job, but people who already have significant domain experience, given how tight the time targets are?
From the Google Form Zvi linked in the post:
We want to find people with diverse skills and backgrounds to work in or with the Taskforce, to catalytically advance AI safety this year with a global impact. We're particularly interested in building out "safety infrastructure" and developing risk assessments that can inform policymakers and spur global coordination on AI safety. For example, this would include experience running evals for LLMs, experience with model pretraining, finetuning, or RL, and experience in technical research in the societal impacts of models. But we're open to hearing what should be done beyond this as well.
They mostly want people who are already skilled up and can hit the ground running, is my understanding, although it is always good to have options. The long term matters but the short term is necessary to get to the long term.
Beyond people with the right qualifications to get directly involved right away (e.g. via the form), are there "supporting role" tasks or efforts that interested individuals with different skillsets and in different locations could help with? Baseline examples might be volunteering to do advocacy, translation, ops, making introductions, ad-hoc website/software development, summarizing/transcribing/editing audio/video, and so on. Is there a recommended Discord/Slack/other channel where this kind of support is being coordinated?
Not that I am aware of at this time. Government taskforces tend to be more formal than that. But I don't know for sure.
Thanks. Yeah, makes sense for official involvement to be pretty formal and restricted.
More in a "just in case someone reads this and has something to share" I'd like to extend the question to unofficial efforts others might be thinking about or coordinating around.
It would also be good if those who do get involved formally felt there was enough outside interest to make it worth their time to post informal requests for help, like "if you have 10h/week, I'm looking for a volunteer research assistant to help me keep up with relevant papers/news leading up to the summit."
Are people aware that the British government will be ejected in a year's time, barring a miracle?
A year could do a lot of good (for example, holding the summit and focusing it on not-kill-everyoneism).
But beyond a year it will depend on Labour not reverting to the mean and losing focus. They are probably very worried about AIs saying mean things to disadvantaged groups or displacing workers too quickly - a trap the taskforce hasn't fallen into. This lack of focus is not because Labour are useless, but because they are simply not as unusually open to rationalist-adjacent arguments as the Sunak (and Johnson) governments.
A few months ago, Ian Hogarth wrote the Financial Times op-ed headlined “We must slow down the race to God-like AI.”
A few weeks ago, he was appointed head of the UK Foundation Model Taskforce, and given 100 million pounds to dedicate to AI safety, to universal acclaim. Soon there will also be a UK Global AI Summit.
He wrote an op-ed in The Times asking everyone for their help, with an accompanying Twitter thread. Based on a combination of sources, I am confident that this effort has strong backing for the time being (although that is always fragile), and that it is aimed squarely at the real target of extinction risk from AI, with a strong understanding of what it would mean to have an impact on that.
Once again: The real work begins now.
The UK Taskforce will need many things in order to succeed. It will face opposition within and outside the government, and internationally. There is a narrow window until the AI summit to hit the ground running and establish capability and credibility.
The taskforce represents a startup government mindset that makes me optimistic, and that seems like the best hope for making government get things done again, including on other vital causes that are not AI, and not only in the UK. We likely only get one shot at this. If the taskforce fails, there will probably not be another such effort.
Right now, the main bottleneck is that the taskforce is talent constrained. There is an urgent need to scale up rapidly with people who can hit the ground running and allow the taskforce to orient.
If you are in a position to help, then with the possible exception of creating your own organization at scale, I believe this is the highest-leverage opportunity currently available.
To reach out and see if you can help, you can fill out this Google Form here.