Honestly? If that is the case, just act as if you have some painless terminal disease and one or two years to live. Do a bunch of things you've always wanted to do, maybe try a bunch of LSD to see what that's about, skydive a bit, make peace with your parents, etc.
At the one-year mark I don't think the Overton window contains any actions that would actually be effective at preventing AGI, which leaves us with morally odious solutions like:
As you can see, the intersection between the set of pleasant solutions and the set of effective solutions is empty if we're at the T-minus-one-year mark. All the effective solutions I can see involve killing lots of people over the belief that AI will be dangerous, which means you'd darn well better have an unshakeable degree of confidence in the idea, which I don't have.
I think there are pleasant and potentially effective measures.
Offer a free vacation to some top AI experts.
Label decaf coffee as regular coffee and give it to the lab.
DDoS Stack Overflow.
Try to figure out the best possible defense-in-depth strategy.
In other words, there are a whole bunch of different proposals for safety. I think it'd be worth thinking about the optimal set to stack on top of each other without them interfering with one another or requiring unrealistic amounts of processing. It'd also be worth thinking about a more minimal set that would be more likely to actually be implemented.
Maybe create a Slack or Discord focused specifically on the imminent threat, as it'd be valuable to have these discussions without being distracted by conversations that wouldn't be useful for the immediate crisis.
At a certain point the best strategy becomes physically preventing AI organizations from developing AI, somehow. We could do this by appealing to governments, or by pouring lots of money into the new EA cause area of taking over the computing-power supply chain.
I am not an AI safety researcher; I'm more of a terrified spectator monitoring LessWrong for updates about the existential risk of unaligned AGI (thanks a bunch, HPMOR). That said, if AGI were a year away, I would jump into action. My first thought would be to put almost all my net worth into a public awareness campaign. If we could cause enough trepidation in the general public, we might delay the emergence of AGI by a few weeks or months. My goal would not be to solve alignment, but rather to prod AI researchers into implementing basic safety measures that might reduce S-risk by 1 or 2 percent. Then... think deeply about whether I want to be alive for the most Interesting Time in human history.
I'd try to survive the year, on the off chance that the AGI will be able to solve aging/uploads and let humans live as long as they want. Sadly, this is not an option on a 2+ year horizon, as the pandemic demonstrated so amply.
Suppose the first AGI were very likely to arrive around a year from now, with multiple projects close but one a few months ahead of the others, and suppose the AI safety community agreed that this was the case. What should our community do?
How would you answer differently for AGI 4 months from now? 2 years from now? 5 years from now? 10 years from now? 20 years from now?
Some potential subquestions:
My motivations for this question are to get people to generate options for very short timelines and to get an idea of our progress so far.