Eliezer says that there is no fire alarm for AGI, and I agree that no single event would wake everyone up. On the other hand, there are many possible events that could cause a significant shift in the Overton window or wake a lot of people up. Since this seems strategically important, I spent ten minutes brainstorming a list. Feel free to dispute any of these or add further possibilities in the comments:
Global protests with massive media coverage
Protest of people with ML PhDs or current students
Experimental evidence of deceptive alignment
Scary demo
Accident caused by Auto-GPT - most likely hacking
Self-driving cars being hacked
High prestige open letter
International conference on AI
High profile documentary
Movie that captures the public's attention - not just any movie
Active push from a prestigious ML figure (more than another Stuart Russell)
Active push from a thought leader - say Obama
Active push from a political leader - say the UK prime minister or US president
Passing the Turing test
UN or major government declaring a crisis
Formation of a government department in a major country focused on alignment
Statement by an official computing body
High profile critic publicly changing their stance
Billboards in Times Square (followed by media coverage)
A popular or heavily promoted book (with similar levels of promotion to What We Owe the Future)
Note: Some of these are not things people should intentionally cause to happen (e.g. an accident caused by Auto-GPT). Even the ones that might be worth trying to cause could backfire significantly if done badly. Think carefully before pursuing any of these. Even though timelines are short, you should be able to find a spare hour to think about how things could go wrong.