Major news outlets have published articles about the Future of Life Institute's Open Letter. Time Magazine published an opinion piece by Eliezer Yudkowsky. Lex Fridman featured EY on his podcast. Several US Members of Congress have spoken about the risks of AI. And a Fox News reporter asked what the US President is doing to combat AI x-risk at a White House Press Conference.
Starting an Open Thread to discuss this, and how best to capitalize on this sudden attention.
Links:
WH Press Conference: https://twitter.com/therecount/status/1641526864626720774
Time Magazine: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
FLI Letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
The increased public attention toward AI safety risks is probably a good thing. But when stuff like this gets lumped in with the rest of AI Safety, it feels like the public-facing slow-down-AI movement is going to be a grab-bag of AI Safety, AI Ethics, and AI... privacy(?). As such, I'm afraid the public discourse will devolve into "Woah-there-Slow-AI" vs. "GOGOGOGO" tribal warfare; given the track record of American politics, this seems likely - maybe even inevitable?
More importantly, though, I'm afraid this will translate into adversarial relations between AI capabilities organizations and AI safety orgs (more generally, that capabilities teams will become less inclined to incorporate safety concerns into their products).
I'm not actually in an AI organization, so if someone is in one and has thoughts on whether this dynamic is happening or not, I would love to hear them.
Yeah, since the public currently doesn't have much of an opinion on it, trying to get accurate information out seems critical. I fear some absolutely useless legislation will get passed, and everyone will just forget about it once the shock value of GPT wears off.