There are people looking at this.
The obvious answer to why it hasn't transparently been a high priority is:
That said, it's highly unlikely that the status-quo/default situation is ideal, so it does seem probable that some form of broader communication is desirable.
But it's usually much easier to make things worse than to make them better - so I'd want most effort to go into figuring out the mechanics of a potentially useful plan. Most interventions don't achieve what was intended - even the plausibly helpful ones (for instance, one might try recruiting John Carmack to work on AI safety [this strikes me as a good idea, hindsight notwithstanding], only to get him interested enough that he starts up an AGI company a few years later).
I understand that it must be frustrating for anyone who takes this very seriously and believes they cannot engage in research themselves (though I'd suggest asking how they can help). However, the goal needs to be to actually improve the situation, rather than to do something that feels good. The latter impulse is much simpler to satisfy than the former.
To find a communication strategy that actually improves the situation, we need careful analysis.
With that in mind, if you want to start a useful discussion on this, I'd suggest outlining:
I don't think this is easy (I have no good plan). I do think it's required.
Increasing awareness increases resources by virtue of sheer volume: the more people hear about AI safety, the more likely it is that someone resourceful and amenable will hear about it.
This is a good sentiment, but 'resource gathering' is an instrumentally convergent strategy. No matter what researchers end up deciding we should do, it'll probably be best done with money and status on our side.
Politicization is not a failure mode; it's an optimistic outcome. Politicized issues get money. Politicized issues get studied. Other failure modes might be...
for instance, one might try recruiting John Carmack to work on AI safety [this strikes me as a good idea, hindsight notwithstanding], only to get him interested enough that he starts up an AGI company a few years later
Is this a reference to his current personal project to work on AGI?
Edit: reading a bit more about him, I suspect that if he ever got interested in alignment work, he'd likely prefer working on Christiano-style stuff to MIRI-style stuff. For instance (re: the metaverse):
...The idea of the metaverse, Carmack says, can be "a honeypot trap for 'architecture astronauts'"...
The Obama administration had great success at reducing mercury pollution and little success at reducing CO2 pollution. Most of the action to reduce mercury pollution happened outside of public awareness.
The great public awareness of CO2 pollution made it a highly political topic on which nothing gets done at the political level. Even worse, most of the people who are heavily engaged in the topic, on both sides, are not thinking clearly about the issue but are mind-killed.
The ability to think clearly is even more important for AI safety than it is for climate change.
CO2 was not brought to public awareness arbitrarily. CO2 came to public awareness because it is impossible to regulate it without negatively impacting a lot of businesses and people.
Controversial -> Public Awareness, not Public Awareness -> Controversial.
This is what we are doing with the Existential Risk Observatory. I agree with many of the things you're saying.
I think it's helpful to debunk a few myths:
- Myth: No one has communicated AI xrisk to the public debate yet. In reality, Elon Musk, Nick Bostrom, Stephen Hawking, Sam Harris, Stuart Russell, Toby Ord, and recently William MacAskill have all sought publicity with this message. There are op-eds in the NY Times, Economist articles, YouTube videos and TED talks with millions of views, a CNN item, at least a dozen books (including for a general audience), and a documentary (incomplete overview here). AI xrisk communication to the public debate is not new. However, the public debate is a big place, and compared to e.g. climate, coverage of AI xrisk is still minimal (perhaps a few articles per year in a typical news outlet, compared to dozens to hundreds for climate).
- Myth: AI xrisk communication to the public debate is easy; we could just 'tell people'. If you actually try this, you will quickly find out that public communication, especially of this message, is a craft. If you make a poor-quality contribution or your network is insufficient, it will probably never make it out. If your message does make it out, it will probably not be convincing enough to make most media consumers believe AI xrisk is an actual thing. It's not necessarily easier to convince a member of the general public of this idea than it is to convince an expert, and we can see from the case of Carmack and many others how difficult this can be. Arguably, LW and EA are the only places where this has really been successful so far.
- Myth: AI xrisk communication is really dangerous and it's easy to irreversibly break things. As can easily be seen from the wealth of existing communication and how little it did, it's really hard to move the needle significantly on this topic. That cuts both ways: it's, fortunately, not easy to really break something with your first book or article, simply because it won't convince enough people. That means there's some room to experiment. However, it's also, unfortunately, fairly hard to make significant progress here without a lot of time, effort, and budget.
We think communication to the public debate is net positive and important, and a lot of people who could not work on AI alignment could work on this. There is an increasing amount of funding available as well. Also, despite the existing corpus, the area is still neglected (we are, to our knowledge, the only institute that specifically aims to work on this issue).
If you want to work on this, we're always available for a chat to exchange views. EA is also starting to move in this direction; it's good to compare notes with them as well.
Campaigns for general "public awareness" seem less effective than communicating with particular groups, since some groups are more influential than others when it comes to AGI risk. The "AGI Safety Communications Initiative" is a group of people thinking about effective communication.
In terms of telling your favorite streamer about AGI risk, the best approach depends on the person. Think about what arguments will make sense to them. Definitely check out "Resources I send to AI researchers about AI safety."
It seems like this might be a result of an aversion to bad press, but the truth is that bad press would be significantly better than what we have now. As far as I can see, we have no press.
There has definitely been some critical press. Check out Steven Pinker in Popular Science (which Rob Miles responded to). Or perhaps this NYT Opinion piece by Melanie Mitchell (note there's also a debate between her and Stuart Russell). Also see Ted Chiang (Scott Alexander responded) and Daron Acemoglu (Scott Alexander responded again).
Has anyone done anything in this domain? Is there a public face for AI safety that we can promote? I wanted to pester my favorite streamer about looking into AI safety, but I don't know who I would refer him to.
It seems so obvious to me that this should be a priority, especially for anyone who cannot engage in research themselves.
It seems like this might be a result of an aversion to bad press, but the truth is that bad press would be significantly better than what we have now. As far as I can see, we have no press.
This is disappointing.
Sometimes I seriously consider the possibility that you guys are all larping.