We aren't seeing huge investments in AI safety yet, but this will probably change in time, as we begin to see more AIs get out of control and cause problems on a local scale.
Once we see an out-of-control AI, it's too late to do AI safety. Given the state of current computer security, the AI could hack its way into every computer in the world and resist easy shutdown.
When it comes to low-probability, high-impact events, waiting for a small problem to raise awareness of the issue is just dangerous.
As we begin seeing robots/computers that are more human-like, people will take the possibility of AGIs getting out of control more seriously. These things will be major news stories worldwide, people will hold national-security summits about them, etc. I would assume the US military is already looking into this topic at least a little bit behind closed doors.
There will probably be lots of not-quite-superhuman AIs / AGIs that cause havoc along the road to the first superhuman ones. Yes, it's possible that FOOM will take us from roughly a level like where we ...
In a recent essay, Brian Tomasik argues that meme-spreading has higher expected utility than x-risk reduction. His analysis assumes a classical utilitarian ethic, but it may be generalizable to other value systems. Here's the summary: