Epistemic Status: Quickly written and uncertain. I'm fairly sure there's very little public or government concern about AGI claims, but I'm sure there's a lot I'm missing. I'm not at all an expert on government or AI policy.
This was originally posted to Facebook here, where it had some discussion. Many thanks to Rob Bensinger, Lady Jade Beacham, and others who engaged in the discussion there.
Multiple tech companies now are openly claiming to be working on developing AGI (Artificial General Intelligence).
As much of the literature on AGI argues (see Superintelligence, for one example), any firm that establishes sufficient dominance in AGI could gain some extraordinarily powerful capabilities.
And yet, from what I can tell, almost no one seems to mind. Governments, in particular, seem remarkably relaxed about it. Companies working on AGI get treated much like any other exciting AI company.
If some company publicly claimed to be building any other technology that could enable a power grab of this scale, I'd expect that to draw serious attention.
But with AGI, crickets.
I assume that governments dismiss corporate claims of AGI development as overconfident marketing-speak or something.
You might think that the topic is simply too obscure for governments to have come across it.
That argument probably applied 10 years ago, but at this point the conversation has spread a great deal. Superintelligence was released in 2014 and was a New York Times bestseller. There are hundreds of books out now about the risks of increasing AI capabilities. Elon Musk and Bill Gates have both talked about it publicly. This should be one of the easiest social issues at this point for someone technically savvy to find.
The risks and dangers here (of a large power grab specifically, setting aside alignment failures, though those matter too) are really straightforward and have been public for a long time.
Responses
In the comments to my post, a few points were made, some of which I was roughly expecting. Points include:

- Maybe they think all this AI stuff is just tools?
- Maybe they're more worried about other things. (Right now it's easy to point to Covid, the economy, things like that. Compare how seriously global warming is taken.)
- Maybe it's not someone's job.
- Governments are doing things. E.g., China: "Deciphering China's AI dream" and "China aims to become world leader in AI"; US: "Palantir wins US Army AI contract worth $91 million".
- Governments may be doing things that we can't see: espionage, counter-espionage, secret projects, secret plans, and so on.
- What would you expect governments to be doing, and how would we know? (If you'd expect unhelpful actions and don't want to give them ideas, that's fine too.)
- Governments may view AGI as exciting new tech rather than as a threat. New tech that takes off often makes lots of money, so a government might assume that if the company that strikes it big is based in its country, that would be economically great: a big new business paying lots of tech salaries, rising income taxes, more spending in the economy, and maybe even some corporation tax if the business hasn't been structured to move that offshore. This might sound positively attractive to a government that isn't looking for the downside.
My quick responses would be: