Probably the understatement of the decade: this article is literally an "order" from Official Authority to stop talking about what I believe is the most important thing in the world. I guess this is not quite the headline that would maximally make me lose respect for Nature... but it's pretty close.
This article is a pure appeal to authority. It contains no arguments at all, it only exists as a social signal that Respectable Scientists should steer away from talk of AI existential risk.
The AI risk debate is no longer about actual arguments; it's about slinging around political capital and scientific prestige. It has become political in nature.
Yep, that's the biggest issue I have with my own side of the AI risk debate: quite often they don't even try to state why it isn't a risk, and instead appeal to social authority. And while social authority is evidence, it's too easily filtered to be of much use.
To be frank, I don't blame a lot of the AI risk people for not being convinced that we aren't doomed. Even though reality doesn't grade on a curve, the unsoundness of the current arguments against doom doesn't help, and it is in fact bad that my side keeps doing this.
Yeah, I travel and ask randos on the street, and they agree AI is about to kill us all. Does he travel?
I strongly agree. The basic argument Yud laid out is very convincing to randos who listen. Too convincing, honestly. A rando doesn't need an in-depth mathematical explanation to understand how incredibly likely it is that AI will turn the world into glass.
My go-to is:
That's really it. I also know a couple basic counters to the most common arguments people bring up: government regulation, friendly AI being made first, AI wouldn't necessarily want to hurt us, etc. Most people are convinced and unfortunately look disheartened.
I actually like the headline. "Stop talking about tomorrow’s AI doomsday" sort of admits that the doomsday is happening tomorrow, and comes out and says "yes, tomorrow is too far in the future for me to want to think about it, regardless of severity. I am a type of addict."
I think you missed one of the valid points being made here (possibly tacitly): roughly, that the general public should be focused on the issues of infra-intelligent AI, because that's the part of the discussion that public engagement could actually benefit. I don't know if the infra-issues people know why it is this way, but they might have a genuinely good sense for where the good discourse is or isn't, and I think alignment strategists would tend to agree with them about that. I'm starting to wonder if we're kind of idiots for talking about it.
Like, AGI researchers: "Look at how good and important the alignment research I'm doing is".
Non AGI researchers, just trying to live their lives: "I don't understand your research or why it's important, and I'm not hearing a case as to why it would even help anyone for me to understand it, so no, I don't want to look at it."
AGI researchers: ">:|"
I think your point is interesting and I agree with it, but I don't think Nature is addressing only the general public. To me, it seems like they're also addressing researchers and policymakers, telling them what they ought to focus on.
Overall, a headline that seems counterproductive and needlessly divisive.
I worry very much that coverage like this has the potential to bring political polarization to AI risk. It would be extremely damaging for the prospects of regulation if one side of the US Congress/Senate decided AI risk was something only their outgroup is concerned about, for nefarious reasons.
but in the spirit of charity, here are perhaps the strongest points of a weak article:
and
and
This would be great if ethical or institutional review boards were willing to restrict research that might be dangerous, but it would require a substantial change in their approach to regulating AI research.
Should people worried about AI existential risk be trying to create resources for IRBs to recognize harmful AI research?
Some ominous commentary from Tyler Cowen:
I don't really know what he is talking about because it does not seem like we're losing the debate right now.