Melanie Mitchell called the article "excellent". It also got a bit of discussion on HN.

(This is an FYI. I don't necessarily approve or endorse the links I post.)

My brief complaints about that article (from Twitter, here):

My complaints about that essay would be: (1) talking about factors that might bias people’s p(doom) too high but not mentioning factors that might bias people’s p(doom) too low; (2) implicitly treating “p(doom) is unknowable” as evidence for “p(doom) is very low”; (3) dismissing the possibility of object-level arguments. E.g., for (2), they say “govts should adopt policies that are compatible with a range of possible estimates of AI risk, and are on balance helpful even if the risk is negligible”. Why not “…even if the risk is high”? I agree that the essay has many good parts, and stands head-and-shoulders above much of the drivel that comprises the current discourse 😛

(…and then downthread there’s more elaboration on (2).)

This is an unusually well-written piece from AI doom skeptics. It deserves a response/rebuttal.

I note that one central premise is "none of these estimates is made using any special knowledge". I found this a crucial and interesting claim for evaluating AGI x-risk arguments. I think it is actually true of many people making p(doom) estimates, but it is definitely not true of everyone.

It's as though we're asking people to predict a recession without bothering to ask whether they have any education in economics, or how much time they've spent thinking about the factors that lead to recessions.

What specialized knowledge is relevant? I suggest that predicting AGI x-risk requires two types of expertise: understanding how current AI works, and understanding how a mind capable of posing an x-risk could work.

I listened to this expecting it to consist solely of the fallacy "if people disagree on risks dramatically we should act as though the risk is zero".

It does pretty much draw that very wrong conclusion.

However, a lot of its other critique of various AI x-risk models, and of how we aggregate them, is surprisingly good. I fully agree that just choosing reference classes from intuition is worth nothing as a predictive model. Choosing them from careful analysis, however, gets you closer.

But in the end, having good gears-level models of how AGI would be built and how humans would succeed or fail at it is the real qualification for predicting AGI x-risk, and few human beings seem to have spent more than trivial time on the necessary constellation of questions. We should be asking for time-on-task alongside each p(doom) estimate, and for a breakdown of which tasks that time was spent on.

Machine Learning Street Talk interview of one of the authors: 

I do like the replies in the comments saying "Sure, but if you're saying 'It's hard to predict, so we should act like the probability is low', you're still implicitly claiming we can get reliable information somehow, otherwise you'd be saying 'It's hard to predict, so we should act like the probability is moderate.'"
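To make that reply concrete, here's a minimal numerical sketch (every estimate and weight below is made up purely for illustration): if you genuinely can't tell which of a wide spread of estimates is right, a know-nothing aggregate lands somewhere in the middle of the spread, and treating the probability as low instead requires extra information that specifically discredits the high estimates.

```python
# Illustrative only: made-up p(doom) estimates spread across a wide range.
estimates = [0.01, 0.05, 0.10, 0.30, 0.50, 0.90]

# "We can't tell which estimate is right" means no basis for weighting any of
# them over the others, so the know-nothing aggregate is a plain average.
know_nothing_p = sum(estimates) / len(estimates)
print(f"know-nothing aggregate: {know_nothing_p:.2f}")  # 0.31 -- moderate, not low

# Acting as if the probability is low requires down-weighting the high
# estimates, which is itself a substantive claim about who is reliable.
skeptic_weights = [5, 5, 3, 1, 1, 1]
skeptic_p = sum(w * p for w, p in zip(skeptic_weights, estimates)) / sum(skeptic_weights)
print(f"skeptic-weighted aggregate: {skeptic_p:.2f}")  # 0.14 -- low only because of the weights
```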

But I think we could dig a little deeper by rejecting some of the absolutism ("there are no reference classes, it's impossible to do deduction") and having a more nuanced discussion about what society in general should do when there are only tenuous reference classes, when deduction is hard, and when even the experts disagree.

I don't really have a good solution, and if I did it probably wouldn't fit in a comment. But it probably still involves all sides sharing their views with policymakers, and governments sometimes choosing to act under uncertainty rather than always betting on the status quo holding. (The atomic bomb seems like a good anti-betting-on-status-quo example, and overpopulation doomerism seems like a good pro-betting-on-status-quo example.)