I'm curious what y'all think of the points made in this post against AI risk, written by two AI researchers at Princeton. If you have reason to think any of the points are particularly good or bad, write it in the comments below!


This was already referenced here: https://www.lesswrong.com/posts/MW6tivBkwSe9amdCw/ai-existential-risk-probabilities-are-too-unreliable-to

I think it would be better to comment there instead of here.

That post was completely ignored here: 0 comments and 0 upvotes during the first 24 hours.

I don't know if it's the timing or the content.

On HN, which is where I saw it, it was briefly ranked #1, as I recall, but then it apparently got flagged.

Good point! 

This post was worth looking at, although its central argument is deeply flawed.

I commented on the other linkpost: https://www.lesswrong.com/posts/MW6tivBkwSe9amdCw/ai-existential-risk-probabilities-are-too-unreliable-to?commentId=fBsrSQBgCLZd4zJHj

The post isn't even against AI doom. It's against the idea that you can communicate high confidence in AI doom to policymakers.