I've added a tag for object-level AI risk skepticism arguments. I've included my own post about deceptive alignment and Katja Grace's post about AI X-risk counterarguments. What other arguments should be tagged?
Quintin Pope
I just finished writing a post: My Objections to "We're All Gonna Die with Eliezer Yudkowsky"
JakubK
Here's a list of arguments that AI safety is less important than claimed, although some of them are not object-level.