ata comments on Other Existential Risks - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think Eliezer once pointed out that if cryonics were a scam, it would have much better marketing and be much more popular. A similar principle applies here: if organizations like SIAI and FHI were "marketing scam[s]" taking advantage of the profitable nature of predicting apocalypses, a lot more people would know about them (and there wouldn't be such a surprising concentration of smart people supporting them). An organization interested in exploiting gullible people's doomsday biases would not look like SIAI or FHI. Hell, even if some group wanted to make big money off of predicting AI doom in particular, they could do it a lot better than SIAI does: people have all these anthropomorphic intuitions about "evil robots" and there are all these scary pop-culture memes like Skynet and the Matrix, and SIAI foolishly goes around dispelling these instead of using them to their lucrative advantage!
(Also, if I may paraphrase Great Leader one more time: this is a literary criticism, not a scientific one. There's no law that says the world can't end, so if someone says that it might actually end at some point for reasons x, y, and z, you have to address reasons x, y, and z; pointing out stylistic and thematic (but non-technical) similarities to previous failed predictions is not a valid counterargument.)