This post was rejected for the following reason(s):

  • Not addressing relevant prior discussion. Your post doesn't address or build upon relevant previous discussion of its topic that much of the LessWrong audience is already familiar with. If you're not sure where to find this discussion, feel free to ask in monthly open threads (general one, one for AI). Another form of this is writing a post arguing against a position, but not being clear about who exactly is being argued against, e.g., not linking to anything prior. Linking to existing posts on LessWrong is a great way to show that you are familiar/responding to prior discussion. If you're curious about a topic, try Search or look at our Concepts page.

There are other huge threats we face: nuclear war, climate change, biological weapons, digital authoritarianism, and so on. AI could help us address these risks. You might think AI would increase the risk of digital authoritarianism. However, AI-powered robot soldiers could help contain authoritarianism, for example by assisting in the war in Ukraine. Less authoritarianism now means less digital authoritarianism in the future.

It seems to me that the threat of not advancing AI is much clearer than the threat of advancing it.
