XiXiDu comments on What I would like the SIAI to publish - Less Wrong

27 Post author: XiXiDu 01 November 2010 02:07PM




Comment author: XiXiDu 04 November 2010 11:42:36AM 2 points

I added a footnote to the post:

  • Potential negative consequences [3] of slowing down research on artificial intelligence (a risks and benefits analysis).

(3) Could overcaution itself be an existential risk that significantly outweighs the risk(s) posed by the subject of caution? Suppose that most civilizations err on the side of caution. This might cause them either to evolve much more slowly, so that the chance of a fatal natural disaster occurring before sufficient technology is developed to survive it rises to 100%, or to stop evolving altogether, because they are unable to prove anything 100% safe before trying it and thus never take the steps necessary to become less vulnerable to naturally occurring existential risks. Further reading: Why safety is not safe

Comment author: NancyLebovitz 04 November 2010 11:46:45AM 0 points

I was thinking about how existential risks affect each other. For example, an actual world war might either destroy so much that high-tech risks become less likely for a while, or spur research that results in a high-tech disaster.

And we may get home build-a-virus kits before AI is developed, even if we are cautious about AI.