ArisKatsaris comments on Risks from AI and Charitable Giving - Less Wrong

2 Post author: XiXiDu 13 March 2012 01:54PM




Comment author: ArisKatsaris 17 March 2012 12:39:58PM 2 points

According to your survey, 38.5% of all people have read at least 75% of the Sequences yet only 16.5% think that unfriendly AI is the most fearsome existential risk.

So what? I'm not even sure that Eliezer himself considers uFAI the most likely source of extinction. It's just that Friendly AI would help save us from most of the other possible sources of extinction too (not just from uFAI), and from several other sources of suffering too (not just extinction), so figuring it out kills multiple birds with one stone.

As a point of note, I myself didn't place uFAI as the most likely existential risk in that survey. That doesn't mean I share your attitude.