ArisKatsaris comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu 13 March 2012 01:54PM


Comment author: XiXiDu 17 March 2012 12:25:50PM

(Addendum to my other comment)

Here is why I believe that reading the Sequences might not be worth the effort:

1) According to your survey, 38.5% of respondents have read at least 75% of the Sequences, yet only 16.5% think that unfriendly AI is the most fearsome existential risk.

2) The following (smart) people have read the Sequences, and more, but do not agree about risks from AI:

  • Robin Hanson
  • Katja Grace (who has been a visiting fellow)
  • John Baez (who interviewed Eliezer Yudkowsky)
  • Holden Karnofsky
  • Ben Goertzel
Comment author: ArisKatsaris 17 March 2012 12:39:58PM

"According to your survey, 38.5% of respondents have read at least 75% of the Sequences, yet only 16.5% think that unfriendly AI is the most fearsome existential risk."

So what? I'm not even sure that Eliezer himself considers uFAI the most likely source of extinction. It's just that Friendly AI would help save us from most of the other possible sources of extinction too (not just from uFAI), and from several other sources of suffering as well (not just extinction), so figuring it out kills multiple birds with one stone.

As a point of note, I myself didn't place uFAI as the most likely existential risk in that survey. That doesn't mean I share your attitude.