ArisKatsaris comments on Risks from AI and Charitable Giving - Less Wrong
(Addendum to my other comment)
Here is why I believe that reading the Sequences might not be worth the effort:
1) According to your survey, 38.5% of respondents have read at least 75% of the Sequences, yet only 16.5% think that unfriendly AI is the most fearsome existential risk.
2) The following (smart) people have read the Sequences, and more, but do not agree about risks from AI:
So what? I'm not even sure that Eliezer himself considers uFAI the most likely source of extinction. It's just that Friendly AI would help save us from most of the other possible sources of extinction too (not just from uFAI), and from several other sources of suffering too (not just extinction), so figuring it out kills multiple birds with one stone.
As a point of note, I myself didn't place uFAI as the most likely existential risk in that survey. That doesn't mean I share your attitude.