SarahC comments on Optimal Philanthropy for Human Beings - Less Wrong

36 points | Post author: lukeprog | 25 July 2011 07:27AM


Comment author: [deleted] | 26 July 2011 04:09:07PM | 3 points

The main claim that needs to be evaluated is "AI is an existential risk," and the various hypotheses that would imply that it is.

If the kind of AI that poses existential risk is vanishingly unlikely to be invented (which is what I tend to believe, though I'm not super-confident), then SIAI is working to no real purpose, and has about the same usefulness as a basic research organization that isn't making much progress. Pretty low priority.

Comment author: komponisto | 10 August 2011 02:40:59PM | 2 points

Are you considering other effects SIAI might have, besides those directly related to its primary purpose?

In my opinion, Eliezer's rationality outreach efforts alone are enough to justify its existence. (And I'm not sure they would be as effective without the motivation of this "secret agenda".)