fubarobfusco comments on Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far

Post author: Louie 27 December 2011 09:24PM

Comment author: fubarobfusco 29 December 2011 05:16:08AM

> The connection between AI and rationality could be made stronger.

Indeed, that's been my impression for a while. I'm unconvinced that AI is the #1 existential risk. The set of problems stemming from the fact that all known life resides in a single biosphere, ranging from radical climate change to asteroid collisions to engineered pathogens, seems to be right up there. I want every AI researcher to be familiar with FAI concerns; but far more people in the world make decisions that have any effect at all on climate change risks (and maybe even on pathogen research risks!) than on AI risks.

But anyone who wants humanity to solve these problems should want better rationality and better (trans?)humanist ethics.