fubarobfusco comments on Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The connection between AI and rationality could be made stronger.
Indeed, that's been my impression for a little while. I'm unconvinced that AI is the #1 existential risk. The set of problems descending from the fact that known life resides in a single biosphere — ranging from radical climate change, to asteroid collisions, to engineered pathogens — seems to be right up there. I want all AI researchers to be familiar with FAI concerns; but there are more people in the world whose decisions have any effect at all on climate change risks — and maybe even on pathogen research risks! — than on AI risks.
But anyone who wants humanity to solve these problems should want better rationality and better (trans?)humanist ethics.