timtyler comments on Existential Risk - Less Wrong

Post author: lukeprog 15 November 2011 02:23PM




Comment author: [deleted] 15 November 2011 07:33:55PM  3 points

There are some points I dislike about this introduction. The first is the implicit speciesism resulting from its focus on the extinction of Homo sapiens as a species. It would have made sense to use Bostrom's definition of existential risk, which focuses on earth-originating intelligent life instead. The replacement of humans by posthumans is not an existential risk. Transhumanism usually advocates the well-being of all sentience, not just humans — this covers both non-human animals (e.g. in natural ecosystems) and posthumans spreading into space.

Perhaps more seriously, it is assumed without further justification that preventing existential risk is an ethical good, on the grounds that colonizing the galaxy would create positive value structures on a great scale. This is incomplete without also considering that colonization could create negative value structures on a comparable scale. At present, as far as we know, the galaxy contains no involuntarily existing suffering entities outside of planet earth. In the future that may change, and it may partially be Stanislav Petrov's fault.

We'd better get this right, because it really is important, and leaving half of the equation out of an introductory article like this doesn't further that goal.

Comment author: timtyler 15 November 2011 08:20:46PM  5 points

It would have made sense to use Bostrom's definition of existential risk, which focuses on earth-originating intelligent life instead. Replacement of humans by posthumans is not existential risk.

Some people hereabouts are concerned about some types of posthuman and "earth-originating intelligent life".