Eneasz comments on Virtue Ethics for Consequentialists - Less Wrong

33 Post author: Will_Newsome 04 June 2010 04:08PM




Comment author: Eneasz 09 June 2010 10:14:32PM 1 point

The least convenient possible world is one with superhumanly intelligent AIs that can have complete confidence in their source code, and predict with complete confidence that these means (thuggishness) will in fact lead to those ends (saving the world).

However, in such a world, the world has already been saved (or destroyed), so this is not relevant. In any relevant world, the actor resorting to thuggishness to save the world is a human running on hostile hardware, and would be stupid not to take that into consideration.

Comment deleted 10 June 2010 11:55:40AM
Comment author: Eneasz 10 June 2010 03:15:58PM 1 point

I consider the "P" in LCPW to be important. If the agents in question are post-human, then it's too late to worry about saving the world. If you still have to save the world, then standard human failure modes do apply.