Vladimir_Nesov comments on Questions for Moral Realists - Less Wrong Discussion

4 Post author: peter_hurford 13 February 2013 05:44AM

Comment author: Vladimir_Nesov 13 February 2013 02:29:18PM

> The arguments around CEV suggest that these moral theories ought to converge.

In the practical sense, only something in particular can be done with the world, so if "morality" is taken to refer to the goal given to a world-optimizing AI, it is something specific by construction. If instead we take "morality" as given by the data of individual people, we can define a personal morality for each person, and these would almost certainly differ somewhat from one another. Given the task of arriving at a single goal for the world, it might prove useful to exploit the similarities between personal moralities, or to sidestep the concept altogether, but eventual "convergence" is more a design criterion than a prediction. In a world containing both humans and pebblesorters, arriving at a single goal would still be an important problem, even though we wouldn't expect their goals to "naturally" converge under reflection.