Perplexed comments on Agents of No Moral Value: Constrained Cognition? - Less Wrong

6 Post author: Vladimir_Nesov 21 November 2010 04:41PM


Comment author: Perplexed 21 November 2010 08:00:25PM 1 point

the correct way of setting up the problem is to require that our agent is indifferent to whether the other agent is a person (and conversely).

Some people may find it difficult to satisfy that requirement. In fact, most people are not indifferent.

A better approach, IMHO, is to stipulate that the published payoff matrix already 'factors in' any benevolence toward the other agent arising from ethical considerations.

One objection to my approach might be that for a true utilitarian, there is no possible assignment of selfish utilities to outcomes that would result in the published payoff matrix as the post-ethical-reflection result. But, to my mind, this is just one more argument against utilitarianism as a coherent ethical theory.
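To make the "factoring in" stipulation concrete, here is a minimal sketch (my own illustration, not anything from the thread) using one standard way of modeling other-regarding preferences: each player's published payoff is their selfish payoff plus a fraction of the other player's. The `altruism` weight and the function name are hypothetical choices for illustration only.

```python
def publish_payoffs(selfish, altruism=0.5):
    """Transform a 2-player game's selfish payoffs into
    'post-ethical-reflection' published payoffs:
        U_i = u_i + altruism * u_j
    `selfish` maps each outcome to a (row_payoff, col_payoff) tuple.
    `altruism` is a hypothetical benevolence weight in [0, 1].
    """
    return {
        outcome: (u1 + altruism * u2, u2 + altruism * u1)
        for outcome, (u1, u2) in selfish.items()
    }

# Selfish Prisoner's Dilemma payoffs: (row, col) for each action pair.
pd = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

published = publish_payoffs(pd, altruism=0.5)
# e.g. ("C", "C") -> (4.5, 4.5); ("C", "D") -> (2.5, 5.0)
```

On this picture the agents can then reason purely selfishly about `published`, since any ethical concern for the other agent has already been absorbed into the matrix.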

Comment author: Vladimir_Nesov 21 November 2010 08:15:40PM 0 points

Some people may find it difficult to satisfy that requirement. In fact, most people are not indifferent.

All people are not indifferent. (And that was not meant as a qualification.)