Benito comments on CEV: a utilitarian critique - Less Wrong

Post author: Pablo_Stafforini 26 January 2013 04:12PM

Comment author: Benito 26 January 2013 11:50:44PM -1 points

Okay.

Either, if we all knew more, thought faster, and understood ourselves better, we would decide to farm animals, or we wouldn't. For people to be so fundamentally different that there would be disagreement, they would need massively complex adaptations/mutations, which are vastly improbable. Even someone who sits down and thinks long and hard about an ethical dilemma can very easily be wrong. To say that an AI could not coherently extrapolate our volition is to say that we are so fundamentally unalike that we would not choose to work for a common good if we had the choice.

Comment author: Adriano_Mannino 28 January 2013 01:04:57PM 7 points

But why run this risk? The genuine moral motivation of typical humans seems to be weak. That might even be true of the people working for human and non-human altruistic causes and movements. What if what they really want, deep down, is a sense of importance or social interaction or whatnot?

So why not just go for utilitarianism? By definition, that's the safest option for everyone to whom things can matter/be valuable.

I still don't see what could justify coherently extrapolating "our" volition only. The only non-arbitrary "we" is the community of all minds/consciousnesses.

Comment author: Benito 28 January 2013 06:03:09PM 1 point

What if what they really want, deep down, is a sense of importance or social interaction or whatnot?

This sounds a bit like religious people saying "But what if it turns out that there is no morality? That would be bad!" What part of you thinks that this is bad? Because that is what CEV is extrapolating. CEV takes the deepest and most important values we have and figures out what to do next. In principle, you couldn't care about anything else.

If human values wanted to self-modify, then CEV would recognise this. CEV wants to do what we want most, and this we call 'right'.

The only non-arbitrary "we" is the community of all minds/consciousnesses.

This is what you value, what you chose. Don't lose sight of invisible frameworks. If we're including all decision procedures, then why not computers too? This is part of the human intuition of 'fairness' and 'equality' too. Not the hamster's one.

Comment author: Utilitarian 29 January 2013 08:31:31AM 3 points

This is what you value, what you chose.

Yes. We want utilitarianism. You want CEV. It's not clear where to go from there.

Not the hamster's one.

FWIW, hamsters probably exhibit a sense of fairness too. At least rats do.

Comment author: Adriano_Mannino 30 January 2013 12:10:31PM 1 point

It would indeed be bad (objectively, for the world) if, deep down, we did not really care about the well-being of all sentience. By definition, there would then be some sentience that ends up having a worse life than it could have had. This is an objective matter.

Yes, it is what I value, but not just that. The thing is that if you're a non-utilitarian, your values don't correspond to the value(s) that exist in the world. If we're working toward CEV, we seem to be engaged in an attempt to make our values correspond to the value(s) in the world. If so, CEV is probably the wrong approach.