Benito comments on CEV: a utilitarian critique - Less Wrong

Post author: Pablo_Stafforini 26 January 2013 04:12PM




Comment author: Benito 27 January 2013 10:28:00PM 0 points

Just to be clear, are you saying that we should treat chickens how humans want to treat them, or how chickens want to be treated? Because if the former, then yeah, CEV can easily find out whether we'd want them to have good lives or not (and I think it would see that we do).

But chickens don't (I think) have much of an ethical system, and if we incorporated their values into what CEV calculates, then we'd be left with some important human values, but also a lot of chicken feed.

Comment author: Utilitarian 28 January 2013 08:03:20AM 4 points

Thanks, Benito. Do we know that we shouldn't have a lot of chicken feed? My point in asking this is just that we're baking in a lot of the answer by choosing which minds we extrapolate in the first place. Now, I have no problem baking in answers -- I want to bake in my answers -- but I'm just highlighting that it's not obvious that the set of human minds is the right one to extrapolate.

BTW, I think the "brain reward pathways" between humans and chickens aren't that different. Maybe you were thinking about the particular, concrete stimuli that are found to be rewarding rather than the general architecture.

Comment author: Adriano_Mannino 28 January 2013 12:05:50PM 7 points

As a matter of fact, I will of necessity treat them as I want to treat them. But I should of course treat them (and it would be good to treat them) as they want to be treated, or as I'd want to be treated in their place.

What makes the values of individual humans important? What makes their frustration a bad thing? It seems that we can basically either hold that not getting what one (really) wants is bad tout court; or we can attempt a further reduction and hold that what matters is only what feels bad/good when we don't (or do) get what we want.

In both cases, the focus on human minds is an arbitrary and irrational bias. For to the extent that non-human minds have equally strong wants or experience equally bad/good feelings when they (don't) get what they want, their values (or the values that they can instantiate) are no less important than the values of humans.