pleeppleep comments on CEV: a utilitarian critique - Less Wrong

Post author: Pablo_Stafforini, 26 January 2013 04:12PM (25 points)


Comment author: pleeppleep, 27 January 2013 12:14:39AM, 2 points

This is a question about utilitarianism, not AI, but can anyone explain (or provide a link to an explanation of) why reducing the total suffering in the world is considered so important? I thought we pretty much agreed that morality is based on moral intuitions, and it seems pretty counterintuitive to value the states of mind of people too numerous to sympathize with as highly as people here do.

It seems to me that reducing suffering in a numbers game is the kind of thing you would say is your goal because it makes you sound like a good person, rather than something your conscience actually motivates you to do. But people here are usually pretty averse to conscious signaling, so I'm not sure that works as an explanation. I'm certain this has been covered elsewhere, but I haven't seen it.

Comment author: Pablo_Stafforini, 27 January 2013 12:49:17AM, 6 points

When I become directly acquainted with an episode of intense suffering, I come to see that this is a state of affairs that ought not to exist. My empathy may be limited, but I don't need to empathize with others to recognize that, when they suffer, their suffering ought to be relieved too.

I don't pretend to speak on behalf of all other hedonistic utilitarians, however. Brian himself would probably disagree with my answer. He would instead reply that he "just cares" about other people's suffering, and that's that.

Comment author: vallinder, 27 January 2013 11:54:15AM, 1 point

Knowing that you've abandoned moral realism, how would you respond to someone making an analogous argument about preferences or duties? For instance, "When a preference of mine is frustrated, I come to see this as a state of affairs that ought not to exist," or "When someone violates a duty, I come to see this as a state of affairs that ought not to exist." Granted, the acquaintance may not be as direct as in the case of intense suffering. But is that enough to single out pleasure and suffering?

Comment author: Utilitarian, 27 January 2013 12:01:28PM, 4 points

Preventing suffering is what I care about, and I'm going to try to convince other people to care about it. One way to do that is to invent plausible thought experiments / intuition pumps for why it matters so much. If I do, that might help with evangelism, but it's not the (original) reason why I care about it. I care about it because of experience with suffering in my own life, feeling strong empathy when seeing it in others, and feeling that preventing suffering is overridingly important due to various other factors in my development.

Comment author: vallinder, 27 January 2013 12:05:05PM, 2 points

Thanks, Brian. I know this is your position, I'm wondering if it's benthamite's as well.

Comment author: aelephant, 27 January 2013 11:59:21PM, 1 point

You could reduce human suffering to 0 by reducing the number of humans to 0, so there's got to be another value greater than reducing suffering.

It seems plausible to me that suffering could serve some useful purpose, and that eliminating it (or seeking to eliminate it) might have horrific consequences.

Comment author: Jabberslythe, 28 January 2013 08:58:33AM, 2 points

> You could reduce human suffering to 0 by reducing the number of humans to 0, so there's got to be another value greater than reducing suffering.

Almost all hedonistic utilitarians, including Brian, are concerned with maximizing happiness as well as minimizing suffering. The reason he talks about suffering so much is that he ranks a unit of suffering as, say, a -3 experience where most people rank it as, say, a -1 experience. He also thinks that there is much more suffering than happiness in the world, and that it is easier to prevent suffering.

(Sorry if I got any of this wrong, Brian.)

Comment author: Utilitarian, 31 January 2013 06:30:21AM, 0 points

Thanks, Jabberslythe! You got it mostly correct. :)

The one thing I would add is that I personally think people don't usually take suffering seriously enough -- at least not really severe suffering, like torture or being eaten alive. Indeed, many people may never have experienced anything that bad. So I put high importance on preventing experiences like these relative to other things.

Comment author: Adriano_Mannino, 28 January 2013 01:25:05PM, 5 points

Why are you so certain that a population of 0 would be a problem? There would, in fact, be no one for whom it would (or could!) be a problem -- no one whose values could rate that state of affairs as bad. Would it be a problem if no form of consciousness had ever come into existence? Why would that be problematic?

Comment author: DanArmak, 28 January 2013 05:45:52PM, 1 point

Cooperation for mutual benefit. Potential alliance building. Signalling of reliability, benevolence, and capability. It's often beneficial to adopt a general policy of helping strangers whenever the personal price is low enough. And (therefore) the human mind is such that people mostly enjoy helping others as long as it's not too strenuous.

Comment author: Jabberslythe, 28 January 2013 08:48:09AM, 1 point

> It seems to me that reducing suffering in a numbers game is the kind of thing you would say is your goal because it makes you sound like a good person

I am not sure that the hedonistic utilitarian agenda is high status. The most plausible cynical/psychological critique of hedonistic utilitarians is that they are too worried about ethical consistency and about coherently extrapolating a simple principle from their values.

Comment author: drethelin, 27 January 2013 10:19:53AM, 1 point

I'm not strongly emotionally motivated to reduce suffering in general, but I realize that my own suffering and others' are instances of suffering in general, so I think it's a good policy to try to reduce world-suck. This is reasonably approximated by saying I would like to reduce unhappiness, or increase happiness, or some such thing.

Comment author: Adriano_Mannino, 28 January 2013 01:16:27PM, 1 point

What is it that you are strongly motivated to do in this world, then? Are you strongly motivated to reduce/prevent drethelin_tomorrow's suffering, for instance?