Comment author: Strange7 26 March 2010 01:06:56PM 1 point [-]

The classic one is euthanasia.

Comment author: woozle 27 March 2010 02:22:16AM 0 points [-]

Your example exposes the flaw in the "destroy everything instantly and painlessly" pseudo-solution: it assumes that life is more suffering than pleasure. (Euthanasia is only performed -- or argued for, anyway -- when the gain from continuing to live is believed to be outweighed by the suffering.)

I think this shows that there needs to be a term for pleasure/enjoyment in the formula...

...or perhaps a concept or word which equates to either suffering and pleasure depending on signage (+/-), and then we can simply say that we're trying to maximize that term -- where the exact aggregation function has yet to be determined, but we know it has a positive slope.
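This signed-term idea can be sketched as a toy calculation. The numbers and the plain-sum aggregation below are illustrative assumptions, not a settled formula -- a sum is just the simplest aggregation with positive slope:

```python
# Toy sketch: one signed "welfare" term, positive for pleasure and
# negative for suffering, aggregated by a function with positive slope
# (here, a plain sum -- the simplest such function).

def aggregate_welfare(experiences):
    """Sum signed welfare values: +x is pleasure, -x is suffering."""
    return sum(experiences)

# The euthanasia example in signed terms: a life of sustained suffering
# scores below no experience at all, which is what that argument assumes.
continuing = aggregate_welfare([-5, -5, -5])  # ongoing suffering
ending = aggregate_welfare([])                # no further experience
```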

Comment author: Morendil 26 March 2010 07:47:39AM 1 point [-]

Learning is a terminal value for me, which I hold irreducible to its instrumental advantages in contributing to my well-being.

Comment author: woozle 27 March 2010 02:07:58AM 0 points [-]

That seems related to what I was trying to get at with the placeholder-word "freedom" -- I was thinking of things like "freedom to explore" and "freedom to create new things" -- both of which seem highly related to "learning".

It looks like we're talking about two subtly different types of "terminal value", though: for society and for one's self. (Shall we call them "external" and "internal" TVs?)

I'm inclined to agree with your internal TV for "learning", but that doesn't mean that I would insist that a decision which prevented others from learning was necessarily wrong -- perhaps some people have no interest in learning (though I'm not going to be inviting them to my birthday party).

If a decision prevented learnophiles from learning, though, I would count that as "harm" or "suffering" -- and thus it would be against my external TVs.

Taking the thought a little further: I would be inclined to argue that unless an individual is clearly learnophobic, or it can be shown that too much learning could somehow damage them, then preventing learning in even neutral cases would also be harm -- because learning is part of what makes us human. I realize, though, that this argument is on rather thinner rational ground than my main argument, and I'm mainly presenting it as a means of establishing common emotional ground. Please ignore it if this bothers you.

Take-away point: My proposed universal external TV (prevention of suffering) defines {involuntary violation of internal TVs} as harm/suffering.

Hope that makes sense.

Comment author: mattnewport 26 March 2010 04:30:58AM 0 points [-]

I don't know. I admitted that this was an area where there might be individual disagreement; I don't know the exact nature of the fa() and fb() functions -- just that we want to minimize [my definition of] suffering and maximize freedom.

So you want to modify your original statement:

I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.

To something like: "I propose that the ultimate terminal value of every rational, compassionate human is to minimize [woozle's definition of] suffering (which woozle can't actually define but knows it when he sees it)"?

Your proposal seems to be phrased as a descriptive rather than normative statement ('the ultimate terminal value of every rational, compassionate human is' rather than 'should be'). As a descriptive statement this seems factually false unless you define 'rational, compassionate human' as 'human who aims to minimize woozle's definition of suffering'. As a normative statement it is merely an opinion and one which I disagree with.

So I don't agree that minimizing suffering by any reasonable definition I can think of (I'm having to guess since you can't provide one) is or should be the terminal value of human beings in general or this human being in particular. Perhaps that means I am not rational or compassionate by your definition, but I am not entirely lacking in empathy -- I've been known to shed a tear when watching a movie and to feel compassion for other human beings.

again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.

Well, you need to make some effort to clarify your definition then. If killing someone to save them from an eternity of torture is an increase in suffering by your definition, what about preventing a potential someone from ever coming into existence? Death represents both the cessation of suffering and the cessation of life, and is extreme suffering by your definition. Is abortion or contraception also a cause of great suffering due to the denial of a potential life? If not, why not?

Second... many people have been socialized into believing that certain intermediate values (faith, honor, patriotism, fairness, justice, honesty...) are themselves terminal values -- but when anyone tries to justify those values as being good and right, the justifications inevitably come down to either (a) preventing harm to others, or (b) preventing harm to one's self. ...and (b) only supersedes (a) for people whose self-interest outweighs their integrity.

So everyone shares your self declared terminal value of minimizing suffering but many of them don't know it because they are confused, brainwashed or evil? Is there any point in me debating with you since you appear to have defined my disagreement to be confusion or a form of psychopathy?

Comment author: woozle 27 March 2010 01:44:04AM 0 points [-]

Are you saying that I have to be able to provide you an equation which produces a numeric value as an answer before I can argue that ethical decisions should be based on it?

But ok, a rephrase and expansion:

I propose that (a) the ultimate terminal value of every rational, compassionate human is to minimize aggregate involuntary discomfort as defined by the subjects of that discomfort, and (b) that no action or decision can reasonably be declared "wrong" unless it can at least be shown to cause significant amounts of such discomfort. (Can we at least acknowledge that it's ok to use qualitative words like "significant" without defining them exactly?)

I intend it as a descriptive statement ("is"), and I have been asking for counterexamples: show me a situation in which the "right" decision increases the overall harm/suffering/discomfort of those affected.

I am confident that I can show how any supposed counterexamples are in fact depending implicitly on the rationale I am proposing, i.e. minimizing involuntary discomfort.

Comment author: Strange7 26 March 2010 01:45:33AM 2 points [-]

Point 3 again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.

So, how much suffering would you say an unoccupied volume of space is subject to? A lump of nonliving matter? A self-consistent but non-instantiated hypothetical person?

Comment author: woozle 26 March 2010 12:54:24PM 0 points [-]

It's true that there would be no further suffering once the destruction was complete.

This is a bit of an abstract point to argue over, but I'll give it a go...

I started out earlier arguing that the basis of all ethics was {minimizing suffering} and {maximizing freedom}; I later dropped the second term because it seemed like it might be more of a personal preference than a universal principle -- but perhaps it, or something like it, needs to be included in order to avoid the "destroy everything instantly and painlessly" solution.

That said, I think it's more of a glitch in the algorithm than a serious exception to the principle. Can you think of any real-world examples, or class of problems, where anyone would seriously argue for such a solution?

Comment author: Jack 25 March 2010 11:48:15PM 1 point [-]

For example: I also favor ethical pluralism (I'm not sure what "particularism" is), for the reasons that it leads to a more vibrant and creative society, whilst the opposite (which I guess would be suppressing or discouraging any but some "dominant" culture) leads to completely unnecessary suffering.

This is my fault. I don't mean multiculturalism or political pluralism. I really do mean pluralism about terminal values. By particularism I mean that there are no moral principles and that the right action is entirely circumstance dependent. Note that I'm not actually a particularist since I did give you moral principles. I would say that I am a value pluralist.

It seems to me that the terminal values you list are really just means to an end, and that the end in question is similar to my own -- i.e. some combination of minimizing harm and maximizing freedom (to put it in terms which are a bit of an oversimplification).

But I'm explicitly denying this. For example, I am a cosmopolitan. In your discussion with Matt you've said that for now you care about helping poor Americans, not the rest of the world. But this is totally antithetical to my terminal values. I would vastly prefer to spend political and economic capital to get rid of agricultural subsidies in the developed world, liberalize as many immigration and trade laws as I can, and test strategies for economic development. Whether or not the American working class has cheap health care really is quite insignificant to me by comparison.

Now, when I say I have a terminal value of fairness I really do mean it. I mean I would sacrifice utility or increase overall suffering in some circumstances in order to make the world more fair. I would do the same to make the world more free and the same to make the world more honest in some situations. I would do things that furthered the happiness of my friends and family but increased your suffering (nothing personal). I don't know what gives you reason to deny any of this.

I do not deny this, but I also do not believe they are being rational in those assignments. Why should the "morality" of a particular act matter in the slightest if it has been shown to be completely harmless?

Now you're just begging the question. My whole point this entire time is that there is no reason for morality to always be about harm. Indeed, there is no reason for morality to ever be about harm except that we make it so. I frankly don't even understand the application of the word "rationality" as we use it here to values. Unless you have a third meaning for the word your usage here is just a category error!

Comment author: woozle 26 March 2010 11:30:03AM *  0 points [-]

By particularism I mean that there are no moral principles and that the right action is entirely circumstance dependent.

So how do you rationally decide if an action is right or wrong? -- or are you saying you can't do this?

Also, just to be clear: you are saying that you do not believe rightness or wrongness of an action ultimately derives from whether or not it does harm? ("Harm" being the more common term; I tried to refine it a bit as "personally-defined suffering", but I think you're disagreeing with the larger idea -- not my refinement of it.)

In your discussion with Matt you've said that for now you care about helping poor Americans, not the rest of the world.

Matt (I believe) misinterpreted me that way too. No, that is not what I said.

What I was trying to convey was that I thought I had a workable and practical principle by which poor Americans could be helped (redistribution of American wealth via mechanisms and rules yet to be worked out), while I don't have such a solution for the rest of the world [yet].

I tried to make it quite clear that I do care about the rest of the world; the fact that I don't yet have a solution for them (and am therefore not offering one) does not negate this.

I also tried to make it quite clear that my solution for Americans must not come at the price of harming others in the world, and that (further) I believe that as long as it avoids this, it may be of some benefit to the rest of the world as we will not be allowing unused resources to languish in the hands of the very richest people (who really don't need them) -- leaving the philanthropists among us free to focus on poverty worldwide rather than domestically.

(At a glance, I agree with your global policy position. I don't think it contradicts my own. I'm not talking about reallocation of existing expenditures -- foreign aid, tax revenues, etc. -- I'm talking about reallocating unused -- one might even use the word "hoarded" -- resources, via socialistic, capitalistic, or whatever other means seem best*.)

(*the definition of this slippery term comes back ultimately to what we're discussing here: "what is good?")

Now you're just begging the question. My whole point this entire time is that there is no reason for morality to always be about harm. Indeed, there is no reason for morality to ever be about harm except that we make it so. I frankly don't even understand the application of the word "rationality" as we use it here to values. Unless you have a third meaning for the word your usage here is just a category error!

First of all, when I say "harm" or "suffering", I'm not talking about something like "punishing someone for bad behavior"; the idea behind doing that (whether correct or not) is that this ultimately benefits them somehow, and any argument over such punishment will be based on whether harm or good is being done overall. "Hitting a masochist" would not necessarily qualify as harm, especially if you will stop when the masochist asks you to.

Second... when we look at harm or benefit, we have to look at the system of people affected. This isn't to say that if {one person in the system benefits more than another is harmed} then it's ok, because then we get into the complexity of what I'll call the "benefit aggregation function" -- which involves values that probably are individual.

It's also reasonable (and often necessary) to look at a decision's effects on society (if you let one starving person get away with stealing a cookie under a particular circumstance, then other hungry people may think it's always okay to steal cookies) in the present and in the long term. This is the basis of many arguments against gay marriage, for example -- the idea that society will somehow be harmed -- and hence individuals will be harmed as society crumbles around them -- by "changing the definition of marriage". (The evidence is firmly against those arguments, but that's not the point.)

Third: I'm arguing that "[avoiding] harm" is the ultimate basis for all empathetic-human* arguments about morality, and I suggest that this would be true for any successful social species (not just humans). (*by which I mean "humans with empathy" -- specifically excluding psychopaths and other people whose primary motive is self-gratification)

I suggest that if you can't argue that an action causes harm of some kind, you have absolutely no basis for claiming the action is wrong (within the context of discussions with other humans or social sophonts).

You seem to be arguing, however, that actions can be wrong without causing any demonstrable harm. Can you give an example?

Comment author: mattnewport 25 March 2010 10:51:43PM 0 points [-]

I think you are wrong but I don't think you've even defined the goal clearly enough to point to exactly where. Some questions:

  • How do we weight individual contributions to suffering? Are all humans weighted equally? Do we consider animal suffering?
  • How do we measure suffering? Should we prefer to transfer suffering from those with a lower pain threshold to those with a greater tolerance?
  • How do you avoid the classic unfriendly AI problem of deciding to wipe out humanity to eliminate suffering?
  • Do you think that people actually generally act in accordance with this principle or only that they should? If the latter to what extent do you think people currently do act in accordance with this value?

There are plenty of other problems with the idea of minimizing suffering as the one true terminal value but I'd like to know your answers to these questions first.
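The weighting question in the first bullet can be made concrete with a toy aggregator. The two weight vectors below (impartial vs. family-favoring) are illustrative assumptions, not anyone's stated position -- the point is only that the two weightings can rank the same outcomes in opposite order:

```python
# Toy illustration of the weighting question: the same pair of outcomes
# ranks in opposite order under an impartial (equal-weight) aggregation
# versus a partial one that weights family over strangers.

def aggregate_suffering(suffering, weights):
    """Weighted sum of per-person suffering values (lower is better)."""
    return sum(w * s for w, s in zip(weights, suffering))

# Two scenarios over (family, stranger) suffering:
scenario_a = [3.0, 1.0]   # family bears more of the suffering
scenario_b = [1.0, 4.0]   # suffering shifted onto the stranger

equal = [1.0, 1.0]        # impartial utilitarian weighting
partial = [10.0, 1.0]     # weights family far above strangers

# Impartial weighting prefers A (less total suffering);
# the partial weighting prefers B (family suffers less).
assert aggregate_suffering(scenario_a, equal) < aggregate_suffering(scenario_b, equal)
assert aggregate_suffering(scenario_b, partial) < aggregate_suffering(scenario_a, partial)
```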

Comment author: woozle 26 March 2010 01:35:29AM 0 points [-]

Points 1 and 2:

I don't know. I admitted that this was an area where there might be individual disagreement; I don't know the exact nature of the fa() and fb() functions -- just that we want to minimize [my definition of] suffering and maximize freedom.

Actually, on thinking about it, I'm thinking "freedom" is another one of those "shorthand" values, not a terminal value; I may personally want freedom, but other sentients might not. A golem, for example, would have no use for it (no comments from Pratchett readers, thank you). Nor would a Republican. [rimshot]

The point is not that we can all agree on a quantitative assessment of which actions are better than others, but that we can all agree that the goal of all these supposedly-terminal values (which are not in fact terminal) is to minimize suffering*.

(*Should I call it "subjective suffering"? "woozalian suffering"?)

Point 3 again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.

Point 4: Yes, with some major caveats...

First, I think this principle is at the heart of human wiring. Some people may not have it (about 5% of the population lacks any empathy), but we're not inviting those folks to the discussion table at this level.

Second... many people have been socialized into believing that certain intermediate values (faith, honor, patriotism, fairness, justice, honesty...) are themselves terminal values -- but when anyone tries to justify those values as being good and right, the justifications inevitably come down to either (a) preventing harm to others, or (b) preventing harm to one's self. ...and (b) only supersedes (a) for people whose self-interest outweighs their integrity.

Comment author: mattnewport 18 March 2010 06:02:34PM *  0 points [-]

My terminal values are to minimize suffering and maximize individual freedom and ability to create, explore, and grow in wisdom via learning about the universe.

By 'minimize suffering' I assume you mean some kind of utilitarian conception of minimizing aggregate suffering equally weighted across all humans (and perhaps extended to include animals in some way). If so this would be one area we differ. Like most humans, I don't apply equal weighting to all other individuals' utilities. I don't expect other people's weightings to match my own, nor do I think it would be better if we all aimed to agree on a unique set of weightings. I care more about minimizing the suffering of my family and friends than I do about some random stranger, an animal, a serial killer, a child molester or a politician. I do not think this is a problem.

Comment author: woozle 25 March 2010 10:42:21PM 2 points [-]

Much discussion about "minimization of suffering" etc. ensued from my first response to this comment, but I thought I should reiterate the point I was trying to make:

I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.

(Tentative definition: "suffering" is any kind of discomfort over which the subject has no control.)

All other values (from any part of the political continuum) -- "human rights", "justice", "fairness", "morality", "faith", "loyalty", "honor", "patriotism", etc. -- are not rational terminal values.

This isn't to say that they are useless. They serve as a kind of ethical shorthand, guidelines, rules-of-thumb, "philosophical first-aid": somewhat-reliable predictors of which actions are likely to cause harm (and which are not) -- memes which are effective at reducing harm when people are infected by them. (Hence society often works hard to "sugar coat" them with simplistic, easily-comprehended -- but essentially irrelevant -- justifications, and otherwise encourage their spread.)

Nonetheless, they are not rational terminal values; they are stand-ins.

They also have a price:

  • they do not adapt well to changes in our evolving rational understanding of what causes harm/suffering, so that rules which we now know cause more suffering than benefit are still happily propagating out in the memetic wilderness...
  • any rigid rule (like any tool) can be abused.

...

I seem to have taken this line of thought a bit further than I meant to originally -- so to summarize: I'd really like to hear if anyone believes there are other rational terminal values other than (or which cannot ultimately be reduced to) "minimizing suffering".

Comment author: Jack 24 March 2010 02:55:33AM *  1 point [-]

I don't think any bumper sticker successfully encapsulates my terminal values. I'm highly sympathetic to ethical pluralism and particularism. I value fairness and happiness (politically I'm a cosmopolitan Rawlsian liberal) with additional values of freedom and honesty which under certain conditions can trump fairness and happiness. I also value the existence of what I would recognize as humanity and limiting the possibility of the destruction of humanity can sometimes trump all of the above. Values weighted toward myself, my family and friends. It's possible all of these things could be reduced to more fundamental values, I'm not sure. There are cases where I have no good procedure for evaluating which outcome is more desirable.

My terminal values are to minimize suffering and maximize individual freedom and ability to create, explore, and grow in wisdom via learning about the universe.

It is worth noting, if you think these are rationally justifiable somehow, that maximizing two different values is going to leave you with an incomplete function in some circumstances. Some options will minimize suffering but fail to maximize freedom, and vice versa.

If anyone has different terminal values, I'd like to hear more about that.

If you were looking for people here with different values, see above (though I don't know how much we differ). But note that the people here are going to have heavy overlap on values for semi-obvious reasons. But there are people out there who assign intrinsic moral relevance to national borders, race, religion, sexual purity, tradition etc. Do you still deny that?

Comment author: woozle 25 March 2010 09:13:35PM *  0 points [-]

It seems to me that the terminal values you list are really just means to an end, and that the end in question is similar to my own -- i.e. some combination of minimizing harm and maximizing freedom (to put it in terms which are a bit of an oversimplification).

For example: I also favor ethical pluralism (I'm not sure what "particularism" is), for the reasons that it leads to a more vibrant and creative society, whilst the opposite (which I guess would be suppressing or discouraging any but some "dominant" culture) leads to completely unnecessary suffering.

You are right that maximizing two values is not necessarily solvable. The apparent duality of the goal as stated has more to do with the shortcomings of natural language than it does with the goals being contradictory. If you could assign numbers to "suffering" (S) and "individual freedom" (F), I would think that the goal would be to maximize aS + bF for some values of a and b which have yet to be worked out.

[Addendum: this function may be oversimplifying things as well; there may be one or more nonlinear functions applied to S and/or F before they are added. What I said below about the possible values of a and b applies also to these functions. A better statement of the overall function would probably be fa(S) + fb(F), where fa() and fb() are both - I would think - positively-sloped for all input values.]

[Edit: ACK! Got confused here; the function for S would be negative, i.e. we want less suffering.]

[Another edit in case anyone is still reading this comment for the first time: I don't necessarily count "death" as non-suffering; I suppose this means "suffering" isn't quite the right word, but I don't have another one handy]

The exact values of a and b may vary from person to person -- perhaps they even are the primary attributes which account for one's political predispositions -- but I would like to see an argument that there is some other desirable end goal for society, some other term which belongs in this equation.
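Folding in the sign correction above, the goal function fa(S) + fb(F) -- fa negatively sloped in suffering, fb positively sloped in freedom -- can be sketched as a toy function. The linear and logarithmic shapes here are illustrative assumptions only; nothing in the proposal pins them down:

```python
import math

# Sketch of the proposed goal function fa(S) + fb(F): fa is negatively
# sloped in suffering S (per the sign correction above) and fb is
# positively sloped in freedom F. The particular shapes are assumptions.

def fa(suffering):
    return -2.0 * suffering        # more suffering lowers the score

def fb(freedom):
    return math.log1p(freedom)     # positive slope, diminishing returns

def goal(suffering, freedom):
    return fa(suffering) + fb(freedom)

# Holding one input fixed, the goal moves in the intended direction:
assert goal(1.0, 5.0) > goal(2.0, 5.0)   # less suffering is better
assert goal(1.0, 6.0) > goal(1.0, 5.0)   # more freedom is better
```

The per-person coefficients a and b discussed above would correspond to the slopes chosen for fa and fb; different people plugging in different slopes is exactly the individual variation described.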

...there are people out there who assign intrinsic moral relevance to national borders, race, religion, sexual purity, tradition etc. Do you still deny that?

I do not deny this, but I also do not believe they are being rational in those assignments. Why should the "morality" of a particular act matter in the slightest if it has been shown to be completely harmless?

Comment author: woozle 25 March 2010 11:40:38AM 1 point [-]

Amen to that... I remember when it was illegal to connect your own equipment to Phone Company wires, and telephones were hard-wired by Phone Company technicians.

The obvious flaw in the current situation, of course, is the regional monopolies -- slowly being undercut by competition from VoIP, but still: as it is, if I want wired phone service in this area, I have to deal with Verizon, and Verizon is evil.

This suggests to me that a little more regulation might be helpful -- but you seem to be suggesting that the lack of competition in the local phone market is actually due to some vestiges of government regulation of the industry -- or am I misunderstanding?

(No rush; I shouldn't be spending so much time on this either... but I think it's important to pursue these lines of thought to some kind of conclusion.)

Comment author: woozle 25 March 2010 11:54:19AM 1 point [-]

A little follow-up... it looks like the major deregulatory change was the Telecommunications Act of 1996; the "freeing of the phone jack" took place in the early 1980s or late 1970s, and modular connectors (RJ11) were widespread by 1985, so either that was a result of earlier, less sweeping deregulation or else it was simply an industry response to advances in technology.

Comment author: mattnewport 25 March 2010 01:36:26AM *  1 point [-]

Can you give me some examples (mainly of genuine deregulation -- I got the financial industry non-deregulation; will have to ponder that example)?

I don't have time to reply to your whole post right now (I'll try to give a fuller response later) but telecom deregulation is the first example that springs to mind of (imperfect but) largely successful deregulation.

