Alicorn comments on White Lies - Less Wrong

38 Post author: ChrisHallquist 08 February 2014 01:20AM




Comment author: Alicorn 14 February 2014 12:37:20AM 3 points [-]

Is this only a linguistic argument about what to call morality?

You could rename everything, but if you renamed my deontological rules "fleeb", I would go on considering fleeb to be ontologically distinct in important ways from things that are not fleeb. I'm pretty sure it's not just linguistic.

Is there a reason you prefer to limit the domain of morality?

Because there's already a perfectly good vocabulary for the ontologically distinct non-fleeb things that people are motivated to act towards - "prudence", "axiology".

Is there a concept you think gets lost when all of life is included in ethics (in virtue ethics or utilitarianism)?

Unassailable priority. People start looking at very large numbers and nodding to themselves and deciding that these very large numbers mean that if they take a thought experiment as a given they have to commit atrocities.

Also, could you clarify the idea of obligations? Are there any obligations which don't emanate from the rights of another person?

Yes; I have a secondary rule which for lack of better terminology I call "the principle of needless destruction". It states that you shouldn't go around wrecking stuff for no reason or insufficient reason, with the exact thresholds as yet undefined.

Are there any obligations which emerge inherently from a person's humanity and are therefore not waivable?

"Humanity" is the wrong word; I apply my ethics across the board to all persons regardless of species. I'm not sure I understand the question even if I substitute "personhood".

Comment author: jazmt 16 February 2014 12:45:58AM -1 points [-]

Let's take truth-telling as an example. What is the difference between saying that there is an obligation to tell the truth, that honesty is a virtue, or that telling the truth is a terminal value which we must maximize in a consequentialist-type equation? Won't the different frameworks be mutually supportive, since obligation will create a terminal value, virtue ethics will show how to incorporate that into your personality, and consequentialism will say that we must be prudent in attaining it? Similarly, prudence is a virtue which we must be consequentialist to attain and which is useful in living up to our deontological obligations. And justice is a virtue which emanates from the obligations not to steal and not to harm other people, and therefore we must consider the consequences of our actions so that we don't end up in a situation where we will act unjustly.

I think I am misunderstanding something in your position, since it seems to me that you don't disagree with consequentialism about the need to calculate, but rather about what the terminal values are (with utilitarianism saying utility is the only terminal value, and you saying that there are numerous, such as not lying, not stealing, not being destructive, etc.).

By obligations which emerge from a person's personhood and which are not waivable, I mean obligations that emerge from the self and not in relation to another's rights, and that therefore cannot be waived. To take an example (which I know you do not consider an obligation, but which will serve to illustrate the class, since many people have this belief): a person has an obligation to live out their life as a result of their personhood, and therefore is not allowed to commit suicide, since that would be unjust to the self (or nature, or God, or whatever).

Comment author: Alicorn 16 February 2014 06:30:03AM 2 points [-]

What is the difference between saying that there is an obligation to tell the truth, that honesty is a virtue, or that telling the truth is a terminal value which we must maximize in a consequentialist-type equation?

The first thing says you must not lie. The second thing says you must not lie because it signifies or causes defects in your character. The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie. The systems really don't fuse this prettily unless you badly misunderstand at least two of them, I'm afraid. (They can cooperate at different levels and human agents can switch around between implementing each of them, but on a theoretical level I don't think this works.)

I think I am misunderstanding something in your position, since it seems to me that you don't disagree with consequentialism about the need to calculate, but rather about what the terminal values are (with utilitarianism saying utility is the only terminal value, and you saying that there are numerous, such as not lying, not stealing, not being destructive, etc.).

Absolutely not. Did you read Deontology for Consequentialists?

I still don't know what you mean by "emerge from the self", but if I understand the class of thing you're pointing out with the suicide example, I don't think I have any of those.

Comment author: jazmt 16 February 2014 06:25:21PM 0 points [-]

Yes, I read that post. (Thank you for putting in all this time clarifying your view.)

I don't think you understood my question, since "The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie" is not viewing not-lying as a terminal value but rather as an instrumental value. A terminal value would mean that lying is bad not because of what it will lead to (as you explain in that post). But if that is the case, must I act in a situation so as not to be forced to lie? For example, let's say you made a promise to someone not to get fired in your first week at work, and the boss will fire you if he finds out that you cheered for a certain team. Would you say that you shouldn't watch that game, since you would be forced either to lie to the boss or to break your promise of keeping your job? (Please fix any loopholes you notice, since this is only meant for illustration.) If so, it seems like the consequentialist utilitarian is saying that there is a deontological obligation to maximize utility, and that therefore you must act to maximize that, whereas you are arguing that there are other deontological values; but you would agree that you should be prudent in achieving your deontological obligations. (We can put virtue ethics to the side if you want, but won't your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations?)

Comment author: Alicorn 16 February 2014 10:06:02PM 3 points [-]

That's a very long paragraph, I'm going to do my best but some things may have been lost in the wall of text.

I understand the difference between terminal and instrumental values, but your conclusion doesn't follow from this distinction. You can have multiple terminal values. If you terminally value both not-lying and also (to take a silly example) chocolate cake, you will lie to get a large amount of chocolate cake (where the value of "large" is defined somewhere in your utility function). Even if your only terminal value is not-lying, you might find yourself in an odd corner case where you can lie once and thereby avoid lying many times elsewhere. Or if you also value other people not lying, you could lie once to prevent many other people from lying.

a deontological obligation to maximize utility

AAAAAAAAAAAH

you should be prudent in achieving your deontological obligations

It is prudent to be prudent in achieving your deontological obligations. Putting "should" in that sentence flirts with equivocation.

won't your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations?

I think it's possible to act completely morally acceptably according to my system while having whopping defects of character that would make any virtue ethicist blush. It might be unlikely, but it's not impossible.

Comment author: jazmt 17 February 2014 01:18:18AM 0 points [-]

Thank you, I think I understand this now.

To make sure I understand you correctly, are these correct conclusions from what you have said?

a. It is permitted (i.e., ethical) to lie to yourself (though probably not prudent).

b. It is permitted (i.e., ethical) to act in a way which will force you to tell a lie tomorrow.

c. It is forbidden (i.e., unethical) to lie now to avoid lying tomorrow (no matter how many times or how significant the lie in the future).

d. The differences between the systems will only express themselves in unusual corner cases, but the underlying conceptual structure is very different.

I still don't understand your view of utilitarian consequentialism: if "maximizing utility" isn't a deontological obligation emanating from personhood or the like, where does it come from?

Comment author: Alicorn 17 February 2014 01:49:32AM 1 point [-]

A, B, and C all look correct as stated, presuming situations really did meet the weird criteria for B and C. I think differences between consequentialism and deontology come up sometimes in regular situations, but less often when humans are running them, since human architecture will drag us all towards a fuzzy intuitionist middle.

I don't think I understand the last paragraph. Can you rephrase?

Comment author: jazmt 17 February 2014 03:36:27AM -1 points [-]

Why don't you view the consequentialist imperative to always seek maximum utility as a deontological rule? If it isn't deontological where does it come from?

Comment author: Alicorn 17 February 2014 06:53:30AM 1 point [-]

To me, it looks like consequentialists care exclusively about prudence, which I also care about, and not at all about morality, which I also care about. It looks to me like the thing consequentialists call morality just is prudence, and comes from the same places prudence comes from: wanting things, appreciating the nature of cause and effect, etc.

Comment author: jazmt 18 February 2014 01:56:00AM 1 point [-]

Thank you for all of your clarifications, I think I now understand how you are viewing morality.

Comment author: SaidAchmiz 17 February 2014 07:08:30AM 0 points [-]

Could you elaborate on what this thing you call "morality" is?

To me, it seems like the "morality" that deontology aspires to be, or to represent / capture, doesn't actually exist, and thus deontology fails on its own criterion. Consequentialism also fails in this sense, of course, but consequentialism does not actually attempt to work as the sort of "morality" you seem to be referring to.

Comment author: SaidAchmiz 17 February 2014 03:43:15AM *  0 points [-]

The imperative to maximize utility is utilitarian, not necessarily consequentialist. I know I keep harping on this point, but it's an important distinction.

Edit: And even more specifically, it's total utilitarian.

Comment author: Eugine_Nier 17 February 2014 09:08:24PM 0 points [-]

It's VNM consequentialist, which is a broader category than the common meaning of "utilitarian".

Comment author: hyporational 17 February 2014 06:07:40AM 0 points [-]

Keep up the good work. Any idea where this conflation might have come from? It's widespread enough that there might be some commonly misunderstood article in the archives.