thomblake comments on Normal Cryonics - Less Wrong

58 Post author: Eliezer_Yudkowsky 19 January 2010 07:08PM

Comment author: komponisto 26 January 2010 06:22:42PM 0 points [-]

Geez, can we drop the "utility functions" and all the other consequentialism debris for like a week sometime? It would be a welcome respite.

Utility functions describe your preferences. Their existence doesn't presuppose consequentialism, I don't think.

Comment author: thomblake 26 January 2010 06:53:40PM 1 point [-]

Utility functions describe your preferences. Their existence doesn't presuppose consequentialism, I don't think.

There are a few things meant by "consequentialism". It can range from something as general as "outcomes/consequences are what's important when making decisions" to something as specific as "Mill's Utilitarianism". The term was only coined in the mid-20th century and is not-very-technical jargon, so its meaning hasn't quite settled yet. I'm pretty sure the use here is more on the general side.

Other theories about what's important when making decisions (deontology, virtue ethics) could perhaps be expressed as utility functions, but are not naturally amenable to that framing.

Comment author: komponisto 26 January 2010 07:12:57PM 0 points [-]

Other theories about what's important when making decisions (deontology, virtue ethics) could perhaps be expressed as utility functions, but are not naturally amenable to that framing.

Why not, if they're about preferences?

My understanding is that a utility function is nothing but a scaled preference ordering, and I interpret ethical debates as disputes about what one's preferences -- i.e., one's utility function -- ought to be.

For example (to oversimplify and caricature): the "consequentialist" might argue that one should be willing to torture one person to save 1000 from certain death, while the "deontologist" argues that one should not because Torture is Wrong. Both sides of this argument are asserting preferences about the state of the world: the "consequentialist" assigns higher utility to the situation in which 1000 people are alive and you're guilty of torture, and the "deontologist" assigns higher utility to the situation in which the 1000 have perished but your hands are clean.
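The point being made here -- that both sides are just assigning utilities to whole world-states -- can be sketched in a few lines of Python. The specific numbers are hypothetical, chosen only to encode each side's ordering:

```python
# Two world-states from the torture example above.
SAVED_BY_TORTURE = "1000 alive, you tortured one person"
CLEAN_HANDS = "1000 dead, your hands are clean"

# Hypothetical utility assignments encoding each side's preferences.
consequentialist = {SAVED_BY_TORTURE: 1000, CLEAN_HANDS: 0}
deontologist = {SAVED_BY_TORTURE: 0, CLEAN_HANDS: 1000}

def preferred(utility, a, b):
    """Return whichever state the given utility function ranks higher."""
    return a if utility[a] > utility[b] else b

print(preferred(consequentialist, SAVED_BY_TORTURE, CLEAN_HANDS))
print(preferred(deontologist, SAVED_BY_TORTURE, CLEAN_HANDS))
```

On this reading, the ethical dispute is entirely a dispute over which dictionary of numbers is the right one.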

Comment author: Alicorn 26 January 2010 07:20:04PM *  9 points [-]

This is called the "consequentialist doppelganger" phenomenon, when I've heard it described, and it's very, very annoying to non-consequentialists. Yes, you can turn any ethical system into a consequentialism by applying the following transformation:

  1. What would the world be like if everyone followed Non-Consequentialism X?
  2. You should act to achieve the outcome yielded by Step 1.

But this ignores what we might call the point of Non-Consequentialism X, which holds that you should follow it for reasons unrelated to how it will make the world be.
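The two-step transformation above can be written out mechanically. In this sketch, the world that results when everyone follows Non-Consequentialism X is a hypothetical placeholder; the construction just rewards reaching that world and nothing else:

```python
def doppelganger(world_if_everyone_follows_x):
    """Steps 1 and 2: build a utility function that assigns 1 to the
    world where everyone follows Non-Consequentialism X, and 0 to
    every other world."""
    def utility(world):
        return 1 if world == world_if_everyone_follows_x else 0
    return utility

# Hypothetical output of Step 1 for some rule system X.
world_x = "the world where everyone follows X"
u = doppelganger(world_x)

print(u(world_x))                # 1
print(u("some other world"))     # 0
```

Nothing in the construction mentions X's actual reasons -- which is exactly the complaint: the transformation preserves the verdicts while discarding the point.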

Comment author: komponisto 26 January 2010 07:33:20PM 3 points [-]

But this ignores what we might call the point of Non-Consequentialism X, which holds that you should follow it for reasons unrelated to how it will make the world be.

I'm tempted to ask what kind of reasons could possibly fall into such a category -- but we don't have to have that discussion now unless you particularly want to.

Mainly, I just wanted to point out that when whoever-it-was above mentioned "your utility function", you probably should have interpreted that as "your preferences".

Comment author: Blueberry 26 January 2010 07:38:28PM 5 points [-]

I'm tempted to ask what kind of reasons could possibly fall into such a category -- but we don't have to have that discussion now unless you particularly want to.

There should be a "Deontology for Consequentialists" post, if there isn't already.

Comment author: Alicorn 26 January 2010 07:49:29PM 6 points [-]

I might write that.

Comment author: thomblake 26 January 2010 08:01:16PM 5 points [-]

Perhaps I should write "Utilitarianism for Deontologists". Here goes:

"Follow the maxim: 'Maximize utility'".

Comment author: ciphergoth 27 January 2010 08:26:25AM 4 points [-]

Actually, it was exactly the problems with this formulation that I was talking about in the pub with LessWrongers on Saturday. Consequentialism isn't about maximizing anything; that's a deontologist's way of looking at it. Consequentialism says that if action A has a Y better outcome than action B, then action A is better than action B by Y. It follows that the best action is the one with the best outcome, but there isn't some bright crown on the best action compared to which all other actions are dull and tarnished; other actions are worse to exactly the extent to which they bring about worse consequences, that's all.
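This relational framing can be sketched directly: rather than singling out a "best" action, every pair of actions is compared by the difference in the value of their outcomes. The outcome values below are hypothetical:

```python
# Hypothetical values of the outcomes of three actions.
outcome_value = {"A": 100, "B": 75, "C": 74}

def better_by(x, y):
    """How much better action x is than action y (negative if worse)."""
    return outcome_value[x] - outcome_value[y]

print(better_by("A", "B"))  # 25
print(better_by("B", "C"))  # 1: B is only slightly better than C,
                            # not "bright" while C is "tarnished"
```

The best action falls out as a consequence (the one with no positive `better_by` rival), but the basic object is the comparison, not the crown.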

Comment author: Alicorn 26 January 2010 08:03:05PM 3 points [-]

I'd like to see you write Virtue Ethics for Consequentialists, or for Deontologists.

Comment author: Jack 26 January 2010 08:36:18PM 3 points [-]

or for Deontologists.

"Being virtuous is obligatory, being vicious is forbidden."

This feels like cheating.

Comment author: Eliezer_Yudkowsky 26 January 2010 08:12:09PM *  3 points [-]

Virtue Ethics for Consequentialists

"Do that which leads to people being virtuous."

Comment author: Blueberry 26 January 2010 07:53:04PM 1 point [-]

Please do. I'd love to read it.

Comment author: komponisto 26 January 2010 07:51:17PM 0 points [-]

Ha! I was about to say, "I wonder if Alicorn might be interested in writing such a post".

Comment author: Jack 26 January 2010 07:45:00PM *  3 points [-]

I'm tempted to ask what kind of reasons could possibly fall into such a category -- but we don't have to have that discussion now unless you particularly want to.

Not to butt in, but "x is morally obligatory" is a perfectly good reason to do any x. That is the case whether x is exhibiting some virtue, following some rule, or maximizing some end.

Comment author: Blueberry 26 January 2010 07:17:46PM 0 points [-]

You may run into problems trying to create a utility function for some forms of deontology, at least if you're mapping into the real numbers. For instance, some deontologists would say that killing a person has infinite negative utility which can't be cancelled out by any number of positive utility outcomes.

Comment author: komponisto 26 January 2010 07:23:38PM 0 points [-]

That wouldn't be mapping into the real numbers, of course, since infinity isn't a real number.

As I understand it, utility functions are supposed to be equivalence classes of mappings into the real numbers, where two such mappings are said to be equivalent if they are related by a positive affine transformation (x -> ax + b, where a > 0).
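The equivalence can be checked concretely: two utility assignments related by a positive affine transformation (here v = 2u + 3) rank every gamble the same way, which is why they count as the "same" utility function. The outcomes and numbers below are hypothetical:

```python
# A hypothetical utility assignment over three outcomes.
u = {"apple": 0.0, "banana": 6.0, "cherry": 10.0}
# A positive affine transform of u: v = 2u + 3.
v = {outcome: 2 * x + 3 for outcome, x in u.items()}

def expected(utility, gamble):
    """Expected utility of a gamble given as {outcome: probability}."""
    return sum(p * utility[o] for o, p in gamble.items())

sure_banana = {"banana": 1.0}
coin_flip = {"apple": 0.5, "cherry": 0.5}

# Both representations agree: the sure banana beats the coin flip
# (6 > 5 under u, and 15 > 13 under v).
print(expected(u, sure_banana) > expected(u, coin_flip))
print(expected(v, sure_banana) > expected(v, coin_flip))
```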

Comment author: wnoise 02 February 2010 12:20:38AM 0 points [-]

Why do you think this restricts to positive affine transformations, rather than any strictly monotonic transformation?

Comment author: Nick_Tarleton 02 February 2010 12:23:54AM 3 points [-]

Other monotonic transformations don't preserve preferences over gambles.

Comment author: wnoise 02 February 2010 12:45:21AM 0 points [-]

Ah, right, that's what I was missing. Thanks.

Comment author: Jordan 02 February 2010 12:29:26AM 0 points [-]

A strictly monotonic transformation will preserve your preference ordering of states but not your preference ordering for actions to achieve those states. That is, only affine transformations preserve the ordering of expected values of different actions.
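Jordan's point can be demonstrated with a strictly monotonic but non-affine transform (cubing, in this hypothetical sketch): the ordering of individual states is preserved, yet the ordering of expected values -- and hence of actions -- can reverse:

```python
def expected(utility, gamble):
    """Expected utility of a gamble given as {outcome: probability}."""
    return sum(p * utility[o] for o, p in gamble.items())

# Hypothetical utilities over three states.
u = {"bad": 0.0, "ok": 6.0, "great": 10.0}
# Cubing is strictly monotonic but not affine; state ordering survives.
cubed = {outcome: x ** 3 for outcome, x in u.items()}

sure_ok = {"ok": 1.0}
gamble = {"bad": 0.5, "great": 0.5}

# Under u, the sure thing wins (6 > 5); under the cubed version,
# the gamble wins (216 < 500). The action ordering has flipped.
print(expected(u, sure_ok) > expected(u, gamble))
print(expected(cubed, sure_ok) > expected(cubed, gamble))
```

This is why only positive affine transformations leave a von Neumann-Morgenstern utility function "the same": anything else can change which gambles an agent prefers.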

Comment author: Blueberry 26 January 2010 07:27:27PM 0 points [-]

Right, which is why I was saying that some ethical theories can't be expressed by a utility function. And there could be many such incomparable qualities: even adding in infinity and negative infinity may not be enough (though the transfinite ordinals, or the surreal numbers, might be).

I'm surprised at that +b, because that doesn't preserve utility ratios.
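One way to make the earlier "infinite negative utility" idea precise without real-valued infinities is a lexicographic ordering, which is a standard example of preferences that cannot be squeezed into a single real number. A minimal sketch, using the fact that Python tuples compare lexicographically (the "killings" coordinate is a hypothetical stand-in for the deontologist's absolute constraint):

```python
def lex_utility(killings, ordinary_utility):
    """Lexicographic 'utility': any reduction in killings dominates
    any amount of ordinary utility. Tuples compare coordinate by
    coordinate, so the first coordinate always wins ties first."""
    return (-killings, ordinary_utility)

no_kill = lex_utility(killings=0, ordinary_utility=5)
one_kill = lex_utility(killings=1, ordinary_utility=10**9)  # huge ordinary gain

# The single killing dominates regardless of the ordinary gain.
print(no_kill > one_kill)
```

No positive affine transform -- indeed, no real-valued function at all -- reproduces this ordering, which illustrates why such theories resist expression as ordinary utility functions.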

Comment author: komponisto 26 January 2010 07:48:11PM *  1 point [-]

Right, which is why I was saying that some ethical theories can't be expressed by a utility function.

Ah, I see. But I'm still not actually sure that's true, though...see below.

I'm surprised at that +b, because that doesn't preserve utility ratios.

Indeed not; utilities are measured on an interval scale, not a ratio scale. There's no "absolute zero". (I believe Eliezer made a youthful mistake along these lines, IIRC.) This expresses the fact that utility functions are just (scaled) preference orderings.