Unknown comments on The Moral Void - Less Wrong

Post author: Eliezer_Yudkowsky 30 June 2008 08:52AM


Comment author: Unknown 30 June 2008 09:05:39AM 16 points [-]

"I mean... if an external objective morality tells you to kill babies, why should you even listen?"

This is an incredibly dangerous argument. Consider this: "I mean... if some moral argument, whatever the source, tells me to prefer 50 years of torture to any number of dust specks, why should I even listen?"

And we have seen many who literally made this argument.

Comment author: [deleted] 08 February 2012 06:16:51PM *  14 points [-]

Maybe they are right.

People have been demonstrably willing to make everyone live at a lower standard of living rather than let a tiny minority grow obscenely rich and everyone else be moderately well off. In other words we seem to be willing to pay a price for equality. Why wouldn't this work in the other direction? Maybe we prefer to induce more suffering overall if this prevents a tiny minority suffering obscenely.

Too many people seem to think that perfectly equally weighted altruism (everyone who shares the mystical designation of "person" gets an equal weight, and after that you just do calculus to maximize overall "goodness"), which sometimes hides under the word "utilitarianism" on this forum, is anything but another grand moral principle that claims to compactly represent our shards of desire, but fails to. If you wouldn't be comfortable building an AI to follow that rule and only that rule, why are so many people keen on solving all their personal moral dilemmas with it?

Comment author: thomblake 08 February 2012 08:56:13PM 1 point [-]

People have been demonstrably willing to make everyone live at a lower standard of living rather than let a tiny minority grow obscenely rich and everyone else be moderately well off.

Sure, horrible people.

mind-killed

Comment author: [deleted] 08 February 2012 09:16:22PM *  4 points [-]

You do realize that valuing equality in itself to any extent at all is always (because of opportunity cost at least) an example of this:

People have been demonstrably willing to make everyone live at a lower standard of living rather than let a tiny minority grow obscenely rich and everyone else be moderately well off.

But I agree with you in a sense. Historically, lots of horrible people have vastly overpaid (often in blood) for, and overvalued, that particular good according to my values too.

Comment author: thomblake 08 February 2012 09:59:27PM 2 points [-]

You do realize that valuing equality in itself to any extent at all is always (because of opportunity cost at least) an example of this

Yes.

Comment author: [deleted] 08 February 2012 11:19:19PM 1 point [-]

OK, just checking; surprisingly many people miss this. :)

Comment author: DanielLC 20 June 2012 05:44:09AM 5 points [-]

You do realize that valuing equality in itself to any extent at all is always (because of opportunity cost at least) an example of this:

Are you sure?

If you take a concave function, such as a log, of the net happiness of each individual, and maximize the sum, you'd always prefer equality to inequality when net happiness is held constant, and you'd always prefer a higher minimum happiness regardless of inequality.
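A small numerical sketch (Python, with made-up happiness numbers) of how summing a concave transform like log behaves:

```python
import math

def social_welfare(happiness_levels):
    """Sum of log-happiness: apply a concave function to each
    individual's net happiness, then add the results."""
    return sum(math.log(h) for h in happiness_levels)

# Equal vs. unequal split of the same total happiness (20):
# the concave transform makes the equal split score higher.
assert social_welfare([10, 10]) > social_welfare([19, 1])

# Raising the worst-off individual scores higher even though
# the new distribution is more unequal.
assert social_welfare([5, 16]) > social_welfare([4, 4])
```

One caveat to the comment's framing: log-sum welfare prefers equality when total happiness is held constant, but it can still trade a lower minimum for large enough gains elsewhere.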

Comment author: Articulator 27 March 2014 06:24:53AM 1 point [-]

Excellent! Thanks for the mathematical model! I've been trying to work out how to describe this principle for ages.

Comment author: Multiheaded 10 February 2012 06:26:59PM 0 points [-]

Konkvistador, I applaud your thoughtful and weighed approach to the problem of equality. It has been troubling me too, and I'm glad to see that you're careful not to lean in any one direction before observing the wider picture. That's a grave matter indeed.

Comment author: jacoblyles 18 July 2012 11:47:40PM *  1 point [-]

I'm glad I found this comment. I suffer from an intense feeling of cognitive dissonance when I browse LW and read the posts which sound sensible (like this one) and contradictory posts like the dust specks. I hear "don't use oversimplified morality!" and then I read a post about torturing people because summing utilons told you it was the correct answer. Mind=>blown.

Comment author: wedrifid 19 July 2012 11:49:02AM 2 points [-]

I'm glad I found this comment. I suffer from an intense feeling of cognitive dissonance when I browse LW and read the posts which sound sensible (like this one) and contradictory posts like the dust specks. I hear "don't use oversimplified morality!" and then I read a post about torturing people because summing utilons told you it was the correct answer. Mind=>blown.

There is no contradiction between this post and Eliezer's dust specks post.

Comment author: Kenny 05 April 2013 12:48:55AM 0 points [-]

The comment was making the opposite point, namely that some people refuse to accept that there is even a common 'utilon' with which torture and 'dust specks' can be compared.

Comment author: [deleted] 05 April 2013 01:33:24AM 3 points [-]

By what criteria do we judge that there should be a common 'utilon'?

Not VNM, it just says we must be consistent in our assignment of utility to whole monolithic possible worlds. I can be VNM rational and choose specks.
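A minimal sketch of that point (Python; the world labels and utility numbers are invented for illustration): VNM only demands a consistent real-valued utility over whole outcomes, so an agent may rank the specks-world above the torture-world directly.

```python
# A VNM-consistent agent just needs a consistent real-valued
# utility over whole possible worlds; nothing forces the
# "sum over individuals" decomposition.
world_utilities = {
    "status_quo": 0.0,
    "one_person_tortured_50_years": -1000.0,
    "3^^^3_people_get_dust_specks": -1.0,  # assigned directly, not summed
}

def prefer(a, b):
    """Return the world this agent weakly prefers."""
    return a if world_utilities[a] >= world_utilities[b] else b

# This agent consistently chooses specks over torture.
assert prefer("3^^^3_people_get_dust_specks",
              "one_person_tortured_50_years") == "3^^^3_people_get_dust_specks"
```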

Utilitarianism says so, but as far as I can tell, utilitarianism leads to all sorts of repugnant conclusions, and only repugnant conclusions.

Maybe we are only concerned with unique experience, and all the possible variation in dust-speck-experience-space is covered by the time you get to 1000.

Comment author: TimS 05 April 2013 01:43:23AM 0 points [-]

I can be VNM rational and choose specks.

I'm confused. I'm not a mathematician, but I understood this post as saying a good VNM agent has a continuous utility function.

And my takeaway from the torture/specks thing was that having a continuous utility function requires choosing torture.

I assume I'm misunderstanding the terminology somewhere. If you are willing, can you explain my misunderstanding?

Comment author: [deleted] 05 April 2013 02:22:44AM 2 points [-]

I'm confused. I'm not a mathematician, but I understood this post as saying a good VNM agent has a continuous utility function.

hnnnng. What? Did you link the wrong article? A VNM agent has a utility function (a function from outcomes to reals), but the theorem says nothing more. "Continuous" in particular requires your outcome space to have a topology, which it may not, and even if it does, there's still nothing in VNM that would require continuity.

And my take away from the torture/specks thing was that having a continuous utility function requires choosing torture.

Not necessarily. To choose torture by the usual argument, the following must hold:

  1. You can assign partial utilities separately to amount of torture and amount of dust-speck-eyes, where "partial utilities" means roughly that your final utility function is a sum of the partial utilities.

  2. The partial utilities are roughly monotonic overall (increasing or decreasing, as opposed to having a maximum or minimum, or oscillating) and unbounded.

  3. Minor assumptions like more torture is bad, and more dust specks is bad, and there are possibilities in your outcome space with 3^^^^3 (or sufficiently many) dust speck eyes. (if something is not in your outcome space, it better be strictly impossible, or you are fucked).

I am very skeptical of 1. Once you look at functions as "arbitrary maps from set A to set B", special things like this kind of decomposability seem very particular and very special, requiring a lot more evidence to locate than anyone seems to have gathered. As far as I can tell, the linear-separability stuff is an artifact of people intuitively thinking of the space of functions as the sort of thing you can write by composing primitives (i.e. computer code or math).

I am also skeptical of 2 because, in general, unbounded utility functions seem to produce repugnant conclusions. See all the problems with utilitarianism, Pascal's mugging, etc.
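To make assumptions 1 and 2 concrete, here is a toy sketch (Python; the weights and population size are hypothetical stand-ins): an unbounded additive specks term is eventually worse than any fixed torture disutility, while a bounded specks term never is.

```python
import math

TORTURE_DISUTILITY = -1e9  # fixed cost of 50 years of torture (hypothetical)

def unbounded_specks(n):
    """Assumptions 1+2: additive, monotone, unbounded partial utility."""
    return -1.0 * n

def bounded_specks(n):
    """A bounded alternative: disutility saturates at -100, so no
    number of specks ever outweighs the torture term."""
    return -100.0 * (1 - math.exp(-n / 1e6))

n = 10**12  # stand-in for 3^^^3 dust-speck recipients

# Under the unbounded partial utility, torture is the lesser evil...
assert unbounded_specks(n) < TORTURE_DISUTILITY
# ...but with the bounded specks term, specks stay preferable.
assert bounded_specks(n) > TORTURE_DISUTILITY
```

Swapping in the bounded term is exactly the move the skepticism about 2 points at: the torture conclusion follows from unboundedness plus additivity, not from VNM itself.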

As Eliezer says (but doesn't seem to take seriously), if a utility function gives utility assignments that I disagree with, I shouldn't use it. It doesn't matter how many nice arguments you can come up with that declare the beauty of the internal structure of the utility function (which is a type error, by the way); if it doesn't encode my idealized preferences, it's junk.

The only criterion by which a utility function can be judged is the preferences it produces.

That said, it may be that we will have to enforce certain consistencies on our utilities to capture most of our preferences, but that must be done strictly by looking at preference implications. I tried to communicate this in "Pinpointing Utility", but it really requires its own post. So many posts to write, and no time!

I assume I'm misunderstanding the terminology somewhere. If you are willing, can you explain my misunderstanding?

You may be confused by the continuity axiom in VNM, which is about your preferences over probability mixtures of outcomes, not over the actual outcomes themselves.
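For reference, a standard statement of the VNM continuity (Archimedean) axiom, phrased over lotteries rather than outcomes:

```latex
% If A is weakly preferred to B, and B to C, then some probability
% mixture of A and C is indifferent to B:
A \succeq B \succeq C \;\Rightarrow\; \exists\, p \in [0,1] :\quad
pA + (1-p)C \,\sim\, B
```

This constrains how preferences respond to varying the mixing probability p; it imposes no topological continuity on a utility function over the outcome space itself.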