chaosmosis comments on Logical Pinpointing - Less Wrong

62 points. Post author: Eliezer_Yudkowsky, 02 November 2012 03:33PM


Comment author: chaosmosis 01 November 2012 07:53:25PM *  0 points

I do this, except I use only my own utility and not other agents'. For me, outside of empathy, I have no more reason to help other people achieve their values than I do to help the Babyeaters eat babies. The utility functions of others don't inherently connect to my motivational states, and grafting the values of others onto my decision calculus seems weird.

I think most people become utilitarians instead of egoists because they empathize with other people, while never noticing that, to the extent this empathy moves them, it is their own value and sits within their own utility function. They then build the abstract moral theory of utilitarianism to formalize these intuitions, but because they've overlooked the egoist intermediary step, the model is slightly off and sometimes leads to conclusions that contradict egoist impulses or egoist conclusions.
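
A minimal Python sketch of that "egoist intermediary step," with empathy entering as a weighted term in the egoist's own utility function (the names, weight, and outcome structure here are purely illustrative assumptions, not anyone's actual values):

    # Toy model: empathy as a weighted term inside my own utility function.
    # The weight and outcome structure are illustrative, not a real theory.
    def my_utility(outcome, others, empathy_weight=0.3):
        selfish = outcome["my_payoff"]                               # what I get directly
        empathic = empathy_weight * sum(u(outcome) for u in others)  # others' utilities, discounted
        return selfish + empathic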

Comment author: Peterdjones 01 November 2012 08:18:28PM 1 point

Or they adopt utilitarianism, or some other non-subjective system, because they value having a moral system that can apply to, persuade, and justify itself to others. (Or in short: they value having a moral system.)

Comment author: chaosmosis 01 November 2012 08:42:06PM -1 points

In my view there's a difference between having a moral system (defined as something that tells you what is right and what is wrong) and having a system that you use to justify yourself to others. That difference generally isn't relevant, because humans tend to empathize with each other and have a very close cluster of values, so there are lots of common interests.

Comment author: Peterdjones 01 November 2012 08:46:41PM *  1 point
Comment author: chaosmosis 01 November 2012 08:48:56PM *  -1 points

My computer won't load the website because it's apparently having issues with Flash; can you please summarize? If you're just making a distinction between yourself and your beliefs, sure, I'll concede that. I was a bit sloppy with my terminology there.

Comment author: Peterdjones 01 November 2012 09:05:00PM 1 point

It's not "my beliefs" either. "Justification is the reason why someone (properly) holds the belief, the explanation as to why the belief is a true one, or an account of how one knows what one knows."

Comment author: chaosmosis 01 November 2012 10:20:36PM -1 points

Okay. I think I've explained the justification then. Specific moral systems aren't necessarily interchangeable from person to person, but they can still be explained and justified in a general sense. "My values tell me X, therefore X is moral" is the form of justification that I've been defending.

Comment author: Peterdjones 02 November 2012 12:15:52AM 1 point

Yet again, you run into the problem that you need it to be wrong for other people to murder you, which you can't justify with your values alone.

Comment author: chaosmosis 02 November 2012 12:42:49AM -1 points

No I don't. I need to be stronger than the people who want to murder me, or to live in a society that deters murder. If someone wants to murder me, it's probably not the best strategy to start trying to convince them that they're being immoral.

You're making an argumentum ad consequentiam. You don't decide metaethical issues by deciding what kind of morality it would be ideal to have and then working backwards. Just because you don't like the type of system that morality leads to overall doesn't mean that you're justified in ignoring other moral arguments.

The benefit of my system is that it's right for me to murder people if I want to murder them. This means I can do things like self-defense or killing Nazis and pedophiles with minimal moral damage. This isn't a reason to support my system, but it is kind of neat.

Comment author: Peterdjones 02 November 2012 01:08:05AM *  1 point

No I don't. I need to be stronger than the people who want to murder me,

That's giving up on morality, not defending subjective morality.

or to live in a society that deters murder.

Same problem. That's either group morality or non-morality.

If someone wants to murder me, it's probably not the best strategy to start trying to convince them that they're being immoral.

I didn't say it was the best practical strategy. The moral and the practical are different things. I am saying that for morality to be what it is, it needs to offer people reasons not to act on some of their first-order values. That morality is not legality or brute force or a magic spell is not relevant.

You're making an argumentum ad consequentiam. You don't decide metaethical issues by deciding what kind of morality it would be ideal to have and then working backwards.

I am starting with what kind of morality it would be adequate to have. If you can't bang in a nail with it, it isn't a hammer.

Just because you don't like the type of system that morality leads to overall

Where on earth did I say that?

The benefit of my system is that it's right for me to murder people if I want to murder them.

That's not a benefit, because murder is just the sort of thing morality is supposed to condemn. Hammers are for nails, not screws, and morality is not for "I can do whatever I want, regardless".

This means I can do things like self-defense

Justifiable self-defense is not murder. You seem to have confused ethical objectivism (morality is not just personal preference) with ethical absolutism (moral principles have no exceptions). Read yer Wikipedia!

Comment author: DaFranker 01 November 2012 08:31:51PM -1 points

Your usage of the words "subjective" and "objective" is confusing.

Utilitarianism doesn't forbid each individual person (agent) from valuing different things (having a different utility function). As such, there is no single, universal rule that can be applied to all possible agents to maximize "morality" (total summed utility).

It is "objective" in the sense that if you know all the utility functions, and try to achieve the maximum possible total utility, this is the best thing to do from an external standpoint. It is also "objective" in the sense that when your own utility is maximized, that is the best possible thing that you could have, regardless of whatever anyone might think about it.

However, it is also "subjective" in the sense that each individual can have their own utility function, and it can be whatever you could imagine. There are no restrictions in utilitarianism itself. My utility is not your utility, unless your utility function has a component that values my utility and you have full knowledge of my utility (or even if you don't, but that's a theoretical nitpick).
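
As a minimal sketch of that "objective" reading, assuming (unrealistically) that every agent's utility function is known and can be queried directly:

    # Sketch: with all utility functions in hand, the utilitarian-best
    # outcome is whichever one maximizes the total summed utility.
    def best_outcome(outcomes, utility_functions):
        return max(outcomes, key=lambda o: sum(u(o) for u in utility_functions))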

Utilitarianism alone doesn't apply to, persuade, or justify to anyone else any action that affects their values. It can be abused for that, but that's not what it's there for, AFAIK.

Comment author: Peterdjones 01 November 2012 08:37:05PM 1 point

Utilitarianism alone doesn't apply to, persuade, or justify to anyone else any action that affects their values. It can be abused for that, but that's not what it's there for,

Are you saying that no form of utilitarianism will ever conclude that one person should sacrifice some value for the benefit of the many?

Comment author: DaFranker 01 November 2012 08:44:36PM *  -1 points

No form of the official theory in the papers I read, at the very least.

Many applications or implementations of utilitarianism or utilitarian(-like) systems do, however, enforce rules saying that if one agent's weighted utility loss improves the total weighted utility of multiple other agents by a significant margin, that is the right thing to do. The margin's size, the specific numbers, and the uncertainty values vary by system.

I've never seen a system that would enforce such rules without some kind of weighting function over the utilities to correct for limited information, uncertainty, and diminishing-returns-like problems.
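
A minimal sketch of such a weighted rule; the weights and margin are placeholders rather than values from any particular system:

    # Sketch: endorse one agent's utility loss only when the weighted
    # gains to the others exceed the weighted loss by a set margin.
    # Weights and margin stand in for a system's corrections for
    # uncertainty, limited information, and diminishing returns.
    def sacrifice_is_right(loss, gains, w_loss=1.0, w_gain=1.0, margin=0.1):
        return w_gain * sum(gains) - w_loss * loss > margin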

Comment author: Peterdjones 02 November 2012 12:24:51AM 1 point

No form of the official theory in the papers I read, at the very least.

Many applications or implementations of utilitarianism or utilitarian(-like) systems do, however, enforce rules saying that if one agent's weighted utility loss improves the total weighted utility of multiple other agents by a significant margin, that is the right thing to do. The margin's size, the specific numbers, and the uncertainty values vary by system.

It seems to me that these two paragraphs contradict each other. Do you think "he should" means something different from "it is right for him to do so"?

Comment author: DaFranker 02 November 2012 01:06:47PM 0 points

No, those two phrases don't differ in any major way in utilitarian systems.

It seems I was confused when trying to answer your question. Utilitarianism can be seen as an abstract system of rules to compute stuff.

Certain ways of applying those rules to compute stuff are also called utilitarianism, including the philosophy that the maximum total utility of a population should take precedence over the utility of any one individual.

If utilitarianism is simply the set of rules you use to compute which things are best for one single purely selfish agent, then no, nothing concludes that the agent should sacrifice anything. If you adhere to the classical philosophy related to those rules, then yes, any human will conclude what I've said in that second paragraph in the grandparent (or something similar). This latter (the philosophy) is historically what appeared first, and is also what's presented on Wikipedia's page on utilitarianism.

Comment author: Peterdjones 02 November 2012 02:09:32PM 2 points

If utilitarianism is simply the set of rules you use to compute which things are best for one single purely selfish agent,

Isn't that decision theory?

Comment author: chaosmosis 01 November 2012 08:43:27PM *  0 points

I think specific applications of utilitarianism might say that modifying the values of yourself or of others would be beneficial even in terms of your current utility function.

Comment author: DaFranker 01 November 2012 08:48:01PM *  1 point

Yeah.

Things start getting interesting when not only are some values implemented with variable weights within the function, but the functions themselves become part of the calculation, and utility functions become modular and partially recursive.

I'm currently convinced that there's at least one (perhaps well-hidden) such recursive module of utility-for-utility-functions built into the human brain, and that clever hacking of this module might be very beneficial in the long run.
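
One speculative way to picture such a module: a function that scores utility functions themselves, one recursion level deep (everything here is a made-up illustration, not a claim about the brain's actual mechanism):

    # Speculative sketch: score a candidate utility function by how
    # closely its outputs agree with a reference function on sample
    # outcomes; the recursion stops at one level to avoid regress.
    def utility_of_function(candidate, reference, sample_outcomes):
        return -sum(abs(candidate(o) - reference(o)) for o in sample_outcomes)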

Comment author: DaFranker 01 November 2012 08:12:05PM *  -1 points

I share this view. When I appear to forfeit some utility in favor of someone else, it's because I'm actually maximizing my own utility by deriving some from the knowledge that I'm improving the utility of other agents.

Other agents' utility functions and values are not directly valued, at least not among humans. Some (most?) of us do indirectly value improving the values and utility of other agents, either as an instrumental step or as a terminal value. Because of this, I believe most people who hold or profess a belief in the "innate goodness of humanity" are mind-projecting their own value-of-others'-utility.

Whether this is a true value actually shared by all humans is unknown to me. It is possible that those who appear not to have this value are simply broken in some temporary, environment-based manner. It's also possible that this is a purely environment-learned value that becomes "terminal" in the process of being trained into the brain's reward centers, due to its instrumental value in many situations.