Peterdjones comments on Logical Pinpointing - Less Wrong

62 Post author: Eliezer_Yudkowsky 02 November 2012 03:33PM


Comment author: Peterdjones 01 November 2012 05:03:21PM 0 points [-]

the concept of objective value would make no sense,

if your moral values aren't objective, why would anyone else be beholden to them? And how could they be moral if they don't regulate others' behaviour?

Logic might not be stable or it might change later, we don't have any way of knowing

Why would it change, absent our changing the axioms? Do you think it is part of the universe?

Comment author: DaFranker 01 November 2012 05:19:24PM *  1 point [-]

if your moral values aren't objective, why would anyone else be beholden to them? And how could they be moral if they don't regulate others' behaviour?

To the first question: Possibly because your moral values arose from a process that was almost exactly the same for other individuals, and as such it's reasonable to infer that their moral values might be rather similar rather than completely alien?

To the second: "And how could they be (blank?) if they don't regulate others' behaviour?", by which I mean, what do you mean by "moral"? What makes a value a "moral" value or not in this context?

I'm not sure why it should be necessary for a moral value to regulate behaviour across individuals in order to be valid.

Comment author: Peterdjones 01 November 2012 05:31:27PM *  1 point [-]

Possibly because your moral values arose from a process that was almost exactly the same for other individuals,

Why describe them as subjective when they are intersubjective?

I'm not sure why it should be necessary for a moral value to regulate behaviour across individuals in order to be valid.

It would be necessary for them to be moral values and not something else, like aesthetic values. Because morality is largely there to regulate interactions between individuals. That's its job. Aesthetics is there to make things beautiful, logic is there to work things out...

Comment author: TheOtherDave 01 November 2012 05:58:21PM 0 points [-]

morality is largely to regulate interactions between individuals. That's its job.

I don't want to get into a discussion of this, but if there's an essay-length-or-less explanation you can point to somewhere of why I ought to believe this, I'd be interested.

Comment author: Peterdjones 01 November 2012 06:03:05PM 0 points [-]

I don't see that "morality is largely to regulate interactions between individuals" is contentious. Did you have another job in mind for it?

Comment author: TheOtherDave 01 November 2012 06:12:51PM 1 point [-]

Well, since you ask: identifying right actions.

But, as I say, I don't want to get into a discussion of this.

I certainly agree with you that if there exists some thing whose purpose is to regulate interactions between individuals, then it's important that that thing be compelling to all (or at least most) of the individuals whose interactions it is intended to regulate.

Comment author: Peterdjones 01 November 2012 06:26:34PM *  0 points [-]

Well, since you ask: identifying right actions.

Is that an end in itself?

I certainly agree with you that if there exists some thing whose purpose is to regulate interactions between individuals, then it's important that that thing be compelling to all (or at least most) of the individuals whose interactions it is intended to regulate.

Well, the law compels those who aren't compelled by exhortation. But laws need justification.

Comment author: TheOtherDave 01 November 2012 09:32:34PM 0 points [-]

Is that an end in itself?

Not for me, no.

Is regulating interactions between individuals an end in itself?

Comment author: Peterdjones 02 November 2012 12:17:15AM 0 points [-]

Do you think it is pointless? Do you think it is a prelude to something else?

Comment author: TheOtherDave 02 November 2012 12:19:12AM 1 point [-]

I think identifying right actions can be, among other things, a prelude to acting rightly.

Is regulating interactions between individuals an end in itself?

Comment author: chaosmosis 01 November 2012 07:45:30PM -2 points [-]

Is that an end in itself?

What does that concept even mean? Are you asking if there's a moral obligation to improve one's own understanding of morality?

Well, the law compels those who aren't compelled by exhortation. But laws need justification.

The justification for laws can be a combination of pragmatism and the values of the majority.

Comment author: Peterdjones 01 November 2012 08:04:13PM 0 points [-]

Does it serve a purpose by itself? Judging actions to be right or wrong is usually the prelude to handing out praise and blame, reward and punishment.

The justification for laws can be a combination of pragmatism and the values of the majority.

If the values of the majority aren't justified, how does that justify laws?

Comment author: TheOtherDave 01 November 2012 09:33:03PM 1 point [-]

Judging actions to be right or wrong is usually the prelude to handing out praise and blame, reward and punishment.

Also, sometimes it's a prelude to acting rightly and not acting wrongly.

Comment author: chaosmosis 01 November 2012 08:07:52PM -1 points [-]

Nope. An agent without a value system would have no purpose in creating a moral system. An agent with one might find it intrinsically valuable, but I personally don't. I do find it instrumentally valuable.

Laws are justified because subjective desires are inherently justified because they're inherently motivational. Many people reverse the burden of proof, but in the real world it's your logic that has to justify itself to your values rather than your values that have to justify themselves to your logic. That's the way we're designed and there's no getting around it. I prefer it that way and that's its own justification. Abstract lies which make me happy are better than truths that make me sad because the concept of better itself mandates that it be so.

Comment author: DaFranker 01 November 2012 06:09:48PM 0 points [-]

I actually see that as counter-intuitive.

"Morality" is indeed being used to regulate individuals by some individuals or groups. When I think of morality, however, I think "greater total utility over multiple agents, whose value systems (utility functions) may vary". Morality seems largely about taking actions and making decisions which achieve greater utility.

Comment author: chaosmosis 01 November 2012 07:53:25PM *  0 points [-]

I do this, except I only use my own utility and not other agents'. For me, outside of empathy, I have no more reason to help other people achieve their values than I do to help the Babyeaters eat babies. The utility functions of others don't inherently connect to my motivational states, and grafting the values of others onto my decision calculus seems weird.

I think most people become utilitarians instead of egoists because they empathize with other people, while never seeing the fact that to the extent that this empathy moves them it is their own value and within their own utility function. They then build the abstract moral theory of utilitarianism to formalize their intuitions about this, but because they've overlooked the egoist intermediary step the model is slightly off and sometimes leads to conclusions which contradict egoist impulses or egoist conclusions.

Comment author: Peterdjones 01 November 2012 08:18:28PM 1 point [-]

Or they adopt utilitarianism, or some other non-subjective system, because they value having a moral system that can apply to, persuade, and justify itself to others. (Or in short: they value having a moral system.)

Comment author: chaosmosis 01 November 2012 08:42:06PM -1 points [-]

In my view there's a difference between having a moral system (defined as something that tells you what is right and what is wrong) and having a system that you use to justify yourself to others. That difference generally isn't relevant because humans tend to empathize with each other and humans have a very close cluster of values so there are lots of common interests.

Comment author: Peterdjones 01 November 2012 08:46:41PM *  1 point [-]
Comment author: DaFranker 01 November 2012 08:31:51PM -1 points [-]

Your usage of the words "subjective" and "objective" is confusing.

Utilitarianism doesn't forbid that each individual person (agent) have different things they value (utility functions). As such, there is no universal specific simple rule that can apply to all possible agents to maximize "morality" (total sum utility).

It is "objective" in the sense that if you know all the utility functions, and try to achieve the maximum possible total utility, this is the best thing to do from an external standpoint. It is also "objective" in the sense that when your own utility is maximized, that is the best possible thing that you could have, regardless of whatever anyone might think about it.

However, it is also "subjective" in the sense that each individual can have their own utility function, and it can be whatever you could imagine. There are no restrictions in utilitarianism itself. My utility is not your utility, unless your utility function has a component that values my utility and you have full knowledge of my utility (or even if you don't, but that's a theoretical nitpick).

Utilitarianism alone doesn't apply to, persuade, or justify any action that affects values to anyone else. It can be abused as such, but that's not what it's there for, AFAIK.
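The "objective" step described here — each agent keeps its own utility function, and the external standpoint simply maximizes the sum — can be sketched in a few lines of Python (a toy illustration; the agents, actions, and payoffs are invented for this example, not taken from the thread):

```python
# Toy utilitarian aggregation over agents with *different* utility functions.
# All names and numbers here are hypothetical.

def total_utility(action, utility_functions):
    """Sum each agent's own valuation of the same action."""
    return sum(u(action) for u in utility_functions)

# Two agents who value different things about the same action.
alice = lambda action: action.get("apples", 0) * 2   # Alice only values apples
bob = lambda action: action.get("music", 0) * 3      # Bob only values music

actions = [
    {"apples": 3, "music": 0},   # total: 6 + 0 = 6
    {"apples": 1, "music": 2},   # total: 2 + 6 = 8
]

# The "objective" step: pick the action maximizing total summed utility.
best = max(actions, key=lambda a: total_utility(a, [alice, bob]))
```

Note that nothing in the sketch constrains what `alice` or `bob` may value — the "subjective" part — while the aggregation rule itself is the same for everyone.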

Comment author: Peterdjones 01 November 2012 08:37:05PM 1 point [-]

Utilitarianism alone doesn't apply to, persuade, or justify any action that affects values to anyone else. It can be abused as such, but that's not what it's there for,

Are you saying that no form of utilitarianism will ever conclude that one person should sacrifice some value for the benefit of the many?

Comment author: chaosmosis 01 November 2012 08:43:27PM *  0 points [-]

I think specific applications of utilitarianism might say that modifying the values of yourself or of others would be beneficial even in terms of your current utility function.

Comment author: DaFranker 01 November 2012 08:12:05PM *  -1 points [-]

I share this view. When I appear to forfeit some utility in favor of someone else, it's because I'm actually maximizing my own utility by deriving some from the knowledge that I'm improving the utility of other agents.

Other agents' utility functions and values are not directly valued, at least not among humans. Some (most?) of us just happen to value improving the value and utility of other agents indirectly, either as an instrumental step or a terminal value. Because of this, I believe most people who have/profess the belief of an "innate goodness of humanity" are mind-projecting their own value-of-others'-utility.

Whether this is a true value actually shared by all humans is unknown to me. It is possible that those who appear not to have this value are simply broken in some temporary, environment-based manner. It's also possible that this is a purely environment-learned value that becomes "terminal" in the process of being trained into the brain's reward centers due to its instrumental value in many situations.

Comment author: Decius 02 November 2012 03:29:31AM -1 points [-]

Because morality is largely to regulate interactions between individuals. That's its job.

You are anthropomorphizing concepts. Morality is a human artifact, and artifacts have no more purpose than natural objects.

Morality is a useful tool to regulate interactions between individuals. There are efforts to make it a better tool for that purpose. That does not mean that morality should be used to regulate interactions.

Comment author: Peterdjones 02 November 2012 08:37:09AM 1 point [-]

You are anthropomorphizing concepts. Morality is a human artifact, and artifacts have no more purpose than natural objects.

Human artifacts are generally created to do jobs, e.g. hammers.

Morality is a useful tool

Tool. Like I said.

That does not mean that morality should be used to regulate interactions.

Does that mean you have a better tool in mind, or that interactions don't need regulation?

Comment author: Decius 02 November 2012 02:52:41PM -1 points [-]

If I put a hammer under a table to keep the table from wobbling, am I using a tool or not? If the hammer is the only object within range that is the right size for the table, and there is no task which requires a weighted lever, is the hammer intended to balance the table simply by virtue of being the best tool for the job?

Fit-for-task is a different quality than purpose. Hammers are useful tools to drive nails, but poor tools for determining what nails should be driven. There are many nails that should not be driven, despite the presence of hammers.

Comment author: Peterdjones 02 November 2012 03:00:21PM 1 point [-]

If I put a hammer under a table to keep the table from wobbling, am I using a tool or not?

If you can't bang in nails with it, it isn't a hammer. What you can do with it isn't relevant.

There are many nails that should not be driven, despite the presence of hammers.

???

So we can judge things morally wrong, because we have a tool to do the job, but we shouldn't in many cases, because...? (And what kind of "shouldn't" is that?)

Comment author: Decius 03 November 2012 12:36:46AM 1 point [-]

If you can't bang in nails with it, it isn't a hammer. What you can do with it isn't relevant.

By that logic, the absence of nails makes the weighted lever not a hammer. I think that hammerness is intrinsic and not based on the presence of nails; likewise, morality can exist when there is only one active moral agent.

Comment author: DaFranker 02 November 2012 03:09:51PM *  0 points [-]

The metaphor was that you could, in principle, drive nails literally everywhere you can see, including in your brain. Will you agree that one should not drive nails literally everywhere, but only in select locations, using the right type of nail for the right location? If you don't, this part of the conversation is not salvageable.

Comment author: Peterdjones 02 November 2012 03:18:44PM *  2 points [-]

What is that supposed to be analogous to? If you have a workable system of ethics, then it doesn't make judgments willy-nilly, any more than a workable system of logic allows quodlibet.

Comment author: DaFranker 02 November 2012 03:29:13PM 0 points [-]

The metaphor was that you could, in principle, make rules and laws for literally any possible action, including living. Will you agree that one should not make fixed rules for literally all actions, but only for select high-negative-impact ones, using the right type of rule for the right action?

(Edited for explicit analogy.)

Basically, the fact that you have a morality (hammer) that happens to be convenient for making laws and rules of interactions (balancing the table) doesn't mean that morality is necessarily the best and intended tool for making rules, or that morality itself tells you what you should make laws about, or even that you should make laws in the first place.

Comment author: DaFranker 01 November 2012 05:35:57PM -2 points [-]

Why describe them as subjective when they are intersubjective?

Because they're not written on a stone tablet handed down to Humanity from God the Holy Creator, or derived from some other verifiable, falsifiable and physical fact of the universe independent of humans? And because there are possible variations within the value systems, rather than them being perfectly uniform and identical across the entire species?

I have warning lights that there's an argument about definitions here.

Comment author: Peterdjones 01 November 2012 05:47:27PM 1 point [-]

Because they're not written on a stone tablet handed down to Humanity from God the Holy Creator, or derived from some other verifiable, falsifiable and physical fact of the universe independent of humans?

That would make them not-objective. Subjective and intersubjective remain as options.

And because there are possible variations within the value systems, rather than them being perfectly uniform and identical across the entire species?

Then, again, why would anyone else be beholden to my values?

Comment author: DaFranker 01 November 2012 05:55:16PM *  0 points [-]

Because valuing others' subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.

If one posits that by working together we can achieve a utopia where each individual's values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others' values, would it not follow that it's in everyone's best interests for everyone to build and follow such models?

The free-loader problem is an obvious downside of the above simplification, but that and other issues don't seem to be part of the present discussion.
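The claim that acting as if one valued others' values is "often a winning strategy", together with the free-loader caveat, is exactly the structure of the textbook Prisoner's Dilemma. A minimal sketch, using standard textbook payoffs (nothing here is specific to the thread):

```python
# Standard one-shot Prisoner's Dilemma payoffs: (row player, column player).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Acting as if you valued the other's payoff (mutual cooperation) beats
# mutual selfishness in total: 6 vs 2.
mutual_coop = sum(PAYOFFS[("cooperate", "cooperate")])
mutual_defect = sum(PAYOFFS[("defect", "defect")])

# The free-loader problem: defecting against a cooperator pays more
# individually (5 > 3), which is why the cooperative strategy is unstable
# without some enforcement or repeated play.
freeload = PAYOFFS[("defect", "cooperate")][0]
cooperate_payoff = PAYOFFS[("cooperate", "cooperate")][0]
```

The table makes both halves of the point at once: cooperation is jointly better, while free-loading remains individually tempting.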

Comment author: Peterdjones 01 November 2012 06:11:28PM 1 point [-]

Because valuing others' subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.

That doesn't make them beholden--obligated. They can opt not to play that game. They can opt not to value winning.

If one posits that by working together we can achieve an utopia where each individual's values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others' values, would it not follow that it's in everyone's best interests for everyone to build and follow such models?

Only if they achieve satisfaction for individuals better than behaving selfishly would. A utopia that is better on average or in total need not be better for everyone individually.

Comment author: DaFranker 01 November 2012 06:21:18PM *  -1 points [-]

Could you taboo "beholden" in that first sentence? I'm not sure the "feeling of moral duty born from guilt" I associate with the word "obligated" is quite what you have in mind.

They can opt not to play that game. They can opt not to value winning.

Within context, you cannot opt to not value winning. If you wanted to "not win", and the preferred course of action is to "not win", this merely means that you had a hidden function that assigned greater utility to a lower apparent utility within the game.

In other words, you just didn't truly value what you thought you valued, but some other thing instead, and you end up having in fact won at your objective of not winning that sub-game within your overarching game of opting to play the game or not (the decision to opt to play the game or not is itself a separate higher-tier game, which you have won by deciding to not-win the lower-tier game).

A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.

(sorry if I'm arguing a bit by definition with the utopia thing, but my premise was that the utopia brings each individual agent's utility to its maximum possible value if there exists a maximum for that agent's function)
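The "hidden function" argument above — that opting not to win a game just reveals a higher-tier utility function that assigned value to not-winning — can be made concrete with a toy two-tier model (all structure and numbers are invented for illustration):

```python
# Toy two-tier model: the inner game scores "winning" highly, but an
# overarching (higher-tier) utility function assigns extra value to
# declining to win, for reasons outside the inner game.

def game_utility(outcome):
    """Apparent utility inside the sub-game."""
    return {"win": 10, "lose": 0}[outcome]

def overall_utility(outcome, value_of_not_winning=15):
    """The agent's real function: inner payoff plus a hidden term."""
    bonus = value_of_not_winning if outcome == "lose" else 0
    return game_utility(outcome) + bonus

# The agent "opts not to win" the inner game precisely because its
# overall function ranks losing higher: 0 + 15 > 10 + 0.
choice = max(["win", "lose"], key=overall_utility)
```

On this model, "choosing to not-win" the sub-game is itself a win at the higher tier, which is the sense in which the hidden function dissolves the apparent exception.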

Comment author: Peterdjones 01 November 2012 06:43:57PM 1 point [-]

Within context, you cannot opt to not value winning. If you wanted to "not win", and the preferred course of action is to "not win", this merely means that you had a hidden function that assigned greater utility to a lower apparent utility within the game.

Games emerge where people have things other people value. If someone doesn't value those sorts of things, they are not going to play the game.

A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.

I don't see where higher-tier functions come in.

You are assuming that a utopia will maximise everyone's value individually AND that values diverge. That's a tall order.

Comment author: chaosmosis 01 November 2012 07:59:43PM *  0 points [-]

I wouldn't let my values be changed if doing so would thwart my current values. I think you're contending that the utopia would satisfy my current values better than the status quo would, though.

In that case, I would only resist the utopia if I had a deontic prohibition against changing my values (I don't have very strong ones, but I think they're in here somewhere, and for some things). You would call this a hidden utility function, but I don't think that adequately models the idea that humans are satisficers and not perfect utilitarians. Deontology is sometimes a way of identifying satisficing conditions for human behavior; in that sense I think it can be a much stronger argument.

Even supposing that we were perfect utilitarians, if I placed more value on maintaining my current values than I do on anything else, I would still reject modifying myself and moving towards your utopia.

Comment author: Peterdjones 01 November 2012 08:08:02PM 1 point [-]

Do you think the utopia is feasible?