Juno_Watt comments on Privileging the Question - Less Wrong

102 Post author: Qiaochu_Yuan 29 April 2013 06:30PM


Comment author: Juno_Watt 07 May 2013 06:46:53PM *  0 points [-]

I think morality is behaving so as to take into account the values and preferences of others as well as one's own. You can succeed or fail at that, hence "accurate".

Morality may manifest in the form of a feeling for many people, but not for everybody, and not all feelings are equal. So I don't think the feeling is inherent, or definitional.

I don't think the sprout analogy works, because your feeling that you don't like sprouts doesn't seriously affect others, but the psychopath's fondness for murder does.

The feelings that are relevant to morality are the empathic ones, not personal preferences. That is a clue that morality is about behaving so as to take into account the values and preferences of others as well as one's own.

If you think morality is the same as a personal preference...what makes it morality? Why don't we just have one word and one way of thinking?

Comment author: someonewrongonthenet 08 May 2013 07:05:22PM *  0 points [-]

what makes it morality? Why don't we just have one word and one way of thinking?

Because they feel different to us from the inside - for the same reason that we separate "thinking" and "feeling" even though in the grand scheme of things they are both ways to influence behavior.

Mathematical statements aren't empirical facts either, but convergence is uncontroversial there.

In Math, empirical evidence is replaced by axioms. In Science, the axioms are the empirical evidence.

The point is that all rational agents will converge upon mathematical statements, and will not converge upon moral statements. Do you disagree?

Are you quite sure that morality isn't implicit in the logic of how-a-society-of-entities-with-varying-preferences-manages-to-rub-along?

I think morality is behaving so as to take into account the values and preferences of others as well as one's own

I'm very, very sure that my morality doesn't work that way.

Imagine you lived on a world with two major factions, A and B.

A has a population of 999999. B has a population of 1000.

Every individual in A has a very mild preference for horrifically torturing B, and the motivation is sadism and hatred. The torture and slow murder of B is a bonding activity for A, and the shared hatred keeps the society cohesive.

Every individual in B has a strong, strong preference not to be tortured, but it doesn't even begin to outweigh the collective preferences of A.

From the standpoint of preference utilitarianism, this scenario is analogous to Torture vs. Dust Specks. Preference Utilitarians choose torture, and a good case could be made even under good old human morality to choose torture as the lesser of two evils. That is a problem in which I'd give serious weight to choosing torture.

Preference utilitarian agents would let A torture B - "shut up and multiply". However, from the standpoint of my human morality, this scenario is very different from torture vs. dust specks, and I wouldn't waste even a fraction of a second deciding what is right here. Torture for the sake of malice is wrong (to me) and it really doesn't matter what everyone else's preferences are - if it's in my power, I'm not letting A torture B!
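
To make the "shut up and multiply" arithmetic explicit, here is a minimal sketch of the aggregation a preference utilitarian might do in this scenario. The per-person utility numbers are hypothetical placeholders - the scenario only fixes the populations, not the strength of anyone's preference:

```python
# Hypothetical preference-utilitarian aggregation for the A vs. B scenario.
# The per-person utilities below are invented for illustration only.

POP_A = 999_999          # members of faction A
POP_B = 1_000            # members of faction B

MILD_PREFERENCE = 1      # each A member mildly prefers that B be tortured
STRONG_PREFERENCE = 500  # each B member strongly prefers not to be tortured

def net_preference_for_torture():
    """Sum everyone's preferences; a positive total means 'let A torture B'."""
    return POP_A * MILD_PREFERENCE - POP_B * STRONG_PREFERENCE

print(net_preference_for_torture())  # 999_999 - 500_000 = 499_999 > 0
```

With any made-up weights where A's mild preference times 999999 outweighs B's strong preference times 1000, the sum comes out in A's favour - which is exactly the verdict I'm objecting to.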

Are you quite sure that morality isn't implicit in the logic of how-a-society-of-entities-with-varying-preferences-manages-to-rub-along?

Morality evolved as a function of how it benefited single alleles, not societies. Under different conditions, it could have evolved differently. You can't generalize from the way morality works in humans to the way it might work in all possible societies of entities.

Comment author: Juno_Watt 10 May 2013 11:53:38AM 0 points [-]

Mathematical statements aren't empirical facts either, but convergence is uncontroversial there.

In Math, empirical evidence is replaced by axioms.

The point is that all rational agents will converge upon mathematical statements, and will not converge upon moral statements. Do you disagree?

Agreement isn't important: arguments are important. You apparently made the argument that convergence on morality isn't possible because it would require empirically detectable moral objects. I made the counterargument that convergence on morality could work like convergence on mathematical truth. So it seems that convergence on morality could happen, since there is a way it could work.

I think morality is behaving so as to take into account the values and preferences of others as well as one's own

I'm very, very sure that my morality doesn't work that way. [Argument against utilitarianism].

OK. Utilitarianism sucks. That doesn't mean other objective approaches don't work -- you could be a deontologist. And it doesn't mean subjectivism does work.

Morality evolved as a function of how it benefited single alleles, not societies. Under different conditions, it could have evolved differently. You can't generalize from the way morality works in humans to the way it might work in all possible societies of entities

Says who? We can generalise language, maths and physics beyond our instinctive System 1 understandings. And we have.

Comment author: someonewrongonthenet 10 May 2013 05:40:31PM *  1 point [-]

I think morality is behaving so as to take into account the values and preferences of others as well as one's own.

is the reason why I said that my morality isn't preference utilitarian. If morality is "taking into account the values and preferences of others as well as your own", then preference utilitarianism seems to be the default way to do that.

Alright...so if I'm understanding correctly, you are saying that moral facts exist and people can converge upon them independently, in the same way that people converge on mathematical facts. And I'm saying we can't, and that morality is a preference linked to emotions. Neither of us has really done anything but restate our positions here. My position seems more or less inherent in my definition of morality, and I think you understand my position...but I still don't understand yours.

Can I have a rudimentary definition of morality, an example of a moral fact, and a process by which two agents can converge upon it?

Can you give me a method of evaluating a moral fact which doesn't at some point refer to our instincts? Do moral facts necessarily have to conform to our instincts? As in, if I proved a moral fact to you, but your instincts said it was wrong, would you still accept that it was right?

Comment author: Juno_Watt 13 May 2013 09:03:31PM *  0 points [-]

is the reason why I said that my morality isn't preference utilitarian. If morality is "taking into account the values and preferences of others as well as your own", then preference utilitarianism seems to be the default way to do that.

For lexicographers, the default is apparently deontology:

"conformity to the rules of right conduct"

"Principles concerning the distinction between right and wrong or good and bad behavior."

etc.

Can I have a rudimentary definition of morality, an example of a moral fact, and a process by which two agents can converge upon it?

1 A means by which communities of entities with preferences act in accordance with all their preferences.

2 Murder is wrong.

3 Since agents do not wish to be murdered, it is in their interests to agree to refrain from murder under an arrangement in which other agents agree to refrain from removing them.
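
(A minimal sketch of the arrangement in 3, with purely hypothetical payoff numbers - being murdered is taken to be far worse than giving up the option of murdering:)

```python
# Hypothetical payoffs for two agents deciding whether to refrain from murder.
# The numbers are illustrative only; they just encode "being murdered is by far
# the worst outcome for the victim".

REFRAIN, MURDER = "refrain", "murder"

# payoff[(my_move, your_move)] = (my_payoff, your_payoff)
payoff = {
    (REFRAIN, REFRAIN): (0, 0),       # mutual restraint: nobody is harmed
    (REFRAIN, MURDER):  (-100, 1),    # I am murdered; you gain very little
    (MURDER,  REFRAIN): (1, -100),
    (MURDER,  MURDER):  (-99, -99),   # both aggress; both risk being killed
}

# Both agents prefer (refrain, refrain) to (murder, murder), so agreeing to a
# mutual "no murder" arrangement is in each one's interest, even if either
# might still be tempted to defect unilaterally.
print(payoff[(REFRAIN, REFRAIN)], payoff[(MURDER, MURDER)])
```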

Can you give me a method of evaluating a moral fact which doesn't at some point refer to our instincts?

I don't see why I need to. Utilitarianism and deontology take preferences and intuitions into account. Your argument against utilitarianism is that it comes to conclusions which go against your instincts. That isn't just an assumption that morality has something to do with instincts, it is a further assumption that your instincts trump all further considerations. It is an assumption of subjectivism.

You are saying objectivism is false because subjectivism is true. If utilitarianism worked, it would take intuitions and preferences into account, and arrive at some arrangement that minimises the number of people who don't get their instincts or preferences satisfied. Some people have to lose. You have decided that is unacceptable because you have decided that you must not lose. But utilitarianism still works in the sense that a set of subjective preferences can be treated as objective facts, and aggregated together. There is nothing to stop different utilitarians (of the same variety) converging on a decision. U-ism "works" in that sense. Your objection is not that convergence is not possible, but that what is converged upon is not moral, because your instincts say not.

But you don't have any argument beyond an assumption that morality just is what your instincts say. The other side of the argument doesn't have to deny the instinctive or subjective aspect of morality; it only needs to deny that your instincts are supreme. And it can argue that since morality is about the regulation of conduct amongst groups, the very notion of subjective morality is incoherent (parallel: language is all about communication, so a language that is only understood by one person is a paradox).

As in, if I proved a moral fact to you, but your instincts said it was wrong, would you still accept that it was right?

Maybe. Almost everybody who has had their mind changed about sexual conduct has overridden an instinct.

Comment author: someonewrongonthenet 13 May 2013 10:52:53PM *  0 points [-]

3 Since agents do not wish to be murdered, it is in their interests to agree to refrain from murder under an arrangement in which other agents agree to refrain from removing them.

So there are several things I don't like about this...

0) It's not in their interests to play the cooperative strategy if they are more powerful, since the other agent can't remove them.

1) It's not a given that all agents do not wish to be murdered. It's only luck that we wish not to die. Sentient beings could just as easily have come out of insects who allow themselves to be eaten by mates, or by their offspring.

2) So you sidestep this, and say that this only applies to beings that do not wish to be murdered. Well now, this is utilitarianism. You'd essentially be saying that all agents want their preferences fulfilled, therefore we should all agree to fulfill each other's preferences.

You have decided that is unacceptable because you have decided that you must not lose.

Essentially yes. But to rephrase: I know that the behavior of all agents (including myself) will work to bring about the agent's preferences to the best of the agent's ability, and this is true by definition of what a "preference" is.

Maybe. Almost everybody who has had their mind changed about sexual conduct has overridden an instinct.

I'm not sure I follow what you mean by this. My ideas about sexual conduct are in line with my instincts. A highly religious person's ideas about sexual conduct are in line with the instincts that society drilled into them. If I converted that person into sex-positivism, they would shed the societal conditioning and their morality and feelings would change. Who is not in alignment with their instincts?

(Instincts here means feelings with no rational basis, rather than genetically programmed or reflexive behaviors)

Comment author: Juno_Watt 14 May 2013 12:03:09AM *  0 points [-]

0) It's not in their interests to play the cooperative strategy if they are more powerful, since the other agent can't remove them.

I am not sure what the argument is here. The objectivist claim is not that every entity actually will be moral in practice, and it's not the claim that every agent will be interested in settling moral questions: it's just the claim that agents who are interested in settling moral questions, and have the same set of facts available (i.e. live in the same society), will be able to converge. (Which is as objective as anything else. The uncontentious claim that mathematics is objective doesn't imply that everyone is a mathematician, or knows all mathematical truths.)

It's not a given that all agents do not wish to be murdered. It's only luck that we wish not to die. Sentient beings could just as easily have come out of insects who allow themselves to be eaten by mates, or by their offspring.

I have described morality as an arrangement within a society. Alien societies might have a different morality to go with their different biology. That is not in favour of subjectivism, because subjectivism requires morality to vary with personal preference, not objective facts about biology. Objectivism does not mean universalism. It means agents, given the same facts, and the willingness to draw moral conclusions from them, will converge. It doesn't mean the facts never vary. If they do, so will the conclusions.

You'd essentially be saying that all agents want their preferences fulfilled, therefore we should all agree to fulfill each other's preferences.

All agents want their preferences fulfilled, and what "should" means is being in accordance with some arrangement for resolving the resulting conflicts, whether utilitarian, deontological, or something else.

My ideas about sexual conduct are in line with my instincts. A highly religious person's ideas about sexual conduct are in line with the instincts that society drilled into them. If I converted that person into sex-positivism, they would shed the societal conditioning and their morality and feelings would change. Who is not in alignment with their instincts?

The convertee. In my experience, people are generally converted by arguments...reasoning...system 2. So when people are converted, they go from Instinct to Reason. But perhaps you know of some process by which subjective feelings are transferred directly, without the involvement of system 2.

Comment author: someonewrongonthenet 14 May 2013 05:49:13AM *  0 points [-]

it's just the claim that agents who are interested in settling moral questions, and have the same set of facts available (i.e. live in the same society), will be able to converge.

It means agents, given the same facts, and the willingness to draw moral conclusions from them, will converge. It doesn't mean the facts never vary. If they do, so will the conclusions.

But don't you see what you're doing here? You are defining a set of moral claims M, and then saying that any agents who are interested in M will converge on M!

The qualifier "agents who are interested in moral questions" restricts the set of agents to those who already agree with you about what morality is. Obviously, if we all start from the same moral axioms, we'll converge onto the same moral postulates - the point is that the moral axioms are arbitrarily set by the user's preferences.

All agents want their preferences fulfilled, and what "should" means is being in accordance with some arrangement for resolving the resulting conflicts, whether utilitarian, deontological, or something else.

Wait, so you are defining morality as a system of conflict resolution between agents? I actually do like that definition...even though it doesn't imply convergence.

Then Utilitarianism is the solution that all agents should maximize preferences, deontology is the solution that there exists a set of rules to follow when arbitrating conflict, etc.

Counterexample - Imagine a person who isn't religious, who also believes incest between consenting adults is wrong (even for old infertile people, even if no one else gets to know about it). There is no conflict between the two agents involved - would you say that this person is not exhibiting a moral preference, but something else entirely?

But perhaps you know of some process by which subjective feelings are transferred directly, without the involvement of system 2.

The vast majority of people are not convinced by argument, but by life experience. For most people, all the moral rhetoric in the world isn't as effective as a picture of two gay men crying with happiness as they get married.

That's beside the point, though - you are right that it is possible (though difficult) to alter someone's moral stance through argument alone. However, "System 1" and "System 2" share a brain. You can influence "system 1" via "system 2" - reasoning can affect feelings, and vice versa. I can use logical arguments to change someone's feelings on moral issues. That doesn't change the fact that the moral attitude stems from the feelings.

If you can establish a shared set of "moral axioms" with someone, you can convince them of the rightness or wrongness of something with logic alone. This might make it seem like any two agents can converge on morality - but just because most humans have certain moral preferences hardwired into them doesn't mean every agent has the same set of preferences. I have some moral axioms, you have some moral axioms, and we can use shared moral axioms to convince each other of things... but we won't be able to convince any agent which has moral axioms that do not match with ours.

Comment author: Juno_Watt 16 May 2013 03:07:15PM *  0 points [-]

But don't you see what you're doing here? You are defining a set of moral claims M, and then saying that any agents who are interested in M will converge on M!

I haven't defined a set of moral claims. You asked me for an example of one claim. I can argue the point without specifying any moral conclusions. The facts I mentioned as the input to the process are not moral per se.

The qualifier "agents who are interested in moral questions" restricts the set of agents to those who already agree with you about what morality is.

In a sense, yes. But only in the sense that "agents who are interested in mathematical questions" restricts the set of agents to those who are interested in "mathematics" as I understand it. On the other hand, nothing is implied about the set of object-level claims moral philosophers would converge on.

Obviously, if we all start from the same moral axioms, we'll converge onto the same moral postulates - the point is that the moral axioms are arbitrarily set by the user's preferences.

I don't have to accept that, because I am not using a subjective criterion for "morality". If you have a preference for Tutti Frutti, that is not a moral preference, because it does not affect anybody else. The definition of morality I am using is not based on any personal preference of mine; it's based on a recognition that morality has a job to do.

Wait, so you are defining morality as a system of conflict resolution between agents? I actually do like that definition...even though it doesn't imply convergence.

If no convergence takes place, how can you have an implementable system? People are either imprisoned or not; they cannot be imprisoned for some agents but not for others.

Counterexample - Imagine a person who isn't religious, who also believes incest between consenting adults is wrong (even for old infertile people, even if no one else gets to know about it). There is no conflict between the two agents involved - would you say that this person is not exhibiting a moral preference, but something else entirely?

You are tacitly assuming that no action will be taken on the basis of feelings of wrongness, and that nobody ever campaigns to ban things they don't like.

That doesn't change the fact that the moral attitude stems from the feelings.

If system 1 was influenced by system 2, then what stems from system 1 also stems from system 2, and so on. You are drawing an arbitrary line.

If you can establish a shared set of "moral axioms" with someone, you can convince them of the rightness or wrongness of something with logic alone.

If moral axioms are completely separate from everything else, then you would need to change their axioms. If they are not, then not. For instance, you can argue that some moral attitudes someone has are inconsistent with others. Consistency is not a purely moral criterion.

I have some moral axioms, you have some moral axioms, and we can use shared moral axioms to convince each other of things... but we won't be able to convince any agent which has moral axioms that do not match with ours.

If "moral axioms" overlap with rational axioms, and if moral axioms are constrained by the functional role of morality, there is plenty of scope for rational agents to converge.

Comment author: someonewrongonthenet 17 May 2013 04:24:06AM *  0 points [-]

Does it follow, then, that rational agents will always be "moral"? Does it mean that the most rational choice for maximizing any set of preferences is also in line with "morality"?

That would put morality into decision theory, which would be kind of nice.

But I can't think how an agent whose utility function simply read "Commit Murder" could possibly make a choice that was both moral (the way morality is traditionally defined) and rational.