Peterdjones comments on Logical Pinpointing - Less Wrong
If your moral values aren't objective, why would anyone else be beholden to them? And how could they be moral if they don't regulate others' behaviour?
Why would it change, absent our changing the axioms? Do you think it is part of the universe?
To the first question: possibly because your moral values arose from a process that was almost exactly the same for other individuals, and as such it's reasonable to infer that their moral values might be rather similar to yours, not completely alien?
To the second: "And how could they be (blank?) if they don't regulate others' behaviour?", by which I mean, what do you mean by "moral"? What makes a value a "moral" value or not in this context?
I'm not sure why it should be necessary for a moral value to regulate behaviour across individuals in order to be valid.
Why describe them as subjective when they are intersubjective?
It would be necessary for them to be moral values and not something else, like aesthetic values. Because morality is largely there to regulate interactions between individuals. That's its job. Aesthetics is there to make things beautiful, logic is there to work things out...
I don't want to get into a discussion of this, but if there's an essay-length-or-less explanation you can point to somewhere of why I ought to believe this, I'd be interested.
I don't see that "morality is largely to regulate interactions between individuals" is contentious. Did you have another job in mind for it?
Well, since you ask: identifying right actions.
But, as I say, I don't want to get into a discussion of this.
I certainly agree with you that if there exists some thing whose purpose is to regulate interactions between individuals, then it's important that that thing be compelling to all (or at least most) of the individuals whose interactions it is intended to regulate.
Is that an end in itself?
Well, the law compels those who aren't compelled by exhortation. But laws need justification.
Not for me, no.
Is regulating interactions between individuals an end in itself?
Do you think it is pointless? Do you think it is a prelude to something else?
I think identifying right actions can be, among other things, a prelude to acting rightly.
Is regulating interactions between individuals an end in itself?
What does that concept even mean? Are you asking if there's a moral obligation to improve one's own understanding of morality?
The justification for laws can be a combination of pragmatism and the values of the majority.
Does it serve a purpose by itself? Judging actions to be right or wrong is usually the prelude to handing out praise and blame, reward and punishment.
If the values of the majority aren't justified, how does that justify laws?
Also, sometimes it's a prelude to acting rightly and not acting wrongly.
Nope. An agent without a value system would have no purpose in creating a moral system. An agent with one might find it intrinsically valuable, but I personally don't. I do find it instrumentally valuable.
Laws are justified because subjective desires are inherently justified: they're inherently motivational. Many people reverse the burden of proof, but in the real world it's your logic that has to justify itself to your values rather than your values that have to justify themselves to your logic. That's the way we're designed and there's no getting around it. I prefer it that way, and that's its own justification. Abstract lies which make me happy are better than truths that make me sad, because the concept of better itself mandates that it be so.
I actually see that as counter-intuitive.
"Morality" is indeed being used to regulate individuals by some individuals or groups. When I think of morality, however, I think "greater total utility over multiple agents, whose value systems (utility functions) may vary". Morality seems largely about taking actions and making decisions which achieve greater utility.
I do this, except I only use my own utility and not other agents. For me, outside of empathy, I have no more reason to help other people achieve their values than I do to help the Babyeaters eat babies. The utility functions of others don't inherently connect to my motivational states, and grafting the values of others onto my decision calculus seems weird.
I think most people become utilitarians instead of egoists because they empathize with other people, while never seeing the fact that to the extent that this empathy moves them it is their own value and within their own utility function. They then build the abstract moral theory of utilitarianism to formalize their intuitions about this, but because they've overlooked the egoist intermediary step the model is slightly off and sometimes leads to conclusions which contradict egoist impulses or egoist conclusions.
Or they adopt utilitarianism, or some other non-subjective system, because they value having a moral system that can apply to, persuade, and justify itself to others. (Or in short: they value having a moral system.)
In my view there's a difference between having a moral system (defined as something that tells you what is right and what is wrong) and having a system that you use to justify yourself to others. That difference generally isn't relevant because humans tend to empathize with each other and humans have a very close cluster of values so there are lots of common interests.
It's nothing about justifying "myself".
Your usage of the words "subjective" and "objective" is confusing.
Utilitarianism doesn't forbid that each individual person (agent) have different things they value (utility functions). As such, there is no universal specific simple rule that can apply to all possible agents to maximize "morality" (total sum utility).
It is "objective" in the sense that if you know all the utility functions, and try to achieve the maximum possible total utility, this is the best thing to do from an external standpoint. It is also "objective" in the sense that when your own utility is maximized, that is the best possible thing that you could have, regardless of whatever anyone might think about it.
However, it is also "subjective" in the sense that each individual can have their own utility function, and it can be whatever you could imagine. There are no restrictions in utilitarianism itself. My utility is not your utility, unless your utility function has a component that values my utility and you have full knowledge of my utility (or even if you don't, but that's a theoretical nitpick).
Utilitarianism alone doesn't apply to, persuade, or justify any action that affects values to anyone else. It can be abused as such, but that's not what it's there for, AFAIK.
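To make the picture above concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the agents "alice", "bob", "carol", the outcomes, and the utility numbers are not from the thread): each agent's utility function is its own ("subjective"), but once all the functions are fixed, the total-utility ranking of outcomes is a plain calculation ("objective").

```python
# Hypothetical illustration only: agents, outcomes, and numbers are invented.
# Each agent has its own utility function over outcomes.
utility_functions = {
    "alice": lambda o: 2 * o["parks"] - o["noise"],  # likes parks, dislikes noise
    "bob":   lambda o: 4 * o["noise"],               # likes loud events
    "carol": lambda o: o["parks"] + o["noise"],      # mildly likes both
}

outcomes = {
    "build_park":    {"parks": 1, "noise": 0},
    "build_stadium": {"parks": 0, "noise": 1},
}

def total_utility(outcome):
    # The quantity a total utilitarian would maximise: the sum of every
    # agent's utility for this outcome.
    return sum(u(outcome) for u in utility_functions.values())

for name, o in outcomes.items():
    per_agent = {a: u(o) for a, u in utility_functions.items()}
    print(name, per_agent, "total =", total_utility(o))

best = max(outcomes, key=lambda name: total_utility(outcomes[name]))
print("total-utility choice:", best)
```

Note that in this toy case the total-utility choice (build_stadium, total 4 vs. 3) leaves one agent worse off than the alternative would, which is exactly the tension raised in the next comment.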
Are you saying that no form of utilitarianism will ever conclude that one person should sacrifice some value for the benefit of the many?
I think specific applications of utilitarianism might say that modifying the values of yourself or of others would be beneficial even in terms of your current utility function.
I share this view. When I appear to forfeit some utility in favor of someone else, it's because I'm actually maximizing my own utility by deriving some from the knowledge that I'm improving the utility of other agents.
Other agents' utility functions and values are not directly valued, at least not among humans. Some (most?) of us just do indirectly value improving the values and utility of other agents, either as an instrumental step or a terminal value. Because of this, I believe most people who have/profess a belief in the "innate goodness of humanity" are mind-projecting their own value-of-others'-utility.
Whether this is a true value actually shared by all humans is unknown to me. It is possible that those who appear not to have this value are simply broken in some temporary, environment-based manner. It's also possible that this is a purely environment-learned value that becomes "terminal" in the process of being trained into the brain's reward centers, due to its instrumental value in many situations.
You are anthropomorphizing concepts. Morality is a human artifact, and artifacts have no more purpose than natural objects.
Morality is a useful tool to regulate interactions between individuals. There are efforts to make it a better tool for that purpose. That does not mean that morality should be used to regulate interactions.
Human artifacts are generally created to do jobs, e.g. hammers.
Tool. Like I said.
Does that mean you have a better tool in mind, or that interactions don't need regulation?
If I put a hammer under a table to keep the table from wobbling, am I using a tool or not? If the hammer is the only object within range that is the right size for the table, and there is no task which requires a weighted lever, is the hammer intended to balance the table simply by virtue of being the best tool for the job?
Fit-for-task is a different quality than purpose. Hammers are useful tools to drive nails, but poor tools for determining what nails should be driven. There are many nails that should not be driven, despite the presence of hammers.
If you can't bang in nails with it, it isn't a hammer. What else you can do with it isn't relevant.
???
So we can judge things morally wrong, because we have a tool to do the job, but we shouldn't in many cases, because...? (And what kind of "shouldn't" is that?)
By that, the absence of nails makes the weighted lever not a hammer. I think that hammerness is intrinsic and not based on the presence of nails; likewise morality can exist when there is only one active moral agent.
The metaphor was that you could, in principle, drive nails literally everywhere you can see, including in your brain. Will you agree that one should not drive nails literally everywhere, but only in select locations, using the right type of nail for the right location? If you don't, this part of the conversation is not salvageable.
What is that supposed to be analogous to? If you have a workable system of ethics, then it doesn't make judgments willy-nilly, any more than a workable system of logic allows quodlibet.
(Edited for explicit analogy.)
Basically, it's not because you have a morality (hammer) that happens to be convenient for making laws and rules of interaction (balancing the table) that morality is necessarily the best and intended tool for making rules, or that morality itself tells you what you should make laws about, or that you even should make laws in the first place.
Because they're not written on a stone tablet handed down to Humanity from God the Holy Creator, or derived from some other verifiable, falsifiable, physical fact of the universe independent of humans? And because there are possible variations within the value systems, rather than them being perfectly uniform and identical across the entire species?
I have warning lights that there's an argument about definitions here.
That would make them not-objective. Subjective and intersubjective remain as options.
Then, again, why would anyone else be beholden to my values?
Because valuing others' subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.
If one posits that by working together we can achieve a utopia where each individual's values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others' values, would it not follow that it's in everyone's best interests for everyone to build and follow such models?
The free-loader problem is an obvious downside of the above simplification, but that and other issues don't seem to be part of the present discussion.
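Here is a minimal sketch of the game-theoretic claim, assuming the standard textbook Prisoner's Dilemma payoffs and an invented "care" weight on the other player's payoff (nothing in the code is from the thread itself):

```python
# Hypothetical Prisoner's Dilemma illustration; payoffs are the usual
# textbook values and the "care" weight is an invented parameter.

# payoffs[(my_move, their_move)] = (my_payoff, their_payoff)
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_move(care, their_move):
    # Pick the move maximising my_payoff + care * their_payoff:
    # "acting as if" the other player's utility carried some weight.
    return max("CD", key=lambda m: payoffs[(m, their_move)][0]
                                   + care * payoffs[(m, their_move)][1])

# A purely selfish agent (care = 0) defects no matter what, so two
# selfish agents land on (D, D) and get 1 point each.
assert best_move(0.0, "C") == "D" and best_move(0.0, "D") == "D"

# An agent weighting the other's payoff at 0.8 cooperates against a
# cooperator, so two such agents land on (C, C) and get 3 points each.
assert best_move(0.8, "C") == "C"

# The catch: with that much weight it also cooperates against a
# defector -- one face of the free-loader problem noted above.
assert best_move(0.8, "D") == "C"

print("selfish pair:", payoffs[("D", "D")], "| caring pair:", payoffs[("C", "C")])
```

Two agents who act as if they value each other's payoffs end up better off than two purely selfish ones, which is the "winning strategy" sense intended above.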
That doesn't make them beholden--obligated. They can opt not to play that game. They can opt not to value winning.
Only if they satisfy individuals better than behaving selfishly would. A utopia that is better on average or in total need not be better for everyone individually. (E.g. a change worth +10 to nine people and -5 to a tenth raises both the total and the average while leaving the tenth worse off.)
Could you taboo "beholden" in that first sentence? I'm not sure the "feeling of moral duty born from guilt" I associate with the word "obligated" is quite what you have in mind.
Within context, you cannot opt to not value winning. If you wanted to "not win", and the preferred course of action is to "not win", this merely means that you had a hidden function that assigned greater utility to a lower apparent utility within the game.
In other words, you just didn't truly value what you thought you valued, but some other thing instead, and you end up having in fact won at your objective of not winning that sub-game within your overarching game of opting to play the game or not (the decision to opt to play the game or not is itself a separate higher-tier game, which you have won by deciding to not-win the lower-tier game).
A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.
(sorry if I'm arguing a bit by definition with the utopia thing, but my premise was that the utopia brings each individual agent's utility to its maximum possible value if there exists a maximum for that agent's function)
Games emerge where people have things other people value. If someone doesn't value those sorts of things, they are not going to game-play.
I don't see where higher-tier functions come in.
You are assuming that a utopia will maximise everyone's value individually AND that values diverge. That's a tall order.
I wouldn't let my values be changed if doing so would thwart my current values. I think you're contending that the utopia would satisfy my current values better than the status quo would, though.
In that case, I would only resist the utopia if I had a deontic prohibition against changing my values (I don't have very strong ones, but I think they're in here somewhere, for some things). You would call this a hidden utility function; I don't think that adequately models the idea that humans are satisficers and not perfect utilitarians. Deontology is sometimes a way of identifying satisficing conditions for human behavior; in that sense I think it can be a much stronger argument.
Even supposing that we were perfect utilitarians, if I placed more value on maintaining my current values than I do on anything else, I would still reject modifying myself and moving towards your utopia.
Do you think the utopia is feasible?