Peterdjones comments on Logical Pinpointing - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't see that "morality is largely to regulate interactions between individuals" is contentious. Did you have another job in mind for it?
Well, since you ask: identifying right actions.
But, as I say, I don't want to get into a discussion of this.
I certainly agree with you that if there exists some thing whose purpose is to regulate interactions between individuals, then it's important that that thing be compelling to all (or at least most) of the individuals whose interactions it is intended to regulate.
Is that an end in itself?
Well, the law compels those who aren't compelled by exhortation. But laws need justification.
Not for me, no.
Is regulating interactions between individuals an end in itself?
Do you think it is pointless? Do you think it is a prelude to something else?
I think identifying right actions can be, among other things, a prelude to acting rightly.
Is regulating interactions between individuals an end in itself?
What does that concept even mean? Are you asking if there's a moral obligation to improve one's own understanding of morality?
The justification for laws can be a combination of pragmatism and the values of the majority.
Does it serve a purpose by itself? Judging actions to be right or wrong is usually the prelude to handing out praise and blame, reward and punishment.
If the values of the majority aren't justified, how does that justify laws?
Also, sometimes it's a prelude to acting rightly and not acting wrongly.
Nope. An agent without a value system would have no purpose in creating a moral system. An agent with one might find it intrinsically valuable, but I personally don't. I do find it instrumentally valuable.
Laws are justified because subjective desires are inherently justified because they're inherently motivational. Many people reverse the burden of proof, but in the real world it's your logic that has to justify itself to your values rather than your values that have to justify themselves to your logic. That's the way we're designed and there's no getting around it. I prefer it that way and that's its own justification. Abstract lies which make me happy are better than truths that make me sad because the concept of better itself mandates that it be so.
What you need to justify is imprisoning someone for offending against values they don't necessarily subscribe to. That you are motivated by your values, and the criminal by theirs, doesn't give you the right to jail them.
Clarification: From the perspective of a minority, the laws are unjustified. Or, they're justified, but still undesirable. I'm not sure which. Justification is an awkward paradigm to work within because you haven't proven that the concept makes sense and you haven't precisely defined the concept.
Proof is a strong form of justification. If I don't have justification, you don't have proof.
Why would the majority regard them as justified just because they happen to have them?
They're justified in that there are no arguments which adequately refute them, and that they're motivated to take those actions. There are no arguments which can refute one's motivations because facts can only influence values via values. Motivations are what determine actions taken, not facts. That is why perfectly rational agents with identical knowledge but different values would respond differently to certain data. If a babykiller learned about a baby they would eat it, if I learned about a baby I would give it a hug.
In terms of framing, it might help you understand my perspective if you try not to think of it in terms of past atrocities. Think in terms of something more neutral. The majority wants to make a giant statue out of purple bubblegum, but the minority wants to make a statue out of blue cotton candy, for example.
Well, it's like proof, but weaker.
Lack of counterargument is not justification, nor is motivation from some possibly irrational source.
Or the majority want to shoot all left-handed people, for example. Majority verdict isn't even close to moral justification.
I actually see that as counter-intuitive.
"Morality" is indeed being used to regulate individuals by some individuals or groups. When I think of morality, however, I think "greater total utility over multiple agents, whose value systems (utility functions) may vary". Morality seems largely about taking actions and making decisions which achieve greater utility.
I do this, except I only use my own utility and not other agents. For me, outside of empathy, I have no more reason to help other people achieve their values than I do to help the Babyeaters eat babies. The utility functions of others don't inherently connect to my motivational states, and grafting the values of others onto my decision calculus seems weird.
I think most people become utilitarians instead of egoists because they empathize with other people, while never seeing the fact that to the extent that this empathy moves them it is their own value and within their own utility function. They then build the abstract moral theory of utilitarianism to formalize their intuitions about this, but because they've overlooked the egoist intermediary step the model is slightly off and sometimes leads to conclusions which contradict egoist impulses or egoist conclusions.
Or they adopt utilitarianism, or some other non-subjective system, because they value having a moral system that can apply to, persuade, and justify itself to others. (Or in short: they value having a moral system.)
In my view there's a difference between having a moral system (defined as something that tells you what is right and what is wrong) and having a system that you use to justify yourself to others. That difference generally isn't relevant because humans tend to empathize with each other and humans have a very close cluster of values so there are lots of common interests.
It's not about justifying "myself".
My computer won't load the website because it's apparently having issues with flash, can you please summarize? If you're just making a distinction between yourself and your beliefs, sure, I'll concede that. I was a bit sloppy with my terminology there.
Its not "My beliefs" either." justification is the reason why someone (properly) holds the belief, the explanation as to why the belief is a true one, or an account of how one knows what one knows."
Okay. I think I've explained the justification then. Specific moral systems aren't necessarily interchangeable from person to person, but they can still be explained and justified in a general sense. "My values tell me X, therefore X is moral" is the form of justification that I've been defending.
Yet again, you run into the problem that you need it to be wrong for other people to murder you, which you can't justify with your values alone.
Your usage of the words "subjective" and "objective" is confusing.
Utilitarianism doesn't forbid that each individual person (agent) have different things they value (utility functions). As such, there is no universal specific simple rule that can apply to all possible agents to maximize "morality" (total sum utility).
It is "objective" in the sense that if you know all the utility functions, and try to achieve the maximum possible total utility, this is the best thing to do from an external standpoint. It is also "objective" in the sense that when your own utility is maximized, that is the best possible thing that you could have, regardless of whatever anyone might think about it.
However, it is also "subjective" in the sense that each individual can have their own utility function, and it can be whatever you could imagine. There are no restrictions in utilitarianism itself. My utility is not your utility, unless your utility function has a component that values my utility and you have full knowledge of my utility (or even if you don't, but that's a theoretical nitpick).
Utilitarianism alone doesn't apply to, persuade, or justify any action that affects values to anyone else. It can be abused as such, but that's not what it's there for, AFAIK.
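As a minimal sketch of the "maximize total sum utility over agents with differing utility functions" idea above (in Python; the agents, outcomes, and numbers are hypothetical, borrowing the statue example from earlier):

# Hedged illustration only: the agents, outcomes, and utilities are made up.
# It shows the bare "sum each agent's own valuation and pick the maximum" rule.

def total_utility(outcome, utility_functions):
    """Sum each agent's own valuation of the same outcome."""
    return sum(u(outcome) for u in utility_functions)

def best_outcome(outcomes, utility_functions):
    """Choose the outcome with the greatest summed utility."""
    return max(outcomes, key=lambda o: total_utility(o, utility_functions))

# Two agents value the same outcomes very differently.
majority_member = lambda o: {"purple_bubblegum_statue": 10, "blue_cotton_candy_statue": 2}[o]
minority_member = lambda o: {"purple_bubblegum_statue": 3, "blue_cotton_candy_statue": 9}[o]

print(best_outcome(
    ["purple_bubblegum_statue", "blue_cotton_candy_statue"],
    [majority_member, minority_member],
))
# -> purple_bubblegum_statue (summed utility 13 vs. 11)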
Are you saying that no form of utilitarianism will ever conclude that one person should sacrifice some value for the benefit of the many?
No form of the official theory in the papers I read, at the very least.
Many applications or implementations of utilitarianism or utilitarian(-like) systems do, however, enforce rules that if one agent's weighted utility loss improves the total weighted utility of multiple other agents by a significant margin, that is what is right to do. The margin's size and the specific numbers and uncertainty values will vary by system.
I've never seen a system that would enforce such rules without some kind of weighting function over the utilities to correct for limited information, uncertainty, and diminishing-returns-like problems.
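A hedged sketch of that kind of rule; the weighting function, confidence discount, and margin here are invented for illustration, not taken from any particular system:

# Illustrative only: a sacrifice is endorsed when the weighted gains to the
# other agents beat the weighted loss to the one agent by a required margin.

def weighted(utility_delta, confidence, diminishing_factor=1.0):
    """Discount a raw utility change by our confidence in it and by a crude
    diminishing-returns factor (both hypothetical knobs)."""
    return utility_delta * confidence * diminishing_factor

def sacrifice_is_right(loss_to_one, gains_to_others, margin=1.5):
    """Endorse the sacrifice only if the summed weighted gains exceed the
    weighted loss by the required margin."""
    total_gain = sum(weighted(gain, conf) for gain, conf in gains_to_others)
    total_loss = weighted(*loss_to_one)
    return total_gain >= margin * total_loss

# One agent loses 10 utility (known with certainty); three others each gain 6,
# but we are only 90% sure of each gain.
print(sacrifice_is_right((10, 1.0), [(6, 0.9), (6, 0.9), (6, 0.9)]))
# -> True (16.2 weighted gain >= 1.5 * 10 weighted loss)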
It seems to me that these two paragraphs contradict each other. Do you think "he should" means something different to "it is right for him to do so"?
No, they don't have any major differences in utilitarian systems.
It seems I was confused when trying to answer your question. Utilitarianism can be seen as an abstract system of rules to compute stuff.
Certain ways to apply those rules to compute stuff are also called utilitarianism, including the philosophy that the maximum total utility of a population should take precedence over the utility of one individual.
If utilitarianism is simply the set of rules you use to compute which things are best for one single purely selfish agent, then no, nothing concludes that the agent should sacrifice anything. If you adhere to the classical philosophy related to those rules, then yes, any human will conclude what I've said in that second paragraph in the grandparent (or something similar). This latter (the philosophy) is historically what appeared first, and is also what's presented on Wikipedia's page on utilitarianism.
Isn't that decision theory?
I think specific applications of utilitarianism might say that modifying the values of yourself or of others would be beneficial even in terms of your current utility function.
Yeah.
Things start getting interesting when not only are some values implemented as variable weights within the function, but the functions themselves become part of the calculation, and utility functions become modular and partially recursive.
I'm currently convinced that there's at least one (perhaps well-hidden) such recursive module of utility-for-utility-functions currently built into the human brain, and that clever hacking of this module might be very beneficial in the long run.
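As a purely speculative sketch of that "utility for utility functions" idea (the modules, weights, and second-order scoring rule are all hypothetical, not a claim about how the brain implements it):

# Speculative illustration: the overall utility is assembled from weighted
# first-order modules, plus a second-order module that scores the set of
# modules itself (here it simply rewards including the welfare module).

def welfare_module(state):
    return state.get("others_welfare", 0)

def hedonic_module(state):
    return state.get("own_pleasure", 0)

def utility_of_utility_function(modules):
    """Second-order module: assign utility to a set of first-order modules."""
    return 5 if welfare_module in modules else 0

def overall_utility(state, modules, weights):
    first_order = sum(w * m(state) for m, w in zip(modules, weights))
    second_order = utility_of_utility_function(modules)
    return first_order + second_order

state = {"others_welfare": 3, "own_pleasure": 4}
print(overall_utility(state, [welfare_module, hedonic_module], [1.0, 1.0]))
# -> 12.0 (3 + 4 from the first-order modules, plus 5 for valuing the welfare module)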
I share this view. When I appear to forfeit some utility in favor of someone else, it's because I'm actually maximizing my own utility by deriving some from the knowledge that I'm improving the utility of other agents.
Other agents' utility functions and values are not directly valued, at least not among humans. Some (most?) of us do, however, indirectly value improving the values and utility of other agents, either as an instrumental step or a terminal value. Because of this, I believe most people who have/profess the belief in an "innate goodness of humanity" are mind-projecting their own value-of-others'-utility.
Whether this is a true value actually shared by all humans is unknown to me. It is possible that those who appear not to have this value are simply broken in some temporary, environment-based manner. It's also possible that this is a purely environment-learned value that becomes "terminal" in the process of being trained into the brain's reward centers due to its instrumental value in many situations.