Peterdjones comments on Logical Pinpointing - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (338)
Or they adopt utilitarianism, or some other non-subjective system, because they value having a moral system that can apply to, persuade, and justify itself to others. (Or in short: they value having a moral system).
In my view there's a difference between having a moral system (defined as something that tells you what is right and what is wrong) and having a system that you use to justify yourself to others. That difference generally isn't relevant, because humans tend to empathize with each other and have a very close cluster of values, so there are lots of common interests.
It's nothing about justifying "myself".
My computer won't load the website because it's apparently having issues with Flash; can you please summarize? If you're just making a distinction between yourself and your beliefs, sure, I'll concede that. I was a bit sloppy with my terminology there.
Its not "My beliefs" either." justification is the reason why someone (properly) holds the belief, the explanation as to why the belief is a true one, or an account of how one knows what one knows."
Okay. I think I've explained the justification then. Specific moral systems aren't necessarily interchangeable from person to person, but they can still be explained and justified in a general sense. "My values tell me X, therefore X is moral" is the form of justification that I've been defending.
Yet again, you run into the problem that you need it to be wrong for other people to murder you, which you can't justify with your values alone.
No I don't. I need to be stronger than the people who want to murder me, or to live in a society that deters murder. If someone wants to murder me, it's probably not the best strategy to start trying to convince them that they're being immoral.
You're making an argumentum ad consequentiam. You don't decide metaethical issues by deciding what kind of morality it would be ideal to have and then working backwards. Just because you don't like the type of system that morality leads to overall doesn't mean that you're justified in ignoring other moral arguments.
The benefit of my system is that it's right for me to murder people if I want to murder them. This means I can do things like self defense or killing Nazis and pedophiles with minimal moral damage. This isn't a reason to support my system, but it is kind of neat.
That's giving up on morality, not defending subjective morality.
Same problem. That's either group morality or non-morality.
I didn't say it was the best practical strategy. The moral and the practical are different things. I am saying that for morality to be what it is, it needs to offer reasons for people not to act on some of their first-order values. That morality is not legality or brute force or a magic spell is not relevant.
I am starting with what kind of morality it would be adequate to have. If you can't bang in a nail with it, it isn't a hammer.
Where on earth did I say that?
That's not a benefit, because murder is just the sort of thing morality is supposed to condemn. Hammers are for nails, not screws, and morality is not for "I can do whatever I want regardless".
Justifiable self-defense is not murder. You seem to have confused ethical objectivism (morality is not just personal preference) with ethical absolutism (moral principles have no exceptions). Read yer Wikipedia!
Morality is a guide for your own actions, not a guide for getting people to do what you want.
Rational self interested individuals decide to create a police force.
Arguments ad consequentiam are still invalid.
Sure, but morality needs to have motivational force or it's useless and stupid. Why should I care? Why should the burglar? If you're going to keep insisting that morality is what's preventing people from doing evil things, you need to explain how your accounting of morality overrules inherent motivation and desire, and why it's justified in doing that.
This is not how metaethics works. You don't get to start with a predefined notion of adequate. That's the opposite of objectivity. By neglecting metaethics, you're defending a model that's just as subjective as mine, except that you don't acknowledge that and you seek to vilify those who don't share your preferences.
You're arguing that subjective morality can't be right because it would lead to conclusions you find undesirable, like random murders.
Stop muddling the debate with unjustified assumptions about what morality is for. If you want to talk about something else, fine. My definition of morality is that morality is what tells individuals what they should and should not do. That's all I intend to talk about.
You've conceded numerous things in this conversation, also. I'm done arguing with you because you're ignoring any point that you find inconvenient to your position and because you haven't shown that you're rational enough to escape your dogma.
Your usage of the words "subjective" and "objective" is confusing.
Utilitarianism doesn't forbid each individual person (agent) from having different things they value (utility functions). As such, there is no universal, specific, simple rule that can apply to all possible agents to maximize "morality" (total sum utility).
It is "objective" in the sense that if you know all the utility functions, and try to achieve the maximum possible total utility, this is the best thing to do from an external standpoint. It is also "objective" in the sense that when your own utility is maximized, that is the best possible thing that you could have, regardless of whatever anyone might think about it.
However, it is also "subjective" in the sense that each individual can have their own utility function, and it can be whatever you could imagine. There are no restrictions in utilitarianism itself. My utility is not your utility, unless your utility function has a component that values my utility and you have full knowledge of my utility (or even if you don't, but that's a theoretical nitpick).
Utilitarianism alone doesn't apply to, persuade, or justify to anyone else any action that affects their values. It can be abused as such, but that's not what it's there for, AFAIK.
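To make the "total sum utility" idea concrete, here's a minimal Python sketch; the agents (alice, bob), the outcomes, and the numbers are all made up for illustration, not taken from any particular formulation:

```python
# Rough sketch: "objective" total-utility maximization over agents whose
# utility functions differ. The agents, outcomes, and numbers are made up.

utility_functions = {
    "alice": lambda outcome: {"park": 3.0, "cinema": 1.0}.get(outcome, 0.0),
    "bob":   lambda outcome: {"park": 0.5, "cinema": 2.0}.get(outcome, 0.0),
}

def total_utility(outcome):
    """Sum every agent's utility for the outcome (classical aggregation)."""
    return sum(u(outcome) for u in utility_functions.values())

# The "best thing to do from an external standpoint" is the outcome with the
# highest total, even though each agent ranks the outcomes differently.
best = max(["park", "cinema"], key=total_utility)
print(best, total_utility(best))  # -> park 3.5
```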
Are you saying that no form of utilitarianism will ever conclude that one person should sacrifice some value for the benefit of the many?
No form of the official theory in the papers I read, at the very least.
Many applications or implementations of utilitarianism or utilitarian(-like) systems do, however, enforce rules such that if one agent's weighted utility loss improves the total weighted utility of multiple other agents by a significant margin, that is the right thing to do. The margin's size, the specific numbers, and the uncertainty values will vary by system.
I've never seen a system that would enforce such rules without some kind of weighting function over the utilities to correct for limited information, uncertainty, and diminishing-returns-like problems.
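For illustration only, a minimal sketch of such a rule; the weighted() helper, the margin, and the numbers are my own invention rather than any particular system's weighting scheme:

```python
# Rough sketch of the rule described above: one agent's weighted utility loss
# is accepted only if the others' total weighted gain clears some margin.
# The weighting scheme, margin, and numbers are invented for illustration.

def weighted(delta, weight=1.0, confidence=1.0):
    """Scale a raw utility change by its weight and by how certain we are of it."""
    return delta * weight * confidence

def sacrifice_is_right(loss, gains, margin=1.0):
    """True if the summed weighted gains exceed the weighted loss by `margin`."""
    return sum(gains) - loss > margin

loss = weighted(2.0, confidence=0.9)                       # one agent loses 2 utils
gains = [weighted(1.0, confidence=0.8) for _ in range(5)]  # five agents gain 1 util each
print(sacrifice_is_right(loss, gains))  # -> True (4.0 - 1.8 > 1.0)
```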
It seems to me that these two paragraphs contradict each other. Do you think "he should" means something different from "it is right for him to do so"?
No, they don't have any major differences in utilitarian systems.
It seems I was confused when trying to answer your question. Utilitarianism can be seen as an abstract system of rules to compute stuff.
Certain ways to apply those rules to compute stuff are also called utilitarianism, including the philosophy that the maximum total utility of a population should take precedence over the utility of one individual.
If utilitarianism is simply the set of rules you use to compute which things are best for one single, purely selfish agent, then no, nothing concludes that the agent should sacrifice anything. If you adhere to the classical philosophy related to those rules, then yes, any human will conclude what I said in that second paragraph in the grandparent (or something similar). This latter (the philosophy) is historically what appeared first, and is also what's presented on Wikipedia's page on utilitarianism.
Isn't that decision theory?
I think specific applications of utilitarianism might say that modifying the values of yourself or of others would be beneficial even in terms of your current utility function.
Yeah.
Things start getting interesting when not only are some values implemented with variable weights within the function, but the functions themselves become part of the calculation, and utility functions become modular and partially recursive.
I'm currently convinced that there's at least one (perhaps well-hidden) such recursive module of utility-for-utility-functions currently built into the human brain, and that clever hacking of this module might be very beneficial in the long run.
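Purely as an illustration of what "utility for utility functions" could look like computationally (every name and number here is hypothetical, and this is not a claim about how the brain actually implements it):

```python
# Very speculative sketch of "utility for utility functions": one term scores
# world states, another scores a utility function itself (here, whether it
# rewards honesty). Every name and number is invented to illustrate the idea.

def my_utility(state):
    """An ordinary first-order utility function over world states."""
    return state.get("comfort", 0.0) + state.get("honesty", 0.0)

def world_term(state):
    """Utility assigned directly to the world."""
    return state.get("comfort", 0.0)

def function_term(utility_fn):
    """Utility assigned to a utility function: reward functions that value honesty."""
    return 1.0 if utility_fn({"honesty": 1.0}) > utility_fn({"honesty": 0.0}) else 0.0

total = world_term({"comfort": 2.0}) + function_term(my_utility)
print(total)  # -> 3.0
```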