There seems to be some division on this point.
I might be mistaken, but I get the feeling that there's not much of a division. The picture I have of LW on meta-ethics is something along the lines of: values exist in people's heads; those are real, but if there were no people there would be no values. Values are to some extent universal, since most people care about similar things, and this makes some values behave as if they were objective. If you want to categorize it - though I don't know what you would get out of doing so - it's a form of nihilism.
An appropriate question when discussing objective and subjective morality is:
People here seem to share anti-realist sensibilities but then balk at the label and do weird things for anti-realists like treat moral judgments as beliefs, make is-ought mistakes, argue against non-consequentialism as if there were a fact of the matter, and expect morality to be describable in terms of a coherent and consistent set of rules instead of an ugly mess of evolved heuristics.
I'm not saying it can never be reasonable for an anti-realist to do any of those things, but it certainly seems like belief in subjective or non-cognitive morality hasn't filtered all the way through people's beliefs.
I attribute this behavior in part to the desire to preserve the possibility of universal provably Friendly AI.
Well that seems like the most dangerous instance of motivated cognition ever.
After reading lots of debates on these topics, I'm no longer sure what the terms mean. Is a paperclip maximizer a "moral nihilist"? If yes, then so am I. Same for no.
I see no reason to think a paperclip maximizer would need to have any particular meta-ethics. There are possible paperclip maximizers that are and ones that aren't. As a rule of thumb, an agent's normative ethics - that is, what it cares about, be it human flourishing or paperclips - does not logically constrain its meta-ethical views.
Morality is a human behavior. It is in some ways analogous to trade or language: a structured social behavior that has developed in a way that often approximates particular mathematical patterns.
All of these can be investigated both empirically and intellectually: you can go out and record what people actually do, and draw conclusions from it; or you can reason from first principles about what sorts of patterns are mathematically possible; or both. For instance, you could investigate trade either beginning from the histories of actual markets, or from principles of microeconomics. You could investigate language beginning from linguistic corpora and historical linguistics ("What sorts of language do people actually use? How do they use it?"); or from formal language theory, parsing, generative grammar, etc. ("What sorts of language are possible?")
Some of the intellectual investigation of possible moralities we call "game theory"; others, somewhat less mathematical but more checked against moral intuition, "metaethics".
Asking whether there are universal, objective moral principles is a little like asking whether there are universal, objective p...
A summary of my own current position (which I keep meaning to expand into a fuller post):
If factual reality can be represented as a function F from moral instructions to moral instructions (e.g., given the fact that burning people hurts them, F("it's wrong to hurt people") -> "it's wrong to burn people"), then there may exist universal moral attractors for our given reality -- these would represent objective moralities that are true for a vast set of different moral starting positions. Much like you reach the Sierpinski T...
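The attractor idea can be illustrated with a toy iteration. Here a number stands in for a "moral position" and `math.cos` stands in for F; the map, tolerance, and function names are purely illustrative assumptions, not anything from the comment. The point is only that repeatedly applying the same map can pull very different starting positions toward one fixed point:

```python
import math

def F(x):
    """Illustrative stand-in for the reality-induced map on moral positions."""
    return math.cos(x)

def iterate_to_fixed_point(x, tol=1e-10, max_steps=1000):
    """Apply F repeatedly until the value stops changing (within tol)."""
    for _ in range(max_steps):
        nxt = F(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x

# Very different "starting positions" converge to the same attractor,
# the unique solution of x = cos(x) (roughly 0.739085).
for start in (-3.0, 0.1, 2.5):
    print(round(iterate_to_fixed_point(start), 6))
```

A universal moral attractor, in this analogy, would be a moral position that a wide basin of starting positions converges to under repeated application of the facts.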
...There seems to be some confusion - when I say "an objective morality capable of being scientifically investigated (a la Sam Harris or others)" - I do NOT mean something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of Good or any such nonsense. I just mean something in reality that's mind-independent - in the sense that it is hard-wired, e.g. by evolution, and thus independent/prior to any later knowledge or cognitive content - and thus can be investigated scientifically. It is a definite "is" from which we can make true "ought" statements relative to that "is".
I thi...
If Euthyphro's dilemma proves religious morality to be false, it also does the same to evolutionary morality: http://atheistethicist.blogspot.com/2009/02/euthyphro-and-evolutionary-ethics.html
Reading your edit... I believe that there exists some X such that X developed through natural selection, X does not depend on any particular knowledge, X can be investigated scientifically, and for any moral intuition M possessed by a human in the real world, there's a high probability that M depends on X such that if X did not exist, M would not exist either. (Which is not to say that X is the sole cause of M, or that two intuitions M1 and M2 can't both derive from X such that M1 and M2 motivate mutually exclusive judgments in certain real-world situation...
Well, an awful lot of what we think of as morality is dictated, ultimately, by game theory. Which is pretty universal, as far as I can tell. Rational-as-in-winning agents will tend to favor tit-for-tat strategies, from which much of morality can be systematically derived.
Objective morality? Yes, in the sense that game theory is objective. No, in the sense that payoff matrices are subjective.
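The tit-for-tat claim can be made concrete with a toy iterated prisoner's dilemma. The payoff numbers below are the standard (T=5, R=3, P=1, S=0) matrix, and all names are illustrative, not from the comment; it is a minimal sketch, not a full tournament:

```python
# Payoff for (my_move, their_move); 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not history else history[-1]

def always_defect(history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    """Run an iterated game; each strategy sees the opponent's past moves."""
    hist_a, hist_b = [], []   # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Mutual tit-for-tat sustains cooperation; against a pure defector it
# loses only the first round, then retaliates.
print(play(tit_for_tat, tit_for_tat))     # (30, 30)
print(play(tit_for_tat, always_defect))   # (9, 14)
```

This is the sense in which the game theory is objective - the strategy dynamics fall out of the payoff structure - while the payoff matrices themselves encode what the agents happen to care about.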
I believe it is possible to scientifically determine whether people generally have many and strong reasons to promote or inhibit certain desires through the use of social tools such as praise, condemnation, reward, and punishment. I also believe that this investigation would make sense of a wealth of moral practices such as the three categories of action (obligation, prohibition, and non-obligatory permission), excuse, the four categories of culpability (intentional, knowing, reckless, negligent), supererogatory action, and - of course - the role of praise, condemnation, reward, and punishment.
I agree with what seems to be the standard viewpoint here: the laws of morality are not written on the fabric of the universe, but human behavior does follow certain trends, and by analyzing these trends we can extract some descriptive rules that could be called morals.
I would find such an analysis interesting, because it'd provide insight into how people work. Personally, though, I'm only interested in what is, and I don't care at all about what "ought to be". In that sense, I suppose I'm a moral nihilist. The LessWrong obsession with develop...
I suspect that there exists an objective morality capable of being investigated, but not using the methods commonly known as science.
What we currently think of as objective knowledge comes from one of two methods:
1) Start with self-evident axioms and apply logical rules of inference. The knowledge obtained from this method is called "mathematics".
2) The method commonly called the "scientific method". Note that, thanks to the problem of induction, the knowledge obtained using this method can never satisfy method 1's criterion for knowled...
I'd be extremely surprised if there turned out to be some Platonic ideal of a moral system that we can compare against. But it seems fairly clear to me that the moral systems we adopt influence factors which can be objectively investigated, i.e. happiness in individuals (however defined) or stability in societies, and that moral systems can be productively thought of as commensurable with each other along these axes. Since some aspects of our emotional responses are almost certainly innate, it also seems clear to me that the observable qualities of moral...
Until a few days ago I would have said I'm a nihilist, even though I didn't know until then that this was the label for someone who doesn't believe that moral statements can be objective facts.
Now I would say a hearty "I don't know" and assign almost a 50:50 chance that there are objective moral "ought" statements.
Then in the last few days I was reminded that 1) scientific "objective facts" are generally dependent on unprovable assumptions, like the utility of induction, and the idea that what a few electrons did last thu...
Objective morality is like real magic - people who want real magic just aren't satisfied with the magic that's real.
Moral relativism all the way. I mean something by morality, but it might not be exactly the same as what you mean.
Of course, moral relativism doesn't single out anything (like changing other people) that you shouldn't do, contrary to occasional usage - it just means you're doing so for your own reasons.
Nor does it mean that humans can't share pretty much all their algorithms for finding goals, due to a common heritage. And this would make humans capable of remarkable agreement about morality. But to call that an objective morality would be stretching it.
The capability to be scientifically investigated is entirely divorced from the existence or discoverability of the thing scientifically investigated. We can devote billions of years and dollars in searching for things that do not and never did and never will exist. If investigation guaranteed existence, I'd investigate a winning lottery ticket in my wallet twice a day.
"Might is Right" is a morality in which there is no distinction between is and ought.
Natural selection favours some moralities more than others. The ones we see are those that thrive. Moral relativism mostly appears to ignore such effects.
I do not like the word "morality". It is very ambiguously defined. The only version of morality that I even remotely agree with is consequentialism/utilitarianism. I can't find compelling arguments for why other people should think this way though, and I think morality ultimately comes down to people arguing about their different terminal values, which is always pointless.
A creator would be most concerned about the continuing existence of protoplasm over the changing situations it would incur over the millennia. For the non-thinking organisms, ecological effects would predominate. Sociological precepts would influence the evolution of apparent morality, then idealized by the religious and the philosophical. A scientific study of morality would involve correlations between anticipated historical outcomes and societal values.
So first of all, that's not what Sam Harris means so stop invoking him.
I'm not sure what you're referring to here, but here's my comment explaining how this relates to Sam Harris.
If you are referring to facts about your brain/mind then your account is subjectivist. Nothing about subjectivism says we can't investigate people's moral beliefs scientifically.
I addressed this previously, explaining that I am using 'objective' and 'subjective' in the common sense way of 'mind-independent' or 'mind-dependent' and explained in what specific way I'm doing that (that is, the proper basis of terminal values, and thus the rational basis for moral judgments, are hard-wired facts of reality that exist prior to, and independent of, the rest of our knowledge and cognition - and that the proper basis of terminal values is not something that is invented later, as a product of, and dependent on, later acquired/invented knowledge and chains of cognition). You just went on insisting that I'm using the terminology wrong purely as a matter of the meaning in technical philosophy.
This discussion is getting rather frustrating because I don't think your beliefs are actually wrong. You're just a) refusing to use or learn standard terminology that can be quickly picked up by glancing at the Stanford Encyclopedia of Philosophy and b) thinking that whether or not we can learn about evolved or programmed utility function-like things is a question related to the whether or not moral realism is true. I'm a very typical moral anti-realist but I still think humans have lots of values in common, that there are scientific ways to learn about those values, and that this is a worthy pursuit.
You do not have to demand, as you've been doing throughout this thread, that I only use words to refer to things that you want them to mean, when I am explicitly disclaiming any intimacy with the terms as they are used in technical philosophy and making a real effort to taboo my words in order to explain what I actually mean. Read the article on Better Disagreement and try to respond to what I'm actually saying instead of trying to argue over definitions.
Now it is the case that if you define morality as "whatever that thing in my brain that tells me what is right and wrong says" there is in some sense an "is from which you can get an ought".
Ok, great. That's kind of what I mean, but it's more complicated than that. What I'm referring to here are actual terminal values written down in reality, which is different from 1) our knowledge of what we think our terminal values are, and 2) our instrumental values, rationally derived from (1), and 3) our faculty for moral intuition, which is not necessarily related to any of the above.
To answer your previous question,
Second of all, give an example of what kind of facts you would refer to in order to decide whether or not murder is immoral.
One must: 1) scientifically investigate the nature of their terminal values, 2) rationally derive their instrumental values as a relation between (1) and the context of their current situation, and 3) either arrive at a general principle or at an answer to the specific instance of murder in question based on (1) and (2), and act accordingly.
But this is not at all what Hume is talking about. Hume is talking about argument and justification. His point is that an argument with only descriptive premises can't take you to a normative conclusion. But note that your "is" potentially differs from individual to individual. I suppose you could use it to justify your own moral beliefs to your self but that does not moral realism make. What you can't do is use it to convince anyone else.
I don't understand why people insist on equating 'objective morality' with something magically universal. We do not have a faculty of divination with which to perceive the Form of the Good existing out there in another dimension. If that's what Hume is arguing against, then his argument is against a straw man as far as I'm concerned. Just because I'm pointing out an idea for an objective morality that differs from individual to individual doesn't make it any less 'objective' or 'real' - unless you're using those terms specifically to mean to some stupid, mystical 'universal morality' - instead of the terms just meaning objective and real in common sense. Trying to find a morality that is universal among all people or all mind designs is impossible (unless you're just looking at stuff like this which could be useful), and if that's what you're doing, or that's what you're taking up a position against, then either you're working on the wrong problem, or you're arguing against a stupid straw man position.
What you can't do is use it to convince anyone else.
For the particular idea I've been putting forward here, people's terminal values relate to one another through the following kinds of ways:
1) Between normal humans there is a lot in common.
2) You could theoretically reach into their brain and mess with the hardware in which their terminal values are encoded.
3) You can still convince and trade based on instrumental values, of course.
4) Humans seem to have terminal values which actually refer to other people, whether it's simply finding value in the perception of another human's face, various kinds of bonding, pleasurable feelings following acts of altruism, etc.
I don't understand why people insist on equating 'objective morality' with something magically universal.
I would not equate it with anything magical or universal. Certainly people have tried to ground morality in natural facts, though it is another question whether or not any has succeeded. And certainly it is logically possible to have a morality that is objective, but relative, though few find that avenue plausible. What proponents of objective morality (save you) all agree about is that moral facts are made true by things other than people's attitud...
Do you believe in an objective morality capable of being scientifically investigated (a la Sam Harris *or others*), or are you a moral nihilist/relativist? There seems to be some division on this point. I would have thought Less Wrong to be well in the former camp.
Edit: There seems to be some confusion - when I say "an objective morality capable of being scientifically investigated (a la Sam Harris *or others*)" - I do NOT mean something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of Good or any such nonsense. I just mean something in reality that's mind-independent - in the sense that it is hard-wired, e.g. by evolution, and thus independent/prior to any later knowledge or cognitive content - and thus can be investigated scientifically. It is a definite "is" from which we can make true "ought" statements relative to that "is". See drethelin's comment and my analysis of Clippy.