I was surprised to see the high number of moral realists on Less Wrong, so I thought I would bring up a (probably unoriginal) point that occurred to me a while ago.

Let's say that all your thoughts either seem factual or fictional. Memories seem factual, stories seem fictional. Dreams seem factual, daydreams seem fictional (though they might seem factual if you're a compulsive fantasizer). Although the things that seem factual match up reasonably well to the things that actually are factual, this isn't the case axiomatically. If deviating from this pattern is adaptive, evolution will select for it. This could result in situations like: the rule that pieces move diagonally in checkers seems fictional, while the rule that you can't kill people seems factual, even though they're both just conventions. (Yes, the rule that you can't kill people is a very good convention, and it makes sense to have heavy default punishments for breaking it. But I don't think it's different in kind from the rule that you must move diagonally in checkers.)

I'm not an expert, but it definitely seems as though this could actually be the case.  Humans are fairly conformist social animals, and it seems plausible that evolution would've selected for taking the rules seriously, even if it meant using the fact-processing system for things that were really just conventions.

Another spin on this: We could see philosophy as the discipline of measuring, collating, and making internally consistent our intuitions on various philosophical issues.  Katja Grace has suggested that the measurement of philosophical intuitions may be corrupted by the desire to signal on the part of the philosophy enthusiasts.  Could evolutionary pressure be an additional source of corruption?  Taking this idea even further, what do our intuitions amount to at all aside from a composite of evolved and encultured notions?  If we're talking about a question of fact, one can overcome evolution/enculturation by improving one's model of the world, performing experiments, etc.  (I was encultured to believe in God by my parents.  God didn't drop proverbial bowling balls from the sky when I prayed for them, so I eventually noticed the contradiction in my model and deconverted.  It wasn't trivial--there was a high degree of enculturation to overcome.)  But if the question has no basis in fact, like the question of whether morals are "real", then genes and enculturation will wholly determine your answer to it.  Right?

Yes, you can think about your moral intuitions, weigh them against each other, and make them internally consistent. But this is kind of like trying to add resolution back into an extremely pixelated photo--just because it's no longer obviously "wrong" doesn't guarantee that it's "right". And there's the possibility of path-dependence--the parts of the photo you try to improve initially could have a very significant effect on the final product. Even if you think you're willing to discard your initial philosophical conclusions, there's still the possibility of accidentally destroying your initial intuitional data or enculturing yourself with your early results.

To avoid this possibility of path-dependence, you could carefully document your initial intuitions, pursue lots of different paths to making them consistent in parallel, and maybe even choose a "best match".  But it's not obvious to me that your initial mix of evolved and encultured values even deserves this preferential treatment.

Currently, I disagree with what seems to be the prevailing view on Less Wrong that achieving a Really Good Consistent Match for our morality is Really Darn Important. I'm not sure that randomness from evolution and enculturation should be treated differently from random factors in the intuition-squaring process. It's randomness all the way through either way, right? The main reason "bad" consistent matches are considered so "bad", I suspect, is that they engender cognitive dissonance (e.g. maybe my current ethics says I should hack Osama Bin Laden to death in his sleep with a knife if I get the chance, but this is an extremely bad match for my evolved/encultured intuitions, so I experience a ton of cognitive dissonance actually doing this). But cognitive dissonance seems to me like just another aversive experience to factor into my utility calculations.

Now that you've read this, maybe your intuition has changed and you're a moral anti-realist.  But in what sense has your intuition "improved" or become more accurate?

I really have zero expertise on any of this, so if you have relevant links please share them.  But also, who's to say that matters?  In what sense could philosophers have "better" philosophical intuition?  The only way I can think of for theirs to be "better" is if they've seen a larger part of the landscape of philosophical questions, and are therefore better equipped to build consistent philosophical models (example).


I was surprised to see the high number of moral realists on Less Wrong

Just a guess, but this may be related to the high number of consequentialists. For any given function U to evaluate consequences (e.g. a utility function) there are facts about which actions maximize that function. Since what a consequentialist thinks of as a "right" action is what maximizes some corresponding U, there are (in the consequentialist's eyes) moral facts about what are the "right" actions.

Similar logic applies to rule consequentialism by the way (there may well be facts of the matter about which moral rules would maximize the utility function if generally adopted).
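
A minimal sketch of the point above, in Python (the action set, the outcomes, and the function U below are all invented for illustration): once an evaluation function over consequences is fixed, which action maximizes it is an ordinary factual question.

```python
# Toy illustration (hypothetical actions and payoffs): given a fixed utility
# function U over outcomes, "which action maximizes U" has a factual answer.
actions = {
    "donate": {"lives_saved": 2, "money_kept": 0},
    "invest": {"lives_saved": 0, "money_kept": 100},
    "split":  {"lives_saved": 1, "money_kept": 50},
}

def U(outcome):
    # An arbitrary consequentialist evaluation of outcomes, for illustration only.
    return 1000 * outcome["lives_saved"] + outcome["money_kept"]

best_action = max(actions, key=lambda a: U(actions[a]))
print(best_action)  # -> "donate": a fact about U, not a further value judgment
```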

That may be true, but I don't think that accounts for what is meant by "moral realism". Yes, it's a confused term with multiple definitions, but it usually means that there is a certain utility function that is normative for everyone -- as in you are morally wrong if you have a different utility function.

I think this is more the distinction between "objectivism" and "subjectivism", rather than between "realism" and "anti-realism".

Let's suppose that different moral agents find they are using different U-functions to evaluate consequences. Each agent describes their U as just "good" (simpliciter) rather than as "good for me" or as "good from my point of view". Each agent is utterly sincere in their ascription. Neither agent has any inconsistency in their functions, or any reflective inconsistency (i.e. neither discovers that under their existing U it would be better for them to adopt some other U' as a function instead). Neither can be persuaded to change their mind, no matter how much additional information is discovered.

In that case, we have a form of moral "subjectivism" - basically each agent has a different concept of good, and their concepts are not reconcilable. Yet for each agent there are genuine facts of the matter about what would maximize their U, so we have a form of moral "realism" as well.

Agree though that the definitions aren't precise, and many people equate "objectivism" with "realism".

Two books on evolutionary selection for moral realism:

A good article on the structure of evolutionary debunking arguments in ethics (sorry, gated):

http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0068.2010.00770.x/abstract

But it's not obvious to me that your initial mix of evolved and encultured values even deserves this preferential treatment.

But your initial mix of evolved and encultured values is all you have to go on. There is no other source of values or intuitions. Even if you decide that you disagree with a value, you're using other evolved or encultured intuitions to decide this. There is literally nothing you can use except these. A person who abandons their religious faith after some thought is using the value "rational thought" against "religious belief." This person was lucky enough to have "rational thought" instilled by someone as a value, and have it be strong enough to beat "religious belief." The only way to change your value system is by using your value system to reflect upon your value system.


The only way to change your value system is by using your value system to reflect upon your value system.

I agree with the message of your post and I up-voted it, but this sentence isn't technically true. Outside forces that aren't dependent on your value system can change your value system too. For example, if you acquire a particular behaviour-altering parasite or ingest substances that alter your hormone mix. This is ignoring things like you losing your memory or Omega deciding to rewire your brain.

Our values are fragile; some see this as a reason to not be too concerned with them. I find this a rationalization similar to the ones used to deal with the fragility of life itself. Value deathism has parallel arguments to deathism.

What some here might call The Superintelligent Will, but I see as a logical outgrowth of der Wille zur Macht, is to stamp your values on an uncaring universe.

What some here might call The Superintelligent Will, but I see as a logical outgrowth of der Wille zur Macht, is to stamp your values on an uncaring universe.

You totally stole that from me!


Yeah I totally did, it fit my previous thinking (was very into Nietzsche a few years back too) and I've been building on it since.

Since this is, I think, the second time you've made a comment like this, I'm wondering why exactly you feel the need to point this out. I mean, surely you realize you've stolen stuff from me too, right? And we both stole loads from a whole bunch of other people. Is this kind of like a bonding fist bump or a call for me to name-drop you more?

Those who read our public exchanges know we are on good terms and that I like your stuff; not sure what more name-dropping would do for you beyond that, especially since this is material from our private email exchanges and not a public article I can link to. If I recall the exchange, the idea was inspired by a one-line reply you made in a long conversation, so it's not exactly something easily quotable either.

What some here might call The Superintelligent Will, but I see as a logical outgrowth of der Wille zur Macht, is to stamp your values on an uncaring universe.

Is this why people like Nietzsche, or do most people who like Nietzsche have different reasons?

Our values are fragile, some see this as a reason to not be too concerned with them.

I think it really depends on the exact value change we're talking about. There's an analogue for death/aging--you'd probably greatly prefer aging another 10 years, then being frozen at that biological age forever, over aging and dying normally. In the same way, I might not consider a small drift in apparently unimportant values too big a deal in the grand scheme of things, and might not choose to spend resources guarding against this (slippery slope scenarios aside).

In practice, people don't seem to be that concerned with guarding against small value changes. They do things like travel to new places, make new friends, read books, change religions, etc., all of which are likely to change what they value, often in unpredictable ways.

But your initial mix of evolved and encultured values is all you have to go on.

I don't think this statement is expressing a factual claim. If it is, hopefully "I could generate values randomly" is a workable counterargument.

It's also not even clear quite what you mean by "initial" mix. My values as a 3-year-old were much different than the ones I have today. My values prior to creating this post were likely different from my values after creating it. Which set of values is the "initial" one that is "all I have to go on"?

Where does inculturation stop and moral reflection begin? Is there any reason to distinguish them? Should we treat growing up in a particular society as an intuition-permutation of a different/preferred sort than happening to have a certain train of philosophical thought early on?

Abdul grew up in an extremist Pakistani village, but on reflection, he's against honor killings. Bruce grew up in England, but on reflection, he's in favor of honor killings. What do you say to each?

I think most LW readers don't see much sacrosanct about evolved values: Some people have added layers of enculturation and reflection that let them justify crazy stuff. (Ex: pretty much every "bad" thing anyone has done ever, if we're to believe that everyone's the hero of their own life story.) So we LWers already enculturated/reflected ourselves to the point where bare-bones "evolved" values would be considered a scary starting point, I suspect.

Infuriation and "righteous anger" are evolved intuitions; I assume most of us are past the point of endorsing "righteous anger" as being righteous/moral.

A person who abandons their religious faith after some thought is using the value "rational thought" against "religious belief." This person was lucky enough to have "rational thought" instilled by someone as a value, and have it be strong enough to beat "religious belief."

Do you consider God's existence to be an "is" factual question or an "ought" values question? I consider it a factual question myself.

I think most LW readers don't see much sacrosanct about evolved values

Maybe because they think about them in far mode. If you think about values as some ancient commandments written on some old parchment, it does not seem like rewriting the parchment could be a problem.

Let's try it in the near mode. Imagine that 1000 years later you are defrosted and see a society optimized for... maximum suffering and torture. It is explained to you that this happened as a result of an experiment to initialize the superhuman AI with random values... and this is what the random generator generated. It will be like this till the end of the universe. Enjoy hell.

What is your reaction to this? Some values were replaced by some other values -- thinking abstractly enough, it seems like nothing essential has changed; we are just optimizing for Y instead of X. Most of the algorithm is the same. Even many of the AI actions are the same: it tries to better understand human psychology and physiology, get more resources, protect itself against failure or sabotage, self-improve, etc.

How could you explain what is wrong with this scenario, without using some of our evolved values in your arguments? Do you think that a pebblesorter, concerned only with sorting pebbles, would see an important difference between "human hell" and "human paradise" scenarios? Do you consider this neutrality of the pebblesorter with regard to human concerns (and the neutrality of humans with regard to pebblesorter concerns) to be a desirable outcome?

(No offense to pebblesorters. If we ever meet them, I hope we can cooperate to create a universe with a lot of happy humans and properly sorted heaps of pebbles.)

How could you explain what is wrong with this scenario, without using some of our evolved values in your arguments?

It's only "wrong" in the sense that I don't want it, i.e. it doesn't accord with my values. I don't see the need to mention the fact that they may have been affected by evolution.

It's also not even clear quite what you mean by "initial" mix. My values as a 3-year-old were much different than the ones I have today. My values prior to creating this post were likely different from my values after creating it. Which set of values is the "initial" one that is "all I have to go on"?

Sorry, I should have been clearer about that. What I mean is that at any particular moment when one reflects upon one's values, one can only use one's current value system to do so. The human value system is dynamic.

Where does inculturation stop and moral reflection begin? Is there any reason to distinguish them?

Like many things in nature, there is no perfectly clear distinction. I generally consider values that I have reflected upon to any degree, especially using my "rational thought" value, to be safe and not dogma.

Do you consider God's existence to be an "is" factual question or an "ought" values question? I consider it a factual question myself.

My "rational thought" value tells me it's an "is" question, but most people seem to consider it a value question.


If it is, hopefully "I could generate values randomly" is a workable counterargument.

But why would you do that if your existing value system wouldn't find that a good idea?

I wouldn't do that. You misunderstood my response. I said that was my response if he was trying to make an empirical assertion.

Here's my take:

The problem with talking about "objective" and "subjective" with respect to ethics (the terms that "realist" and "anti-realist" often get unpacked to) is that they mean different things to people with different intuitions. I don't think there actually is a "what most philosophers mean by the term" for them.

"Objective" either means:

  1. not subjective, or
  2. It exists regardless of whether you believe it exists

"Subjective" either means:

  1. It is different for different people, or
  2. not objective

So, some people go with definition 1, and some go with definition 2. Very few people go with both Objective[2] and Subjective[1] and recognize that they're not negations of one another.

So you have folks who think that different people have somewhat different utility functions, and therefore morality is subjective. And you have folks who think that a person's utility function doesn't go away when you stop believing in it, and therefore morality is objective. That they could both be true isn't considered within the realm of possibility, and folks on "both sides" don't realize they're talking past each other.

I don't get why you think facts and conventions are mutually exclusive. Don't you think it's a fact that the American President's name is Barack Obama?

I think it's a fact that there's a widespread convention of referring to him by that name.

I also think it's a fact that there's a widespread taboo against stealing stuff. I don't think it's a fact that stealing stuff is wrong, unless you're using "wrong" as a shorthand to refer to things that have strong/widespread taboos against them. (Once you use the word this way, an argument about whether stealing is wrong becomes an argument over what taboos prevail in the population--not a traditional argument about ethics exactly, is it? So this usage is nonstandard.)

I don't think it's a fact that stealing stuff is wrong, unless you're using "wrong" as a shorthand to refer to things that have strong/widespread taboos against them.

But you also said that some such widespread conventions/taboos are good conventions. From your OP:

Yes, the rule that you can't kill people is a very good convention, and it makes sense to have heavy default punishments for breaking it.

So, here's a meta-question for you. Do you think it is a fact that "the rule that you can't kill people is a very good convention"? Or was that just a matter of subjective opinion, which you expressed in the form of a factual claim for rhetorical impact? Or is it itself a convention (i.e. we have conventions to call certain things "good" and "bad" in the same way we have conventions to call certain things "right" and "wrong")?

On a related point, notice that certain conventions do create facts. It is a convention that Obama is called president, but also a fact that he is president. It is a convention that dollar bills can be used as money, and a fact that they are a form of money.

Or imagine arguing the following "It is a convention that objects with flat surfaces and four solid legs supporting them are called tables, but that doesn't mean there are any real tables".

Do you think it is a fact that "the rule that you can't kill people is a very good convention"?

It's a fact that it's a good convention for helping to achieve my values. So yeah, "the rule that you can't kill people is a very good convention" is a subjective value claim. I didn't mean to frame it as a factual claim. Any time you see me use the word "good", you can probably interpret it as shorthand for "good according to my values".

It is a convention that Obama is called president, but also a fact that he is president.

The "fact" that Obama is president is only social truth. Obama is president because we decided he is. If no one thought Obama was president, he wouldn't be president anymore.

The only sense in which "Obama is president" is a true fact is if it's shorthand for something like "many people think Obama is president and he has de facto power over the executive branch of the US government". (Or you could use it as shorthand for "Obama is president according to the Supreme Court's interpretation of US laws" or something like that, I guess.)

In medieval times, at one point, there were competing popes. If I said "Clement VII is pope", that would be a malformed factual claim, 'cause it's not clear how to interpret the shorthand (what sensory experiences would we expect if the proposition "Clement VII is pope" is true?). In this case, the shorthand reveals its insufficiency, and you realize that a conventional claim like this only becomes a factual claim when it's paired with a group of people that respects the convention ("Clement VII is considered the pope in France" is a better-formed factual claim, as is "Clement VII is considered the pope everywhere". Only the first is true.). Oftentimes the relevant group is implied and not necessary to state ("Obama is considered US president by 99+% of those who have an opinion on the issue").

People do argue over conventional stuff all the time, but these aren't arguments over anticipation ("My pope is legit, yours is not!"). Some moral arguments ("abortion is murder!") follow the same form.

You seem to be overlooking the fact that facts involving contextual language are facts nonetheless.

The "fact" that Obama is president is only social truth. Obama is president because we decided he is. If no one thought Obama was president, he wouldn't be president anymore.

There is a counterfactual sense in which this holds some weight. I'm not saying I agree with your claim, but I would at least have to give it more consideration before I knew what to conclude.

But that simply isn't the case (& it's a fact that it isn't, of course). Obama's (present) presidency is not contested, and it is a fact that he is President of the United States.

You could try to argue against admitting facts involving any vagueness of language, but you would run into two problems: this is more an issue with language than an issue with facts; and you have already admitted facts about other things.

But if the question has no basis in fact, like the question of whether morals are "real", then genes and enculturation will wholly determine your answer to it. Right?

Conventionally, it's genes, culture and environment. Most conventional definitions of culture don't cover all environmental influences, just those associated with social learning. However, not all learning is social learning. Some would also question the determinism - and pay homage to stochastic forces.


I was surprised to see the high number of moral realists on Less Wrong, so I thought I would bring up a (probably unoriginal) point that occurred to me a while ago.

Surprised? I would say disappointed.

Except when dealing with contrarian Newsome-like weirdness, moral anti-realism doesn't rest on a complicated argument and is basic-level sanity in my opinion. While certainly you can construct intellectual hipster positions in its favour, it is not something half the community should disagree with. The reason I think this is that I suspect most of those who are firmly against it don't know or understand the arguments for it or they are using "moral realism" in a way that is different from how philosophers use it.

Most of the LWers who voted for moral realism probably believe that Eliezer's position about morality is correct, and he says that morality is subjunctively objective. It definitely fits Wikipedia's definition of moral realism:

Moral realism is the meta-ethical view which claims that:

  • Ethical sentences express propositions.
  • Some such propositions are true.
  • Those propositions are made true by objective features of the world, independent of subjective opinion.

To the best of my understanding, "subjunctively objective" means the same thing that "subjective" means in ordinary speech: dependent on something external, and objective once that something is specified. So Eliezer's morality is objective once you specify that it's his morality (or human morality, etc.) and then propositions about it can be true or false. "Turning a person into paperclips is wrong" is an ethical proposition that is Eliezer-true and Human-true and Paperclipper-false, and Eliezer's "subjunctive objective" view is that we should just call that "true".
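
A minimal sketch of that agent-indexed reading (the value systems and the single proposition below are invented purely for illustration): once you specify whose values the sentence is evaluated against, it gets a definite truth value.

```python
# Illustrative only: model each value system as the set of prohibitions it endorses.
value_systems = {
    "Eliezer": {"turning a person into paperclips is wrong"},
    "Human": {"turning a person into paperclips is wrong"},
    "Paperclipper": set(),  # endorses no such prohibition
}

proposition = "turning a person into paperclips is wrong"

for agent, endorsed in value_systems.items():
    # Relative to a specified value system, the truth value is determinate.
    verdict = "true" if proposition in endorsed else "false"
    print(f"{agent}-{verdict}")
# Eliezer-true, Human-true, Paperclipper-false
```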

I disagree with that approach because this is exactly what is called being "subjective" by most people, and so it's misleading. As if the existing confusion over philosophical word games wasn't bad enough.

"Turning a person into paperclips is wrong" is an ethical proposition that is Eliezer-true and Human-true and >Paperclipper-false, and Eliezer's "subjunctive objective" view is that we should just call that "true".

Despite the fact that we might have a bias toward the Human-[x] subset of moral claims, it's important to understand that such a theory does not itself favor one over the other.

It would be like a utilitarian taking into account only his family's moral weights in any calculations, so that a moral position might be Family-true but Strangers-false. It's perfectly coherent to restrict the theory to a subset of its domain (and speaking of domains, it's a bit vacuous to talk of paperclip morality, at least to the best of my knowledge of the extent of their feelings...), but that isn't really what the theory as a whole is about.

So if we as a species were considering assimilation, and the moral evaluation of this came up Human-false but Borg-true, the theory (in principle) is perfectly well equipped to decide which would ultimately be the greater good for all parties involved. It's not simply false just because it's Human-false. (I say this, but I'm unfamiliar with Eliezer's position. If he's biased toward Human-[x] statements, I'd have to disagree.)


I disagree with that approach because this is exactly what is called being "subjective" by most people

Those same people are badly confused, because they usually believe that if ethical propositions are "subjective", it means that the choice between them is arbitrary. This is an incoherent belief. Ethical propositions don't become objective once you specify the agent's values; they were always objective, because we can't even think about an ethical proposition without reference to some set of values. Ethical propositions and values are logically glued together, like theorems and axioms.

You could say that the concept of something being subjective is itself a confusion, and that all propositions are objective.

That said, I share your disdain for philosophical word games. Personally, I think we should do away with words like 'moral' and 'good', and instead only talk about desires and their consequences.

This is why I voted for moral realism. If instead Moral realism is supposed to mean something stronger, then I'm probably not a moral realist.

moral anti-realism doesn't rest on a complicated argument

I've not studied the arguments of moral anti-realism, but if I had to make a guess it would be that moral anti-realism probably rests on how you can't extract "ought" statements from "is" statements.

But since "is" statements can be considered as functions operating on "ought" values (e.g. the is-statement "burning people causes them pain" would produce from an ought-statement "you oughtn't cause pain to people" the more specific ought-statement "you oughtn't burn people alive"), the possibility remains open that there can exist universal moral attractive fixed sets, deriving entirely from such "is" transformations, regardless of the opening person-specific or species-specific moral set, much like any starting shape that follows a specific set of transformations will become the Sierpinski triangle.
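
The Sierpinski analogy can be made concrete with a small chaos-game sketch (illustrative only; whether moral "is"-transformations have an analogous attractor is exactly the open question here): whatever point you start from, iterating the same contractive transformations lands you on the same attractor.

```python
import random

# Chaos game for the Sierpinski triangle: three "jump halfway toward a vertex" maps.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def iterate(point, steps):
    """Repeatedly apply a randomly chosen contraction map to the point."""
    x, y = point
    for _ in range(steps):
        vx, vy = random.choice(VERTICES)
        x, y = (x + vx) / 2, (y + vy) / 2
    return (x, y)

# Wildly different starting points end up on (essentially) the same attractor:
# after enough iterations, the starting point no longer matters.
print(iterate((-37.0, 12.0), 10000))
print(iterate((0.5, 0.3), 10000))
```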

A possible example of a morally "real" position might be, e.g., "You oughtn't decrease everyone's utility in the universe." or "You oughtn't do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn't do."

Baby-eaters and SuperHappies and Humans may not be in agreement about what is best, but all three of them could come up with some ideas about things which would be awful for all of them... I don't think that this need change, no matter how many species with moral instinct one adds to the mix. So I "leaned" towards moral realism.

Of course, if all the above has nothing to do with what moral realism and moral anti-realism mean... oops.

the possibility remains open that there can exist universal moral attractive fixed sets, deriving entirely from such "is" transformations, regardless of the opening person-specific or species-specific moral set,

So you've got these attractive sets and maybe 90% or 99% or 99.9% or 99.99% of humans or humans plus some broader category of conscious/intelligent entities agree. What to do about the exceptions? Pretend they don't exist? Kill them because they are different and then pretend they never existed or couldn't exist? In my opinion, what you have as a fact is that 99.999% of humans agree X is wrong and .001% don't. The question of moral realism is not a factual one, it is a question of choice: do you CHOOSE to declare what 99.999% have an intuition towards as binding on the .001% that don't, or do you CHOOSE to believe that the facts are that the various intuitions have prevalences, some higher than others, some very high indeed, and that's all you actually KNOW?

I effectively feel bound by a lot of my moral intuitions, that is more or less a fact. As near as I can tell, my moral intuitions evolved as part of the social development of animals, then mammals, then primates, then homo. It is rational to assume that the mix of moral intuitions is fairly fine-tuned to optimize the social contribution to our species fitness, and it is more or less a condensation of facts to say that the social contribution to our fitness is larger than the social contribution to any other species on the planet to their fitness.

So I accept that human moral intuition is an organ like the brain or the islets of Langerhans. I accept that a fair amount can be said about how the islets of Langerhans function, and how the brain functions, when things are going well. Also, we know a lot about how the islets of Langerhans and the brain function when things are apparently not going so well; diseases, one might say. I'd even go so far as to say I would prefer to live in a society dominated by people without diabetes and who are not sociopaths (people who seem to lack many common moral intuitions). I'd go so far as to say I would support policies including killing sociopaths and their minions, and including spending only a finite amount of resources on more expensive non-killing ways of dealing with sociopaths and diabetics.

But it is hard for me to accept that it is rational to fall into the system instead of seeing it from outside. For me to conclude that my moral intuitions are objectively real like the charge on an electron or the electronic properties of doped silicon is projection, it seems to me. It is identical to my concluding that one mammal is beautiful and sexy and another is dull, when it is really the triggering of an evolved sexual mechanism in me that paints the one mammal one way and the other the more boring way. If it is more accurate to understand that the fact that I am attracted to one mammal is not because she is objectively more beautiful than another, then it is more accurate to say that the fact that I have a moral intuition is not because I am plugged into some moral fact of the universe, but because of an evolved reaction I have. The fact that most men or many men find woman A beautiful and woman B to be blah doesn't mean that all men ought to find A beautiful and B blah, any more than the fact that many (modern) men feel slavery is wrong means they are not projecting their social construct into a realm of fact which could fruitfully be held to a higher standard.

Indeed, believing that our social constructs, our political truths, are REAL truths is clearly adaptive in a social species. Societies that encourage strong identifications with the values of the society are robust. Societies in which it is right to kill the apostates because they are wrong, evil, have staying power. But my life as a scientist has consisted of my understanding that my wanting something to be true is not ANY evidence for its truth. I bring that to my American humanity. So even though I will support the killing of our enemies, I don't think that it is a FACT that it is right to kill the enemies of America any more than it is a FACT that it is right to kill the enemies of Islam.

So you've got these attractive sets and maybe 90% or 99% or 99.9% or 99.99% of humans or humans plus some broader category of conscious/intelligent entities agree. What to do about the exceptions? Pretend they don't exist?

What does agreement have to do with anything? Anyway, such moral attractive sets either include an injunction about what to do with people who disagree with them or they don't. And even if they do have such moral injunctions, it still doesn't mean that my preferences would necessarily be to follow said injunctions.

People aren't physically forced to follow their moral intuitions now, and they aren't physically forced to follow a universal moral attractive set either.

The question of moral realism is not a factual one

That's what a non moral-realist would say, definitely.

do you CHOOSE to declare what 99.999% have an intuition towards as binding on the .001% that don't

What does 'declaring' have to do with anything? For all I know, this moral attractive set would contain an injunction against people declaring it true or binding. Or it might contain an injunction in favour of such declarations, of course.

I don't think you understood the concepts I was trying to communicate. I suggest you tone down on the outrage.

Moral realism is NOT the idea that you can derive moral imperatives from a mixture of moral imperatives and other non-moral assumptions. Moral realism is NOT the idea that if you study humans you can describe "conventional morality," make extensive lists of things that humans tend, sometimes overwhelmingly, to consider wrong.

Moral realism IS the idea that there are things that are actually wrong.

If you are a moral realist, and you provide a mechanism for listing some moral truths, then you pretty much by definition are wrong, immoral, if you do not align your action with those moral truths.

An empirical determination of what are the moral rules of many societies, or most societies, or the moral rules that all societies so far have had in common is NOT an instantiation of a moral realist theory, UNLESS you assert that the rules you are learning about are real, that it is in fact immoral or evil to break them. If you meant something wildly different by "moral attractive sets" than what is incorporated by the idea of where people tend to come down on morality, then please elucidate; otherwise I think for the most part I am working pretty consistently with the attractive set idea in saying these things.

If you think you can be a "moral realist" without agreeing that it is immoral to break or not follow a moral truth, then we are just talking past each other and we might as well stop.

Moral realism IS the idea that there are things that are actually wrong.

Okay, yes. I agree with that statement.

If you are a moral realist, and you provide a mechanism for listing some moral truths, then you pretty much by definition are wrong, immoral, if you do not align your action with those moral truths.

Well, I guess we can indeed define an "immoral" person as someone who does morally wrong things; though a more useful definition would probably be to define an immoral person as someone who does them more so than average. So?

If you think you can be a "moral realist" without agreeing that it is immoral to break or not follow a moral truth

It's reasonable to define an action as "immoral" if it breaks or doesn't follow a moral truth.

But how in the world are you connecting these definitions to all your earlier implications about pretending dissenters don't exist, or killing them and then pretending they never existed in the first place?

Fine, lots of people do immoral things. Lots of people are immoral. How does this "is" statement, by itself, indicate anything about whether we ought to ignore said people, execute them, or hug and kiss them? It doesn't say anything about how we should treat immoral people, or how we should respond to the immoral actions of others.

I'm the moral realist here, but it's you who seem to be deriving specific "ought" statements from my "is" statements.

The reason I think this is that I suspect most of those who are firmly against it don't know or understand the arguments for it or they are using "moral realism" in a way that is different from how philosophers use it.

This is pretty likely. I spent about a minute trying to determine what the words were actually supposed to mean, then decided that it was pointless, gave up, and refrained from voting on that question. (I did this for a few questions, though I did vote on some, then gave up on the poll.)

I'm not sure that randomness from evolution and enculturation should be treated differently from random factors in the intuition-squaring process. It's randomness all the way through either way, right?

I think this statement is the fulcrum of my disagreement with your argument. You assert that "it's randomness all the way through either way". I disagree; it's not randomness all the way, not at all.

Evolution's mutations and changes are random; evolution's adaptations are not random - they happen in response to the outside world. Furthermore, the mutations and changes that survive aren't random either: they all meet the same criterion, that they didn't hamper survival.
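
A toy illustration of that distinction (all numbers and the fitness criterion below are made up): variation is generated randomly, but which variants persist is filtered by a non-random criterion, so the population ends up somewhere decidedly non-random.

```python
import random

def mutate(value):
    # Variation is random noise.
    return value + random.gauss(0, 0.1)

def fitness(value):
    # The selection criterion is not random; here "closer to 3.0 is better"
    # stands in for "didn't hamper survival".
    return -abs(value - 3.0)

def select(candidates, keep):
    # Keep whichever candidates score best on the criterion.
    return sorted(candidates, key=fitness, reverse=True)[:keep]

population = [random.uniform(-5.0, 5.0) for _ in range(20)]
for _ in range(100):
    offspring = [mutate(random.choice(population)) for _ in range(20)]
    population = select(population + offspring, keep=20)

print(sum(population) / len(population))  # clusters near 3.0 despite random mutation
```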

I believe, then, that developing an internally consistent moral framework can be aided by recognizing the forces that have shaped our intuitions, and deciding whether the direction those forces are taking us is a worthy destination. We don't have to be blind and dumb slaves to Evolution any more. Not really.


In what sense could philosophers have "better" philosophical intuition? The only way I can think of for theirs to be "better" is if they've seen a larger part of the landscape of philosophical questions, and are therefore better equipped to build consistent philosophical models (example).

The problem with this is that the kind of people likely to become philosophers have systematically different intuitions to begin with.

I'm not sure that randomness from evolution and enculturation should be treated differently from random factors in the intuition-squaring process. It's randomness all the way through either way, right?

I fear many readers will confuse this argument for the moral anti-realist argument. The moral anti-realist argument doesn't mean you shouldn't consider your goals superior to those of the pebble sorters or babyeaters, just that if they ran the same process you did to arrive at this conclusion they would likely get a different result. This probably wouldn't happen if you did this with the process used to try and establish say the value of the gravitational constant or the charge of an electron.

This suggests that morality is more like your particular taste in yummy foods and aversion to snakes than the speed of light. It isn't a fact about the universe; it is a fact about particular agents or pseudo-agents.

Of course the pebble sorters or babyeaters or paper-clip maximizing AIs can figure out we have an aversion to snakes and crave salty and sugary food. But them learning this would not result in them sharing our normative judgements except for instrumental purposes in some very constrained scenarios where they are optimal for a wide range of goals.

I fear many readers will confuse this argument for the moral anti-realist argument. The moral anti-realist argument doesn't mean you shouldn't consider your goals superior to those of the pebble sorters or babyeaters, just that if they ran the same process you did to arrive at this conclusion they would likely get a different result.

What is this "moral anti-realist argument"? Every argument against moral realism I've seen boils down to: "there are no universally compelling moral arguments, therefore morality is not objective". Well, as the linked article points out, there are no universally compelling physical arguments either.

This suggests that morality is more like your particular taste in yummy foods and aversion to snakes than the speed of light.

The difference between morality and taste in food is that I'm ok with you believing that chocolate is tasty even if I don't, but I'm not ok with you believing that it's moral to eat babies.

The problem with this is that the kind of people likely to become philosophers have systematically different intuitions to begin with.

Interesting point, but where's the problem?

I fear many readers will confuse this argument for the moral anti-realist argument.

Yep, I kind of wandered around.

I think I agree with the rest of your comment.


Interesting point, but where's the problem?

Reading philosophy as an aid in moral judgement.