mwengler comments on [SEQ RERUN] Is Morality Preference? - Less Wrong

2 Post author: MinibearRex 24 June 2012 04:46AM


Comment author: mwengler 24 June 2012 05:56:47PM -1 points [-]

Is morality 1) a kind of preference with a somewhat different set of emotional flavors associated with it or 2) something which has a true-or-falseness beyond preference?

For me to credit 2) (morality is true), I would need to know that 2) is a statement that is actually distinguishable in the world from 1). Someone tells me electrons attract other electrons; we do a test; it turns out to be false, and "electrons repel other electrons" is a true statement beyond preference. Someone else tells me electrons repel each other because they hate each other. Maybe some day we will learn how to talk to electrons, but until then this is not testable, not tested, and so the people who come down on each side of this question are not talking about truth.

Someone tells me Morality has falsifiable truths in it; where is the experimental test? Name a moral proposition and describe the test to determine its falsehood or truthiness. If a proponent of moral truth has done this, I missed it and need to be reminded.

If you believe that there are moral truths, but you cannot propose a test to verify a single one of them, you are engaged in a form of belief that is different from believing in things which are falsifiable and have been found true. I am happy to label this difference "scientific" or "fact-based," though of course the danger of labels is that they carry freight from their pasts. But however you choose to label it, is there a proponent of the existence of moral truth who can propose a test, or will these proponents accept that moral truth is more like truths about electrons hating each other and less like truths about electrons repelling each other?

Note that in discussing the proposition "electrons hate each other" I actually proposed a test of its truth, but pointed out we did not yet know how to do that test. If we say "we will NEVER know how to do that test, it's just dopey," are we saying something scientific? Something testable? I THINK not; I think this is an unscientific claim. But maybe some chain of scientific discovery will put us in a place where we can test statements about what will NEVER be knowable. I personally do not know how to do that now, though. So if I hold an opinion that electrons neither hate nor love each other, I hold it as an opinion, knowing it might be true, it might be false, and/or it might be meaningless in the real world.

So then what of Moral "Truths?" For the moment, at my state of knowledge, they are like statements about the preferences of electrons. Maybe there are moral truths but I don't know how to learn any of them as facts and I am not aware of anyone who has presented a moral truth and a test for its truthiness. Maybe some day...

But in the meantime, everybody who tells me there are moral truths and especially anybody who tells me "X is one of those moral truths" gets tossed in the dustbin labeled "people who don't know the difference between opinion and truth." Is murder wrong, is that a fact? If by murder you mean killing people, you cannot find a successful major civilization that has EVER appeared to believe that. Self-defense, protection of those labeled "innocent," are observed to justify homicide in societies that I am aware of.

But suppose by murder we mean "unjustifiable homicide"? Well then you are either in tautology land (murder is defined as killing which is wrong) or you have kicked the can down the road to a discussion of what justifies homicide, and now you need to propose tests of your hypotheses about what justifies homicide.

So even if there is "moral truth," if you can't propose a test for any moral truths, you are happily joining the cadre of people who know the truth of whether electrons hate each other or not.

Comment author: [deleted] 24 June 2012 11:18:11PM 3 points [-]

G.E. Moore is famous for this argument against external world skepticism: "How do I know I have hands?" (he raises his hands in front of his face) "Here! Here are my hands!". His point was that it is absurd to call the more obvious into doubt by means of the less obvious: By whatever means I might understand an argument supporting skepticism about my hands (say, the Boltzmann Brain argument), by those very means I am all the more sure that I do have hands.

I think something similar might apply here. To say that morality is 'objective' or 'subjective' may be an equivocation or category mistake, but if I understand anything, I understand that slavery is wrong. I can't falsify this, or reduce it to some more basic principle because there is nothing more basic, and no possible world in which slavery is right. A world in which the alternative is true cannot be tested for because it is wholly inconceivable.

Comment author: gwern 25 June 2012 02:06:45AM 2 points [-]

His point was that it is absurd to call the more obvious into doubt by means of the less obvious: By whatever means I might understand an argument supporting skepticism about my hands (say, the Boltzmann Brain argument), by those very means I am all the more sure that I do have hands.

Further reading: http://en.wikipedia.org/wiki/Here_is_a_hand#Logical_form http://www.overcomingbias.com/2008/01/knowing-your-ar.html http://www.gwern.net/Prediction%20markets#fn41

Comment author: mwengler 25 June 2012 12:07:18AM -1 points [-]

A world in which the alternative is true cannot be tested for because it is wholly inconceivable.

A world in which I do not have hands is totally conceivable and easily tested for. So it would appear you have at least this gigantic difference: "Slavery is wrong" differs from the source statement of your G.E. Moore analogy.

To say that it is obvious that "slavery is wrong" does not rule out this being a statement of preference, does it? I would rather be slowly sucked to orgasm by healthy young females than to have my limbs and torso crushed and ground while immersed in a strong acid. This is AT LEAST as obvious to me as "slavery is wrong" is obvious to you, I would bet, yet it is quite explicitly a statement of preference.

Comment author: [deleted] 25 June 2012 01:45:26PM *  2 points [-]

To say that it is obvious that "slavery is wrong" does not rule out this being a statement of preference, does it?

That's a fair point, but I can easily imagine a world in which I prefer being crushed to death to receiving the attentions of some attractive women, so long as you let me add a little context. Lots of people have chosen painful deaths over long and pleasant lives, and we've rightly praised them for it. So while I agree that the choice you describe is a clear preference of mine, it has none of the strength of my moral belief about slavery.

A world in which I do not have hands is totally conceivable and easily tested for.

That wasn't quite the point. The analogue here wouldn't be between the hands and the moral principle. The analogue is this: how surely do you know this epistemic rule about falsification? Do you know it more surely than you know that slavery is wrong? I for one, am vastly more sure that slavery is wrong than I am that instrumentalism or falsificationism is the correct epistemic theory.

I may be misguided, of course, so I won't say that instrumentalist epistemology can't in principle call my moral idea into question. But it seems absurd to assume that it does.

Comment author: TheOtherDave 25 June 2012 02:04:43PM 1 point [-]

I can easily imagine a world in which I prefer being crushed to death than receiving the attentions of some attractive women, so long as you let me add a little context. [..] So while I agree that the choice you describe is a clear preference of mine, it has none of the strength of my moral belief about slavery.

I understand this to imply that you cannot imagine a world in which you prefer to send someone into slavery than not do so, no matter what the context. Have I understood that correctly?

Comment author: [deleted] 25 June 2012 04:36:14PM 1 point [-]

Have I understood that correctly?

No, I can easily imagine a world in which I prefer to send someone into slavery than drink a drop of lemon juice: all I have to do is imagine that I'm a bad person. My point was that it's easy to imagine any world in which my preferences are different, but I cannot imagine a world in which slavery is morally permissible (at least not without radically changing what slavery means).

Comment author: mwengler 25 June 2012 06:21:18PM 1 point [-]

How about a world in which, by sending one person from your planet into slavery, you defer the enslavement of the entire earth for 140 years? A world in which alien invaders outgun us more than Europeans outgunned the African tribes from which many of them took slaves, but are willing, for some reason we can't comprehend, to take one person you pick back to their home planet, 70 light years away. Failing your making that choice, they will stay here and at some expense to themselves enslave our entire race and planet.

Can you now imagine a world in which your sending someone into slavery is not immoral? If so, how does this change in what you can and cannot imagine change your opinion of either the imagination standard or slavery's moral status?

It seems to me the most likely source of emotions and feelings is evolution. We aren't just evolved to run from a sabre-tooth tiger; we have a rush of overwhelming fear as the instrumentality of our fleeing effectively. Similarly, we have evolved, mammals as a whole, not just humans nor even just primates, to be "social" animals, meaning a tremendously important part of the environment was our group of other mammals. Long before we made the argument that slavery was wrong, we had strong feelings of wanting to resist the things that went along with being enslaved, while apparently we also had feelings of power that assisted us in forcing others to do what we wanted.

Given the way emotions probably evolved, I think it does make sense to look to our emotions to guide us in knowing what strategies probably work better than others in interacting with our environment, but it doesn't make sense to expect them to guide us correctly in corner cases, in rare situations in which there would have not been enough pay-off to have evolution do any fine tuning of emotional responses.

Comment author: TheOtherDave 25 June 2012 04:44:25PM 0 points [-]

Ah, OK. Thanks for the clarification.

Can you imagine a world in which killing people is morally permissible?

Comment author: [deleted] 25 June 2012 05:10:32PM 1 point [-]

Can you imagine a world in which killing people is morally permissible?

Sure, I live in one. I chose slavery because it's a pretty unequivocal case of moral badness, while killing is not, as in war, self-defense, execution, etc. I think probably rape, and certainly lying, are things which are always morally wrong (I don't think this entails that one should never do them, however).

My thought is just that at least at the core of them, moral beliefs aren't subject to having been otherwise. I guess this is true of beliefs about logic too, though maybe not for the same reasons. And this doesn't make either kind of belief immune to error, of course.

Comment author: TheOtherDave 25 June 2012 05:27:39PM 0 points [-]

OK. Thanks for clarifying.

Comment author: mwengler 25 June 2012 05:58:46PM 0 points [-]

I think your point about falsification is a good one. I in fact believe in falsifiability, in some powerful sense of the word believe. I suspect a positive belief in falsifiability is at least weakly falsifiable. With time and resources one could look for correlations between belief in falsifiability and various forms of creativity and understanding. I would expect to find it highly correlated with engineering, scientific, and mathematical understanding and progress.

Of course "proving" falsifiability by using falsifiability is circular. In my own mind I fall back on instrumentalism: I claim I'm interested in learning falsifiable things about the world and don't care whether we call them "true" or not, and don't care whether you call other non-falsifiable statements true or not; I'm interested in falsifiable ones. Behind or above that belief is my belief that I really want power, I want to be able to do things, and that it is the falsifiable statements only that allow me to manipulate the environment effectively: since non-falsifiable statements almost by definition don't help me in manipulating the world in which I would be trying to falsify them.

Is a statement like "Slavery is wrong" falsifiable? Or even "Enslaving this particular child in this particular circumstance is wrong"? I think they are not "nakedly" falsifiable, and in fact I have zero problem imagining a world in which at least some people do not think they are wrong (we live in that world). I think the statement "Slavery is wrong because it reduces average happiness" is falsifiable. "Slavery is wrong because it misallocates human resources" is falsifiable. These reflect instrumentalist THEORIES of morality, theories which it does not seem could themselves be falsifiable.

So I have an assumption of falsifiability. You may have an assumption of what is moral. I admit the symmetry.

I can tell you the "I can't imagine it" test fails in epic fashion in science. One of the great thrills of special relativity and quantum mechanics is that they are so wildly non-intuitive for humans, and yet they are so powerfully instrumentally true in understanding absolute reams of phenomena, allowing us to correctly design communications satellites and transistors, to name just two useful instrumentalities. So I suppose my belief against "I can't imagine it" as a useful way to learn the truth is a not-necessarily-logical extension of a powerful truth from one domain that I respect powerfully into other domains.

Further, I CAN imagine a world in which slavery is moral. I can go two ways to imagine this: 1) mostly we don't mind enslaving those who are not "people." Are herds of cattle raised for food immoral? Is it unimaginable that they are moral? Well, if you can't imagine that that is moral, what about cultivated fields of wheat? Human life in human bodies ends if we stop exploiting other life forms for nutrition. Sure, you can "draw the line" at chordates for whether cultivating a crop is "slavery" or not. Other people have drawn the line at clan members, family members, nation members, skin-color members. I'm sure there were many white slave holders in the southern U.S. who could not imagine a world in which enslaving white people was moral. Or enslaving British people. Or enslaving British aristocracy. So how far do you go to be sure you are not enslaving anything that shouldn't be enslaved? Or do you trust your imagination that it is only people (or only chordates), even as you realize how powerfully other people's imaginations have failed in the past?

I also reject all religious truth based on passed-down stories of direct revelations from god. Again, this kind of belief fails epically in doing science, and I extend its failure there into domains where perhaps it is not so easy to show it fails. And in my instrumentalist soul, I ultimately don't care whether I am "right" or "wrong"; I would just rather use my limited time, energy, and brain-FLOPs pursuing falsifiable truths, and hope for the best.

Comment author: shminux 24 June 2012 10:05:18PM -1 points [-]

Someone tells me Morality has falsifiable truths in it, where is the experimental test?

You are describing instrumentalism, which is an unpopular position on this forum, where most follow EY's realism. For a realist, untestable questions have answers, justified on the basis of their preferred notion of Occam's razor.

If you believe that there are moral truths, but you cannot propose a test to verify a single one of them, you are engaged in a form of belief that is different from believing in things which are falsifiable and have been found true.

Replace "moral truth" with "many worlds", and you get EY's understanding of QM.

Comment author: TimS 25 June 2012 01:59:09PM *  0 points [-]

How did instrumentalism and realism get identified as conflicting positions? There are forms of physical realism that conflict with instrumentalism - but instrumentalism is not inherently opposed to physical realism.

Comment author: shminux 25 June 2012 05:03:25PM -1 points [-]

Not inherently, no. But the distinction is whether the notion of territory is a map (instrumentalism) or the territory (realism). It does not matter most of the time, but sometimes, like when discussing morality or quantum mechanics, it does.

Comment author: TimS 25 June 2012 05:43:57PM 1 point [-]

I don't understand. Can you give an example?

Comment author: shminux 25 June 2012 07:53:37PM *  -1 points [-]

A realist finds it perfectly OK to argue which of the many identical maps is "truer" to the invisible underlying territory. An instrumentalist simply notes that there is no way to resolve this question to everyone's satisfaction.

Comment author: TimS 25 June 2012 08:03:52PM *  1 point [-]

I'm objecting to your exclusion of instrumentalism from the realist label. An anti-realist says there is no territory. That's not necessarily the position of the instrumentalist.

Comment author: shminux 25 June 2012 08:14:41PM *  -1 points [-]

An anti-realist says there is no territory. That's not necessarily the position of the instrumentalist.

Right. Anti-realism makes an untestable and unprovable statement like this (so does anti-theism, by the way). An instrumentalist says that there is no way to tell if there is one, and that the map/territory distinction is an often useful model, so why not use it when it makes sense.

I'm objecting to your exclusion of instrumentalism from the realist label.

Well, this is an argument about labels, definitions and identities, which is rarely productive. You can either postulate that there is this territory/reality thing independent of what anyone thinks about it, or you can call it a model which works better in some cases and worse in others. I don't really care what label you assign to each position.

Comment author: TimS 25 June 2012 08:42:32PM 2 points [-]

Respectfully, you were the one invoking technical jargon to do some analytical work.

Without jargon: I think there is physical reality external to human minds. I think that the best science can do is make better predictions - accurately describing reality is harder.

You suggest there is unresolvable tension between those positions.

Comment author: shminux 25 June 2012 09:49:42PM *  -2 points [-]

I think there is physical reality external to human minds.

It's a useful model, yes.

I think that the best science can do is make better predictions - accurately describing reality is harder.

The assumption that "accurately describing reality" is even possible is a bad model, because you can never tell if you are done. And if it is not possible, then there is no point postulating this reality thing. Might as well avoid it and stick with something that is indisputable: it is possible to build successively better models.

You suggest there is unresolvable tension between those positions.

Yes, one of them postulates something that cannot be tested. If you are into Occam's razor, that's something that fails it.

Comment author: mwengler 25 June 2012 03:33:11AM 0 points [-]

Concerns with confusing the map with the territory are extensively discussed on this forum. If it walks like a duck and quacks like a duck, is it not instrumentalism?

Comment author: shminux 25 June 2012 07:09:13AM *  -2 points [-]

The difference is whether you believe that even though it walks like a duck and quacks like a duck, it could be in fact a well-designed mechanical emulation of a duck indistinguishable from an organic duck, and then prefer the former model, because Occam's razor!

Comment author: mwengler 25 June 2012 06:27:11PM 0 points [-]

Occam's razor is a strategy for being a more effective instrumentalist. It may or may not be elevated to some other status, but this is at least one powerful draw that it has. Do not infer robot ducks when regular ducks will do; do not waste your efforts (instrumentality!) designing for robot ducks when your only evidence so far (razor) is ducks. Or even more compactly, in your belief: whether these ducks are "real" or "emulations," only design for what you actually know about these ducks, not for something that takes a lot of untested assumptions about the ducks.

Do not spend a lot of time filling in the details of unreachable lands on your map.

Comment author: shminux 25 June 2012 07:50:42PM -1 points [-]

Do not spend a lot of time filling in the details of unreachable lands on your map.

Yep. Also, do not argue which of the many identical maps is better.

Comment author: mwengler 24 June 2012 11:56:19PM 0 points [-]

If you accept as "true" some statements that are not testable, and other statements that are testable, then perhaps we just have a labeling problem? We would have "true-and-I-can-prove-it" and "true-and-I-can't-prove-it." I'd be surprised if, given those two categories, there would be many people who wouldn't elevate the testable statements above the untestable ones in "truthiness."

Comment author: TheOtherDave 25 June 2012 01:07:58AM 1 point [-]

Is this different from having higher confidence in statements for which I have more evidence?

Comment author: mwengler 25 June 2012 03:43:21AM 0 points [-]

For me, if it is truly, knowably, not falsifiable, then there is no evidence for it that matters. Many things that are called not falsifiable are probably falsifiable eventually. So with MWI: do we know QM so well that we know there are no implications of MWI that are experimentally distinguishable from non-MWI theories? Something like MWI, for me, is something which probably is falsifiable at some level; I just don't know how to falsify it right now, and I am not aware of anybody that I trust who does know how to falsify it. Then the "argument" over MWI is really an argument over whether developing falsifiable theories from a story that includes MWI is more or less likely to be efficiently productive than developing falsifiable theories from a story that rejects MWI. We are arguing over the quality of intuitions years before the falsification or verification can actually take place. Much as we spend a lot of effort anticipating the implications of AI which is not even close to being built.

I actually think the discussions of MWI are useful, as someone who does participate in forming theories and opinions about theories. I just think it is NOT a discussion about scientific truth, or at least not yet it isn't. It is not an argument over which horse won the last race; rather it is an argument over what kinds of horses will be running a race a few years from now, and which ones will win those races.

But yes, more evidence means more confidence which I think is entirely consistent with the map/territory/bayesian approach generally credited around here.

Comment author: shminux 25 June 2012 12:01:42AM -1 points [-]

We would have "true-and-I-can-prove-it" and "true-and-I-can't-prove-it."

The definition of proof is the issue. An instrumentalist requires falsifiable predictions, a realist settles for acceptable logic when no predictions are available.

Comment author: Eugine_Nier 25 June 2012 07:24:52AM -2 points [-]

The definition of proof is the issue. An instrumentalist requires falsifiable predictions, a realist settles for acceptable logic when no predictions are available.

A rationalist (in the original sense of the word) would go even further requiring a logical proof, and not accepting a mere prediction as a substitute.

Comment author: Eugine_Nier 25 June 2012 06:39:11AM *  0 points [-]

We would have "true-and-I-can-prove-it" and "true-and-I-can't-prove-it." I'd be surprised if given those two categories there would be many people who wouldn't elevate the testable statements above the untestable one in "truthiness."

Where would mathematical statements fit in this classification of yours? They can be proven, but many of them can't be tested, and even for the ones that can be tested, the proof is generally considered better evidence than the test.

In fact, you are implicitly relying on a large untested (and mostly untestable) framework to describe the relationship between whatever sense input constitutes the result of one of your tests, and the proposition being tested.

Comment author: TimS 25 June 2012 02:03:50PM 1 point [-]

There's another category, necessary truths. The deductive inferences from premises are not susceptible to disproof.

Thus, the categories for this theory of truthful statements are: necessary truths, empirical truths ("i-can-prove-it"), and "truth-and-i-can't-prove-it."

Generally, this categorization scheme will put most contentious moral assertions into the third category.

Comment author: Eugine_Nier 26 June 2012 06:11:16AM 1 point [-]

Agreed, except for your non-conventional use of the word "prove", which is normally restricted to things in the first category.

Comment author: buybuydandavis 24 June 2012 10:14:00PM 0 points [-]

Is morality 1) a kind of preference with a somewhat different set of emotional flavors associated with it or 2) something which has a true-or-falseness beyond preference?

The trick Objective moralists play is to set truth against preference, when what you have are truths about preferences. Is it true, for you, that ice cream is yummy? For me, it is. That doesn't make it any less a preference.

Comment author: pragmatist 25 June 2012 02:29:18AM *  -2 points [-]

Here's a simple example of a moral claim being tested and falsified:

A: What are you doing with that gun?

B: I'm shooting at this barrel. It's a lot of fun.

A: What? Don't do that! It's wrong.

B: No, it's not. There's nothing wrong with shooting at a barrel.

A: But there's a child inside that barrel! You could kill her.

B: You don't know what you're talking about. That barrel's empty. Go look.

A: [looks in the barrel] Oh, you're right. Sorry about that.

So here A made a moral claim with which B disagreed ("It's wrong to shoot at that barrel."). B proposed a test of the moral claim. A performed the test and the moral claim was falsified.

Now, I anticipate a number of objections to the adequacy of this example. I think they can all be answered, but instead of trying to predict how you will object or tediously listing all the objections I can think of, I'll just wait for you to object (if you so desire) before responding.

So even if there is "moral truth," if you can't propose a test for for any moral truths, you are happily joining the cadre of people who know the truth of whether electrons hate each other or not.

I'm already part of this cadre. I know that electrons do not hate each other.

Comment author: mwengler 25 June 2012 03:21:06AM 1 point [-]

Without the assumption that shooting a child is immoral, this is not a moral argument. With that as an assumption, the moral component of the conclusion is assumed, not proven.

Find me the proof that shooting a child is immoral and we will be off to a good start.

Comment author: pragmatist 25 June 2012 04:39:18AM *  2 points [-]

If you're looking for a test of a moral claim that does not rely on any background assumptions about morality, then I agree that I can't give you an example. But that's because your standard is way too high. When we test scientific hypotheses, the evidence is always interpreted in the context of background assumptions. If it's kosher for scientific experiments to assume certain scientific facts (as they must), then why isn't it kosher for moral experiments to assume certain moral facts?

Consider the analog of your position in the descriptive case: someone denies that there's any fact of the matter about whether descriptive claims about the external world are true or false. This person says, "Show me how you'd test whether a descriptive claim is true or false." Now you could presumably give all sorts of examples of such tests, but all of these examples will assume the truth of a host of other descriptive claims (minimally, that the experimental apparatus actually exists and the whole experiment isn't a hallucination). If your interlocutor insisted that you give an example of a test that does not itself assume the truth of any descriptive claim, you would not be able to satisfy him.

So why demand that I must give an example of a test of a moral claim that does not assume the truth of any other moral claim? Why does the moral realist have this extra justificatory burden that the scientific realist does not? It's fine to have problems with the specific assumptions being made in any particular experiment, and these can be discussed. Perhaps you think my particular assumptions are flawed for some reason. But if you have a general worry about all moral assumptions then you need to tell me why you don't have a similar worry about non-normative assumptions. If you have a principled reason for this distinction, that would be the real basis of your moral anti-realism, not this business about untestability.

Comment author: TimS 25 June 2012 02:06:22PM *  0 points [-]

Your position suggests that one cannot consistently be a physical realist and a moral anti-realist. Is that a fair summary of your position?

Comment author: pragmatist 25 June 2012 05:38:13PM *  1 point [-]

I do think moral and scientific reasoning are far less asymmetric than is usually assumed. But that doesn't mean I think there are no asymmetries at all. Asymmetries exist, and perhaps they can be leveraged into an argument for moral anti-realism that is not also an argument against scientific realism. So I wouldn't say it's inconsistent to be a physical realist and a moral anti-realist. I will say that in my experience most people who hold that combination of positions will, upon interrogation, reveal an unjustified (but not necessarily unjustifiable) double standard in the way they treat moral discourse.

Comment author: TimS 25 June 2012 05:56:41PM 3 points [-]

I don't think it is a double standard. Empiricism admits the Problem of Induction, but says that the problem doesn't justify retreating all the way to Cartesian skepticism. This position is supported by the fact that science makes good predictions - I would find the regularity of my sensory experiences surprising if physical realism were false. Plus, the principle of falsification (i.e. making beliefs pay rent) tells us what sorts of statements are worth paying attention to.

Moral reasoning seems to lack any equivalent for either falsification or prediction. I don't know what it means to try to falsify a statement like "Killings in these circumstances are not morally permissible." And to the extent that predictions can be made based on the statement, they seem either false or historically contingent - it's pretty easy to imagine my society having different rules about what killings are morally permissible simply by looking at how a nearby society came to its different conclusions.

In short, the problem of induction in empiricism seems very parallel to the is/ought problem in moral philosophy. But moral philosophy seems to lack the equivalent of practical arguments like accurate prediction that seem to rescue empiricism.

Comment author: pragmatist 25 June 2012 06:37:19PM *  1 point [-]

I do think one can offer a pragmatic justification for moral reasoning. It won't be exactly parallel to the justification of scientific reasoning because moral and scientific discourse aren't in the same business. Part of the double standard I was talking about involves applying scientific standards of evaluation to determine the success of moral reasoning. This is as much of an error as claiming that relativity is false because the nuclear bomb caused so much suffering. We don't engage in moral reasoning in order to make accurate predictions about sensory experience. We engage in moral reasoning in order to direct action in such a way that our social environment becomes a better place. And I do think we have plenty of historical evidence that our particular system of moral reasoning has contributed to making the world a better place, just as our particular system of scientific reasoning has contributed to our increasing ability to control and predict the behavior of the world.

Now obviously there's a circularity here. Our standards for judging that the world is better now that slavery is illegal and women can vote are internal to the very moral discourse we purport to be evaluating. But this kind of ultimate circularity is unavoidable when we attempt to justify any system of justification as a whole. It's precisely the problem Hume pointed out when he talked about induction. Sure, we can appeal to past success as a justification of our inductive practices, but that justification only works if we are already committed to induction. Furthermore, our belief in the past success of the scientific method is based on historical data collected and interpreted in accord with this method. Somebody who rejects the scientific method wholesale may well say "Why should I believe any of these historical claims you are making?"

A completely transcendental justification, one that would be normative to any possible mind in mindspace, is an impossible goal in both moral and scientific reasoning. Any justification you offer for your justificatory practices is ultimately going to appeal to standards that are internal to those practices. That's something we've all learned to live with in science, but there's still a resistance to this unfortunate fact when it comes to moral discourse.

And to the extent that predictions can be made based on the statement, they seem either false or historically contingent - it's pretty easy to imagine my society having different rules about what killings are morally permissible simply by looking at how a nearby society came to its different conclusions.

Our scientific schemes of justification are historically contingent in the same way. There are a number of other communities (extremely religious ones, for instance) that employ a different set of tools for justifying descriptive claims about the universe. Of course, our schemes of justification are better than theirs, as evidenced by their comparative lack of technological and predictive success. By the same token, though, our moral schemes of justification are more successful than those of, say, fundamentalist Islamic societies, as evidenced by our greater degree of moral progress. In both cases, the members of those other societies would disagree that we have done better than them, but that's because they have different (and I would say incorrect) standards of evaluation.

Comment author: TimS 25 June 2012 08:34:45PM 0 points [-]

Yes, there's inherently a certain amount of unsatisfying circularity in everything. But that's a weakness that calls for minimizing circularity.

Empiricism has only one circularly justified position: You can (more or less) trust the input of your senses - which implies some consistency over time. Everything else follows from that. Modern science is better than Ptolemaic science because it makes better predictions.

By contrast, there's essentially no limit to moral circularity. There's the realism premise: There is a part of the territory called "moral rightness". Then you need a circular argument to show any particular moral premise (these killings are unjustified) is part of moral rightness. And there are multiple independent moral premises. (Knowing when killing is wrong does not shed much light on when lying is wrong.) It's not even clear that there are a finite number of circularly justified assertions.

So I hold empiricism to the same standard as moral realism, and moral realism seems to come up short. Further, my Minimization of Circular Justification principle is justified by worry about the ease of creating a result simply by making it an axiom. (That is, the Pythagorean Theorem is on a different footing if it is introduced as an axiom of Euclidean geometry rather than as a derived result.)

Comment author: pragmatist 25 June 2012 09:10:07PM *  1 point [-]

If your principle is actually that circular justification must be minimized, then why aren't you an anti-realist about both scientific and moral claims? Surely that would involve less circular justification than your current position. You wouldn't even have to commit yourself to the one circularly justified position assumed by empiricism.

In any case, scientific reasoning as a whole does not just reduce to the sort of minimal empiricism you describe. For starters, even if you assume that the input of your senses is trustworthy and will continue to remain trustworthy, this does not establish that induction based on the input of your senses is trustworthy. This is a separate assumption you must make. Your minimal empiricism also does not establish that simpler explanations of data tend to be better. This is a third assumption. It also doesn't establish what it means for one explanation to be simpler than another. It doesn't establish that the axioms on which the mathematical and statistical tools of science are based are true. I could go on.

Scientific justification as it's actually practiced in the lab involves a huge suite of tools, and it is not true that the reliability of all these tools can be derived once you accept that you can trust the input of your senses. A person can be an empiricist in your sense while denying the reliability of statistical methods used in science, for instance. To convince them otherwise you will presumably present data that you think establishes the reliability of those methods. But in order for the data to deliver this conclusion, you need to use the same sorts of statistical methods that the skeptic is rejecting. I don't see how your shared empiricism helps in this situation.

Our schemes of justification, both scientific and moral, have developed through a prolonged process of evolutionary and historical accretion. The specific historical reasons underlying the acceptance of particular tools into the toolbox are complex and variegated. It is implausible in either case that we could reconstruct the entire scheme from one or two simple assumptions.