shminux comments on [SEQ RERUN] Is Morality Preference? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Is morality 1) a kind of preference with a somewhat different set of emotional flavors associated with it or 2) something which has a true-or-falseness beyond preference?
For me to credit 2) (Morality is true), I would need to know that 2) is a statement that is actually distinguishable in the world from 1). Someone tells me electrons attract other electrons, we do a test, turns out to be false, electrons repel other electrons is a true statement beyond preference. Someone else tells me electrons repel each other because they hate each other. Maybe some day we will learn how to talk to electrons, but until then this is not testable, not tested, and so the people who come down on each side of this question are not talking about truth.
Someone tells me Morality has falsifiable truths in it; where is the experimental test? Name a moral proposition and describe the test to determine its falsehood or truthiness. If the proponents of moral truth did this, I missed it and need to be reminded.
If you believe that there are moral truths, but you cannot propose a test to verify a single one of them, you are engaged in a form of belief that is different from believing in things which are falsifiable and have been found true. I am happy to label this difference "scientific" or "fact-based," but of course the danger of labels is they carry freight from their pasts. However you choose to label it, is there a proponent of the existence of moral truth who can propose a test, or will these proponents accept that moral truth is more like truths about electrons hating each other and less like truths about electrons repelling each other?
Note that in discussing the proposition "electrons hate each other" I actually proposed a test of its truth, but pointed out we did not yet know how to do that test. If we say "we will NEVER know how to do that test, it's just dopey," are we saying something scientific? Something testable? I THINK not; I think this is an unscientific claim. But maybe some chain of scientific discovery will put us in a place where we can test statements about what will NEVER be knowable. I personally do not know how to do that now, though. So if I hold an opinion that electrons neither hate nor love each other, I hold it as an opinion, knowing it might be true, it might be false, and/or it might be meaningless in the real world.
So then what of Moral "Truths?" For the moment, at my state of knowledge, they are like statements about the preferences of electrons. Maybe there are moral truths but I don't know how to learn any of them as facts and I am not aware of anyone who has presented a moral truth and a test for its truthiness. Maybe some day...
But in the meantime, everybody who tells me there are moral truths, and especially anybody who tells me "X is one of those moral truths," gets tossed in the dustbin labeled "people who don't know the difference between opinion and truth." Is murder wrong; is that a fact? If by murder you mean killing people, you cannot find a successful major civilization that has EVER appeared to believe that. Self-defense and protection of those labeled "innocent" are observed to justify homicide in every society that I am aware of.
But suppose by murder we mean "unjustifiable homicide"? Well then you are either in tautology land (murder is defined as killing which is wrong) or you have kicked the can down the road to a discussion of what justifies homicide, and now you need to propose tests of your hypotheses about what justifies homicide.
So even if there is "moral truth," if you can't propose a test for any moral truths, you are happily joining the cadre of people who know the truth of whether electrons hate each other or not.
You are describing instrumentalism, which is an unpopular position on this forum, where most follow EY's realism. For a realist, untestable questions have answers, justified on the basis of their preferred notion of Occam's razor.
Replace "moral truth" with "many worlds", and you get EY's understanding of QM.
How did instrumentalism and realism get identified as conflicting positions? There are forms of physical realism that conflict with instrumentalism, but instrumentalism is not inherently opposed to physical realism.
Not inherently, no. But the distinction is whether the notion of territory is a map (instrumentalism) or the territory (realism). It does not matter most of the time, but sometimes, like when discussing morality or quantum mechanics, it does.
I don't understand. Can you give an example?
A realist finds it perfectly OK to argue which of the many identical maps is "truer" to the invisible underlying territory. An instrumentalist simply notes that there is no way to resolve this question to everyone's satisfaction.
I'm objecting to your exclusion of instrumentalism from the realist label. An anti-realist says there is no territory. That's not necessarily the position of the instrumentalist.
Right. Anti-realism makes an untestable and unprovable statement like this (so does anti-theism, by the way). An instrumentalist says that there is no way to tell if there is a territory, and that the map/territory distinction is an often useful model, so why not use it when it makes sense.
Well, this is an argument about labels, definitions and identities, which is rarely productive. You can either postulate that there is this territory/reality thing independent of what anyone thinks about it, or you can call it a model which works better in some cases and worse in others. I don't really care what label you assign to each position.
Respectfully, you were the one invoking technical jargon to do some analytical work.
Without jargon: I think there is physical reality external to human minds. I think that the best science can do is make better predictions - accurately describing reality is harder.
You suggest there is unresolvable tension between those positions.
It's a useful model, yes.
The assumption that "accurately describing reality" is even possible is a bad model, because you can never tell if you are done. And if it is not possible, then there is no point postulating this reality thing. Might as well avoid it and stick with something that is indisputable: it is possible to build successively better models.
Yes, one of them postulates something that cannot be tested. If you are into Occam's razor, that's something that fails it.
Concerns with confusing the map with the territory are extensively discussed on this forum. If it walks like a duck and quacks like a duck, is it not instrumentalism?
The difference is whether you believe that even though it walks like a duck and quacks like a duck, it could be in fact a well-designed mechanical emulation of a duck indistinguishable from an organic duck, and then prefer the former model, because Occam's razor!
Occam's razor is a strategy for being a more effective instrumentalist. It may or may not be elevated to some other status, but this is at least one powerful draw that it has. Do not infer robot ducks when regular ducks will do; do not waste your efforts (instrumentality!) designing for robot ducks when your only evidence so far (razor) is ducks. Or even more compactly, in your terms: whether these ducks are "real" or "emulations," only design for what you actually know about these ducks, not for something that takes a lot of untested assumptions to presume about the ducks.
Do not spend a lot of time filling in the details of unreachable lands on your map.
Yep. Also, do not argue which of the many identical maps is better.
If you accept as "true" some statements that are not testable, and other statements that are testable, then perhaps we just have a labeling problem? We would have "true-and-I-can-prove-it" and "true-and-I-can't-prove-it." I'd be surprised if, given those two categories, there would be many people who wouldn't elevate the testable statements above the untestable ones in "truthiness."
Is this different from having higher confidence in statements for which I have more evidence?
For me, if it is truly, knowably, not falsifiable, then there is no evidence for it that matters. But many things that are called not falsifiable are probably falsifiable eventually. Take MWI: do we know QM so well that we know there are no implications of MWI that are experimentally distinguishable from non-MWI theories? Something like MWI, for me, is probably falsifiable at some level; I just don't know how to falsify it right now, and I am not aware of anybody I trust who does. Then the "argument" over MWI is really an argument over whether developing falsifiable theories from a story that includes MWI is more or less likely to be efficiently productive than developing falsifiable theories from a story that rejects MWI. We are arguing over the quality of intuitions years before the falsification or verification can actually take place, much as we spend a lot of effort anticipating the implications of AI which is not even close to being built.
I actually think the discussions of MWI are useful, as someone who does participate in forming theories and opinions about theories. I just think it is NOT a discussion about scientific truth, or at least not yet. It is not an argument over which horse won the last race; rather it is an argument over what kinds of horses will be running a race a few years from now, and which ones will win those races.
But yes, more evidence means more confidence, which I think is entirely consistent with the map/territory/Bayesian approach generally credited around here.
The definition of proof is the issue. An instrumentalist requires falsifiable predictions, a realist settles for acceptable logic when no predictions are available.
A rationalist (in the original sense of the word) would go even further, requiring a logical proof and not accepting a mere prediction as a substitute.
Where would mathematical statements fit in this classification of yours? They can be proven, but many of them can't be tested, and even for the ones that can be tested, the proof is generally considered better evidence than the test.
In fact, you are implicitly relying on a large untested (and mostly untestable) framework to describe the relationship between whatever sense input constitutes the result of one of your tests, and the proposition being tested.
There's another category, necessary truths. The deductive inferences from premises are not susceptible to disproof.
Thus, the categories for this theory of truthful statements are: necessary truths, empirical truths ("true-and-I-can-prove-it"), and "true-and-I-can't-prove-it."
Generally, this categorization scheme will put most contentious moral assertions into the third category.
Agreed, except for your non-conventional use of the word "prove," which is normally restricted to things in the first category.