BobTheBob comments on Conceptual Analysis and Moral Theory - Less Wrong
Some thoughts on this and related LW discussions. They come a bit late - apologies to you and commentators if they've already been addressed or made in the commentary:
1) Definitions (this is a biggie).
There is a fair bit of confusion on LW, it seems to me, about just what definitions are and what their relevance is to philosophical and other discussion. Here's my understanding - please say if you think I've gone wrong.
If in the course of philosophical discussion, I explicitly define a familiar term, my aim in doing so is to remove the term from debate - I fix the value of a variable to restrict the problem. It'd be good to find a real example here, but I'm not convinced defining terms happens very often in philosophical or other debate. By way of a contrived example, one might want to consider, in evaluating some theory, the moral implications of actions made under duress (a gun held to the head) but not physically initiated by an external agent (a jostle to the arm). One might say, "Define 'coerced action' to mean any action not physically initiated but made under duress" (or more precise words to the effect). This done, it wouldn't make sense simply to object that my conclusion regarding coerced actions doesn't apply to someone physically pushed from behind - I have stipulated for the sake of argument that I'm not talking about such cases. (In this post, you distinguish stipulation and definition - do you have in mind a distinction I'm glossing over?)
Contrast this to the usual case for conceptual analyses, where it's assumed there's a shared concept ('good', 'right', 'possible', 'knows', etc), and what is produced is meant to be a set of necessary and sufficient conditions meant to capture the concept. Such an analysis is not a definition. Regarding such analyses, typically one can point to a particular thing and say, eg, "Our shared concept includes this specimen, it lacks a necessary condition, therefore your analysis is mistaken" - or, maybe "Intuitively, this specimen falls under our concept, it lacks...". Such a response works only if there is broad agreement that the specimen falls under the concept. Usually this works out to be the case.
I haven't read the Jackson book, so please do correct me if you think I've misunderstood, but I take it something like this is his point in the paragraphs you quote. Tom and Jack can define 'right action' to mean whatever they want it to. In so doing, however, we cease to have any reason to think they mean by the term what we intuitively do. Rather, Jackson is observing, what Tom and Jack should be doing is saying that rightness is that thing (whatever exactly it is) which our folk concepts roughly converge on, and taking up the task of refining our understanding from there - no defining involved.
You say,
Well, not quite. The point I take it is rather that there simply are 'folk' platitudes which pick-out the meanings of moral terms - this is the starting point. 'Killing people for fun is wrong', 'Helping elderly ladies across the street is right' etc, etc. These are the data (moral intuitions, as usually understood). If this isn't the case, there isn't even a subject to discuss. Either way, it has nothing to do with definitions.
Confusion about definitions is evident in the quote from the post you link to. To re-quote:
Possibly the problem is that 'sound' has two meanings, and the disputants each are failing to see that the other means something different. Definitions are not relevant here, meanings are. (Gratuitous digression: what is "an auditory experience in a brain"? If this means something entirely characterizable in terms of neural events, end of story, then plausibly one of the disputants would say this does not capture what he means by 'sound' - what he means is subjective and ineffable, something neural events aren't. He might go on to wonder whether that subjective, ineffable thing, given that it is apparently created by the supposedly mind-independent event of the falling of a tree, has any existence apart from his self (not to be confused with his brain!). I'm not defending this view, just saying that what's offered is not a response but rather a simple begging of the question against it. End of digression.)
2) In your opening section you produce an example meant to show conceptual analysis is silly. Looks to me more like a silly attempt at an example of conceptual analysis. If you really want to make your case, why not take a real example of a philosophical argument - preferably one widely held in high regard, at least by philosophers? There's lots of 'em around.
3) In your section The trouble with conceptual analysis, you finally explain,
As explained above, philosophical discussion is not about "which definition to use" - it's about (roughly, and among other things) clarifying our concepts. The task is difficult but worthwhile because the concepts in question are important but subtle.
If you don't have the patience to do philosophy, or you don't think it's of any value, by all means do something else - argue about facts and anticipations, whatever precisely that may involve. Just don't think that in doing this latter thing you'll address the question philosophy is interested in, or that you've said anything at all so far to show philosophy isn't worth doing. In this connection, one of the real benefits of doing philosophy is that it encourages precision and attention to detail in thinking. You say Eliezer Yudkowsky "...advises against reading mainstream philosophy because he thinks it will 'teach very bad habits of thought that will lead people to be unable to do real work.'" The original quote continues, "...assume naturalism! Move on! NEXT!" Unfortunately Eliezer has a bad habit of making unclear and undefended or question-begging assertions, and this is one of them. What are the bad habits, and how does philosophy encourage them? And what precisely is meant by 'naturalism'? To make the latter assertion and simultaneously to eschew the responsibility of articulating what this commits you to is to presume you can both have your cake and eat it too. This may work in blog posts - it wouldn't pass in serious discussion.
(Unlike some on this blog, I have not slavishly pored through Eliezer's every post. If there is somewhere a serious discussion of the meaning of 'naturalism' which shows how the usual problems with normative concepts like 'rational' can successfully be navigated, I will withdraw this remark).
Upvoted for thoughtfulness and thoroughness.
I'm using 'definition' in the common sense: "the formal statement of the meaning or significance of a word, phrase, etc." A stipulative definition is a kind of definition "in which a new or currently-existing term is given a specific meaning for the purposes of argument or discussion in a given context."
A conceptual analysis of a term using necessary and sufficient conditions is another type of definition, in the common sense of 'definition' given above. Normally, a conceptual analysis seeks to arrive at a "formal statement of the meaning or significance of a word, phrase, etc." in terms of necessary and sufficient conditions.
Using my dictionary usage of the term 'define', I would speak (in my language) of conceptual analysis as a particular way of defining a term, since the end result of a conceptual analysis is meant to be a "formal statement of the meaning or significance of a word, phrase, etc."
I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn't want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).
And I do think my opening offers an accurate example of conceptual analysis. Albert and Barry's arguments about the computer microphone and hypothetical aliens are meant to argue about their intuitive concepts of 'sound', and what set of necessary and sufficient conditions they might converge upon. That's standard conceptual analysis method.
The reason this process looks silly to us (when using a non-standard example like 'sound') is that it is so unproductive. Why think Albert and Barry have the same concept in mind? Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other's due to the specifics of unconscious associative learning and attribute substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we'll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning? And, let's say we arrive at a messy set of 6 necessary and sufficient conditions for the intuitive meaning of the term. Is that going to be as useful for communication as one we consciously chose because it carved up thingspace well? I doubt it. The IAU's definition of 'planet' is more useful than the messy 'folk' definition of 'planet'. Folk intuitions about 'planet' evolved over thousands of years and different people have different intuitions which may not always converge. In 2006, the IAU used modern astronomical knowledge to carve up thingspace in a more useful and informed way than our intuitions do.
Vague, intuitively-defined concepts are useful enough for daily conversation in many cases, and wherever they break down due to divergent intuitions and uses, we can just switch to stipulation/tabooing.
Yes. I'm going to argue about facts and anticipations. I've tried to show (a bit) in this post and in this comment about why doing (certain kinds of) conceptual analysis aren't worth it. I'm curious to hear your answers to my many-questions paragraph about the use of conceptual analysis, above.
I've skipped responding to many parts of your comment because I wanted to 'get on the same page' about a few things first. Please re-raise any issues you'd like a response on.
You are surely right that there is no point in arguing over definitions in at least one sense - especially the definition of "definition". Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. I hope the following engages constructively with your comments.
Suppose
Scenario (1): Albert and Barry agree on the standard definition of 'subcompact' - a car is a subcompact just in case 2,407 L < car volume < 2,803 L - but they disagree as to the volume of X. Clearly a factual disagreement.
Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of 'subcompact' (a visit to Wikipedia would resolve the matter). This is a disagreement about standard definitions, and isn't anything people should engage in for long, I agree.
Scenario (3): Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn't be classified as subcompact - i.e., X isn't really subcompact, notwithstanding the received definition. This doesn't have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural - if vague - groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter - a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of 'subcompact car'. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
Argument in scenarios 1 and 2 is futile - there is an acknowledged objective answer, and a way to get it - the way to resolve the matter is to measure or to look it up. Arguments as in scenario 3, though, can be useful - especially with less arbitrary concepts than in the example. The goal in such cases is to clarify - to rationalize - concepts. Even if you don't arrive at an uncontroversial end point, you often learn a lot about the concepts ('good', 'knowledge', 'desires', etc) in the process. Your example of the re-definition of 'planet' fits this model, I think.
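The contrast between the scenarios can be made concrete. Below is a minimal sketch (all function names and car volumes are my own hypothetical illustration, not anything from the discussion): scenarios 1 and 2 just apply the received definition mechanically, while Barry's move in scenario 3 amounts to looking for a natural gap in the distribution of model volumes and proposing the boundary there instead.

```python
# A car is a subcompact, by the received (stipulated) definition,
# iff its volume falls strictly between 2,407 L and 2,803 L.
def is_subcompact(volume_l: float) -> bool:
    """Scenarios 1 and 2: apply the received definition mechanically."""
    return 2407 < volume_l < 2803

def natural_cutoff(volumes: list[float]) -> float:
    """Scenario 3 (crudely): place the boundary at the midpoint of the
    widest gap between sorted volumes - a one-dimensional two-group split."""
    v = sorted(volumes)
    # Pair each gap with its index; max() picks the widest gap.
    widest, i = max((v[j + 1] - v[j], j) for j in range(len(v) - 1))
    return (v[i] + v[i + 1]) / 2

# Hypothetical model volumes in litres: a small-car cluster and a big-car cluster.
volumes = [2300, 2350, 2380, 2420, 2900, 2950, 3000]
print(natural_cutoff(volumes))  # prints 2660.0 - inside the wide gap
```

On this toy data the "natural" boundary (2,660 L) disagrees with the received upper bound (2,803 L), which is just Barry's position: X could satisfy the received definition while falling outside the natural grouping.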
This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don't typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis. This is how I see it, anyway. I'd be interested to know if this seems wrong.
You may think it's obvious, but I don't see you've shown any of these 3 examples is silly. I don't see that Schroeder's project is silly (I haven't read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples - even far-fetched - helps illuminate the concept - helps us think about what a desire - and hence in part a rational agent - is.
As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet - we intuitively agree - do not know them. The key point is that effectively everyone shares the intuition - that's why the paper was so successful, and this is often how these debates go. Part of what's interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit - conceptual analysis - is elusive.
I objected to your example because I didn't see how anyone could have an intuition based on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments - not all published arguments are top-drawer stuff (but can Cog Sci, eg, make this boast?). One target of much abuse is John Searle's Chinese Room argument. His argument is multiply flawed, as far as I'm concerned - could get into that another time. But I still think it's interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.
This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language - whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of 'planet' demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that it really does result in progress - we really do come to a better understanding of things.
As I see it, your central point is that conceptual analysis is useful because it results in a particular kind of process: the clarification of our intuitive concepts. Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU's definition for 'planet', I fail to see why clarifying our intuitive concepts is a good use of all that brain power. Such work might theoretically have some value for the psychology of concepts and for linguistics, and yet I suspect neither science would miss philosophy if philosophy went away. Indeed, scientific psychology is often said to have 'debunked' conceptual analysis because concepts are not processed in our brains in terms of necessary and sufficient conditions.
But I'm not sure I'm reading you correctly. Why do you think it's useful to devote all that brainpower to clarifying our intuitive concepts of things?
OTOH, there is a class of fallacies (the No True Scotsman argument, tendentious redefinition, etc.), which are based on getting stipulative definitions wrong. Getting them right means formalisation of intuition or common usage or something like that.
I think that where we differ is on 'intuitive concepts' - what I would want to call just 'concepts'. I don't see that stipulative definitions replace them. Scenario (3), and even the IAU's definition, illustrate this. It is coherent for an astronomer to argue that the IAU's definition is mistaken. This implies that she has a more basic concept - which she would strive to make explicit in arguing her case - different than the IAU's. For her to succeed in making her case - which is imaginable - people would have to agree with her, in which case we would have at least partially to share her concept. The IAU's definition tries to make explicit our shared concept - and to some extent legislates, admittedly - but it is a different sort of animal than what we typically use in making judgements.
Philosophy doesn't impact non-philosophical activities often, but when it does the impact is often quite big. Some examples: the influence of Mach on Einstein, of Rousseau and others on the French and American revolutions, Mill on the emancipation of women and freedom of speech, Adam Smith's influence on economic thinking.
I consider though that the clarification is an end in itself. This site proves - what's obvious anyway - that philosophical questions naturally have a grip on thinking people. People usually suppose the answer to any given philosophical question to be self-evident, but equally we typically disagree about what the obvious answer is. Philosophy is about elucidating those disagreements.
Keeping people busy with activities which don't turn the planet into more non-biodegradable consumer durables is fine by me. More productivity would not necessarily be a good thing (...to end with a sweeping undefended assertion).
To point people to some additional references on conceptual analysis in philosophy: Audi's (1983, p. 90) "rough characterization" of conceptual analysis is, I think, standard: "Let us simply construe it as an attempt to provide an illuminating set of necessary and sufficient conditions for the (correct) application of a concept."
Or, Ramsey's (1992) take on conceptual analysis: "philosophers propose and reject definitions for a given abstract concept by thinking hard about intuitive instances of the concept and trying to determine what their essential properties might be."
Sandin (2006) gives an example:
This is precisely what Albert and Barry are doing with regard to 'sound'.
Audi (1983). The Applications of Conceptual Analysis. Metaphilosophy 14: 87-106.
Ramsey (1992). Prototypes and Conceptual Analysis. Topoi, 11: 59-70.
Sandin (2006). Has psychology debunked conceptual analysis? Metaphilosophy, 37: 26-33.
Eliezer does have a post in which he talks about doing what you call conceptual analysis more-or-less as you describe and why it's worthwhile. Unfortunately, since that's just one somewhat obscure post whereas he talks about tabooing words in many of his posts, when LWrongers encounter conceptual analysis, their cached thought is to say "taboo your words" and dismiss the whole analysis as useless.
The 'taboo X' reply does seem overused. It is something that is sometimes best to just ignore when you don't think it aids in conveying the point you were making.
When I try that, I tend to get down-votes and replies complaining that I'm not responding to their arguments.
I don't know the specific details of the instances in question. One thing I am sure about, however, is that people can't downvote comments that you don't make. Sometimes a thread is just a lost cause. Once things get polarized it often makes no difference at all what you say. Which is not to say I am always wise enough to steer clear of arguments. Merely that I am wise enough to notice when I do make that mistake. ;)
I do not think that he is describing conceptual analysis. Starting with a word vs. starting with a set of objects makes all the difference.
In the example he does start with a word, namely 'art', then uses our intuition to get a set of examples. This is more-or-less how conceptual analysis works.
But he's not analyzing "art", he's analyzing the set of examples, and that is all the difference.
I disagree. Suppose after proposing a definition of art based on the listed examples, someone produced another example that clearly satisfied our intuitions of what constituted art but didn't satisfy the definition. Would Eliezer:
a) say "sorry despite our intuitions that example isn't art by definition", or
b) conclude that the example was art and there was a problem with the definition?
I'm guessing (b).
He's not trying to define art in accord with our collective intuitions, he's trying to find the simplest boundary around a list of examples based on an individual's intuitions.
I would argue that the list of examples in the article is abbreviated for simplicity. If there is no single clear simple boundary between the two sets, one can always ask for more examples. But one asks an individual and not all of humanity.
I would argue he's trying to find the simplest coherent extrapolation of our intuitions.
Why do we even care about what specifically Eliezer Yudkowsky was trying to do in that post? Isn't "is it more helpful to try to find the simplest boundary around a list or the simplest coherent extrapolation of intuitions?" a much better question?
Focus on what matters, work on actually solving problems instead of trying to just win arguments.
The answer to your question is "it depends on the situation". There are some situations in which our intuitions contain some useful, hidden information which we can extract with this method. There are some situations in which our intuitions differ and it makes sense to consider a bunch of separate lists.
But, regardless, it is simply the case that when Eliezer says
"Perhaps you come to me with a long list of the things that you call "art" and "not art""
and
"It feels intuitive to me to draw this boundary, but I don't know why - can you find me an intension that matches this extension? Can you give me a simple description of this boundary?"
he is not talking about "our intuitions", but a single list provided by a single person.
(It is also the case that I would rather talk about that than whatever useless thing I would instead be doing with my time.)
Eliezer's point in that post was that there are more and less natural ways to "carve reality at the joints." That however much we might say that a definition is just a matter of preference, there are useful definitions and less useful ones. The conceptual analysis lukeprog is talking about does call for the rationalist taboo, in my opinion, but simply arguing about which definition is more useful as Eliezer does (if we limit conceptual analysis to that) does not.
You're tacitly defining philosophy as an endeavor that "doesn't involve facts or anticipations," that is, as something not worth doing in the most literal sense. Such "philosophy" would be a field defined to be useless for guiding one's actions. Anything that is useless for guiding my actions is, well, useless.
The question of what is worth doing is of course profoundly philosophical. You have just assumed an answer: that what is worth doing is achieving your aims efficiently, and what is not worth doing is thinking about whether you have good aims, or which different aims you should have. (And anything that influences your goals will most certainly influence your expected experiences.)
We've been over this: either "good aims" and "aims you should have" imply some kind of objective value judgment, which is incoherent, or they merely imply ways to achieve my final aims more efficiently, and we are back to my claim above as that is included under the umbrella of "guiding my actions."
You say that objective values are incoherent, but you offer no argument for it. Presenting philosophical claims without justification isn't something different to philosophy, or something better. It isn't good rationality either. Rationality is as rationality does.
By incoherent I simply mean "I don't know how to interpret the words." So far no one seems to want to help me do that, so I can only await a coherent definition of objective ethics and related terms. Then possibly an argument could start. (But this is all like deja vu from the recent metaethics threads.)
Can you interpret the word "morality is subjective"? How about the the words "morality is not subjective"?
"Morality is subjective": Each person has their own moral sentiments.
"Morality is not subjective": Each person does not have their own moral sentiments. Or there is something more than each person's moral sentiments that is worth calling "moral." <--- But I ask, what is that "something more"?
OK. That is not what "subjective" means. What it means is that if something is subjective, an opinion is guaranteed to be correct or the last word on the matter just because it is the person's opinion. And "objective" therefore means that it is possible for someone to be wrong in their opinion.
I don't claim moral sentiments are correct, but simply that a person's moral sentiment is their moral sentiment. They feel some emotions, and that's all I know. You are seeming to say there is some way those emotions can be correct or incorrect, but in what sense? Or probably a clearer way to ask the question is, "What disadvantage can I anticipate if my emotions are incorrect?"
I think Peterdjones's answer hits it on the head. I understand you've thrashed-out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify.
Really I meant to be throwing the ball back to lukeprog to give us an idea of what the 'arguing about facts and anticipations' alternative is, if not just philosophy pretending not to be. I could have been more clear about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post on anticipations would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.
Hmm. It sounds to me like a kind of methodological twist on logical positivism... just don't bother with things that don't have empirical consequences.
As far as objective value, I simply don't understand what anyone means by the term. And I think lukeprog's point could be summed up as, "Trying to figure out how each discussant is defining their terms is not really 'doing philosophy'; it's just the groundwork necessary for people not to talk past each other."
As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can't figure out what anticipations X entails, I will just respond, "So what?"
To unite the two themes: The ultimate definition would tell me why to care.
Objective truth is what you should believe even if you don't. Objective values are the values you should have even if you have different values.
Where the groundwork is about 90% of the job...
That has been answered several times. You are assuming that instrumental value is ultimate value, and it isn't.
Imagine you are arguing with someone who doesn't "get" rationality. If they believe in instrumental values, you can persuade they they should care about rationality because it will enable them to achieve their aims. If they don't, you can't. Even good arguments will fail to work on some people.
You should care about morality because it is morality. Morality defines (the ultimate kind of) "should".
"What I should do" =def "what is moral".
Not everyone does get that, which is why "don't care" is "made to care" by various sanctions.
"Should" for what purpose?
I certainly agree there. The question is whether it is more useful to assign the label "philosophy" to groundwork+theory or just the theory. A third possibility is that doing enough groundwork will make it clear to all discussants that there are no (or almost no) actual theories in what is now called "philosophy," only groundwork, meaning we would all be in agreement and there is nothing to argue except definitions.
I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims. It seems you're saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I'm wrong). This is what makes me curious about why you think I would care. The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it's all semantic confusion, and because I don't want to sound dismissive or obstinate in continuing to say, "So what?"
Believing in truth is what rational people do.
Which is good because...?
Correct.
I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn't care about truth at all, the process probably isn't going to work.
I think that horse has bolted. Inasmuch as you don't care about truth per se, you have advertised yourself as being irrational.
Winning is what rational people do. We can go back and forth like this.
It benefits me, because I enjoy helping people. See, I can say, "So what?" in response to "You're wrong." Then you say, "You're still wrong." And I walk away feeling none the worse. Usually when someone claims I am wrong I take it seriously, but only because I know how it could ever possibly affect me negatively. In this case you are saying it is different, and I can safely walk away with no terror ever to befall me for "being wrong."
Sure, people usually argue whether something is "true or false" because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc. As this is almost always the case, it is correspondingly unusual for someone to say they don't care about something being true or false. But in a situation where, ex hypothesi, the thing being discussed - very unusually - is claimed to not have any effect on such things, "true" and "false" become pointless labels. I only ever use such labels because they can help me enjoy life more. When they can't, I will happily discard them.
What they generally mean is "not subjective". You might object that non-subjective value is contradictory, but that is not the same as objecting that it is incomprehensible, since one has to understand the meanings of individual terms to see a contradiction.
As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.
I'm not saying non-subjective value is contradictory, just that I don't know what it could mean. To me "value" is a verb, and the noun form is just a nominalization of the verb, like the noun "taste" is a nominalization of the verb "taste." Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music, etc. I didn't understand what she meant either.
But before I would even want to revise my aims and goals, I'd have to anticipate something different than I do now. What does "some of your beliefs may be wrong by objective standards" make me anticipate that would motivate me to change my goals? (This is the same as the question in the other comment: What penalty do I suffer by having the "wrong" moral sentiments?)
I don't see the force to that argument. "Believe" is a verb and "belief" is a nominalisation. But beliefs can be objectively right or wrong -- if they belong to the appropriate subject area.
It is possible for aesthetics (and various other things) to be un-objectifiable whilst morality (and various other things) is objectifiable.
Why?
You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It's not an ultimate. But morality is an ultimate because there is no more important value than a moral value.
If there is no personal gain from morality, that doesn't mean you shouldn't be moral. You should be moral by the definition of "moral" and "should". It's an analytical truth. It is for selfishness to justify itself in the face of morality, not vice versa.
First of all, I should disclose that I don't ultimately find any kind of objectivism coherent, including "objective reality." It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably. In the end, nothing else matters to me (nor, I expect, anyone else - if they understand what I'm getting at here).
So you disagree with EY about making beliefs pay rent? Like, maybe some beliefs don't pay rent but are still important? I just don't see how that makes sense.
This seems circular.
What if I say, "So what?"
In the space of all possible meta-ethics, some meta-ethics are cooperative and others are not. This means that if you can choose which meta-ethics to spread to society, you stand a better chance of achieving your own goals if you spread cooperative meta-ethics. And cooperative meta-ethics is what we call "morality", by and large.
It's "Do unto others...", but abstracted a bit, so that we really mean "Use the reasoning to determine what to do unto others, that you would rather they used when deciding how to do unto you."
Omega puts you in a room with a big red button. "Press this button and you get ten dollars, but another person will be poisoned to slowly die. If you don't press it I punch you on the nose and you get no money. They have a similar button which they can use to kill you and get ten dollars. You can't communicate with them. In fact they think they're the only person being given the option of a button, so this problem isn't exactly like the Prisoner's Dilemma. They don't even know you exist or that their own life is at stake."
"But here's the offer I'm making just to you, not them. I can imprint you both with the decision theory of your choice, Amanojack; ofcourse if you identify yourself in your decision theory, they'll be identifying themself.
"Careful though: This is a one time offer, and then I may put both of you to further different tests. So choose the decision theory that you want both of you to have, and make it abstract enough to help you survive, regardless of specific circumstances."
Given the above scenario, you'll end up wanting people to choose protecting the life of strangers over picking ten dollars.
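The scenario above can be sketched as a tiny payoff calculation. This is only an illustration: the utility numbers for death, a punch, and ten dollars are my assumptions, not anything specified by Omega.

```python
# A toy model of Omega's symmetric button game. The utility values below
# are assumptions for illustration only (any numbers with death << punch
# << cash would make the same point).

DEATH = -1000   # utility of being poisoned by the other player's button
PUNCH = -1      # utility of a punch on the nose
CASH = 10       # utility of ten dollars

def outcome(you_press: bool, they_press: bool) -> int:
    """Your total utility given both players' choices."""
    utility = CASH if you_press else PUNCH
    if they_press:          # their button kills *you*
        utility += DEATH
    return utility

# Omega imprints the SAME decision theory on both players, so whichever
# rule you pick, both follow it. Compare the two universalized rules:
for rule in (True, False):
    print("press" if rule else "refrain", "->", outcome(rule, rule))
# Universalized refraining (a punch, utility -1) beats universalized
# pressing (10 - 1000 = -990), so you choose the cooperative rule.
```

The point carried by the numbers: because your choice of decision theory is also the other player's, the "selfishly best" move of pressing evaluates terribly once universalized.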
Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well, because the gains from cooperation are remote, and there are many realistic instances where selfish action produces immediate rewards (cf. the Prudent Predator objection to Rand's egoistic ethics).
OTOH, since many people are selfish, they are made to care by having legal and social sanctions against excessively selfish behaviour.
I wasn't talking about altruistic ethics, which can lead someone to sacrifice their life to prevent someone else getting a bruise, and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics - which overlaps with but doesn't equal altruism, just as it overlaps with but doesn't equal selfishness.
The difference between morality and immorality, is that morality can at its most abstract possible level be cooperative, and immorality can't.
This by itself isn't a reason that can force someone to care -- you can't make a rock care about anything, but that's not a problem with your argument. But it's something that leads to different expectations about the world, namely what Amanojack was asking for.
In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world whose beliefs don't approach it, I expect more war and other devastation.
Although it usually doesn't.
I think that your version of altruism is a straw man, and that what most people mean by altruism isn't very different from cooperation.
Or, as I call it, universalisability.
That argument doesn't have to be made at all. Morality can stand as a refutation of the claim that anticipation of experience is of ultimate importance. And it can be made differently: if you rejig your values, you can expect to anticipate different experiences -- it can be a self-fulfilling prophecy and not merely passive anticipation.
There is an argument from self interest, but it is tertiary to the two arguments I mentioned above.
I would indeed prefer it if other people had certain moral sentiments. I don't think I ever suggested otherwise.
Not quite my point. I'm not talking about what your preferences would be. That would be subjective, personal. I'm talking about what everyone's meta-ethical preferences would be, if self-consistent, and abstracted enough.
My argument is essentially that objective morality can be considered the position in meta-ethical-space which if occupied by all agents would lead to the maximization of utility.
That makes it objectively (because it refers to all the agents, not some of them, or one of them) different from other points in meta-ethical-space, and so it can be considered to lead to an objectively better morality.
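The argument in the two paragraphs above can be sketched as a search over candidate rules. The model is my own choice for illustration: "points in meta-ethical space" are reduced to probabilities of cooperating in a one-shot Prisoner's Dilemma with standard payoffs, and we ask which point does best when every agent occupies it.

```python
# A minimal sketch, assuming a standard one-shot Prisoner's Dilemma as a
# stand-in for "meta-ethical space" (my modelling choice, not the
# commenter's). Each candidate rule is a probability of cooperating.

PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def universal_payoff(p_cooperate: float) -> float:
    """Expected per-agent payoff when EVERY agent cooperates with this
    probability, i.e. when the whole population occupies the same point."""
    p, q = p_cooperate, 1 - p_cooperate
    return (p * p * PAYOFF[("C", "C")] + p * q * PAYOFF[("C", "D")]
            + q * p * PAYOFF[("D", "C")] + q * q * PAYOFF[("D", "D")])

candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
best = max(candidates, key=universal_payoff)
print(best, universal_payoff(best))
```

Under universal adoption, full cooperation (p = 1.0, payoff 3 per agent) beats every mixed or defecting rule, even though defection pays more for any single agent holding the others fixed - which is the objectivity claim being made: the comparison quantifies over all agents at once, not over any one agent's preferences.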
Then why not just call it "universal morality"?
Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:
- though the apparent tension in being a solipsist who argues gets to the root of the issue.
For what it may be worth:
I'm assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values - you can't fit them in, hence no way to understand them.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie, from a scientific point of view) is not 'trying' to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, punkt.
Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (eg, Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions - has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc. Values can't be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Thought of from this point of view, all values are in some sense objective - ie, independent of you. There has to be a gap between value and actual behaviour for the value to be made sense of as such (if everything you do is right, there is no right).
Presently you are disagreeing with me about values. To me this says you think there's a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren't, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf Dennett's Intentional Stance.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
For some value of "incoherent". Personally, I find it useful to strike out the word and replace it with something more precise, such as "semantically meaningless", "contradictory", "self-undermining", etc.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can't derive an ought from an is, and that this is what's at stake here. Since you can't make sense of a person as rational if it's not the case there's anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality. Now, if we're talking about the social sciences, that's another matter. There is a discontinuity between these and the purely natural sciences. I read Dennett many years ago, and thought something like this divide is what his different stances are about, but I'd be open to hear a different view.
I didn't say this - just that from a purely scientific point of view, morality is invisible. From an engaged, subjective point of view, where morality is visible, natural facts are relevant.
Here's another stab at it: natural science can in principle tell us everything there is to know about a person's inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, 'I ought to go to class' in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?). To get reasons - not to mention linguistic meaning and any intentional states - you need a subjective - ie, non-scientific - point of view. The two views are incommensurable, but neither is dispensable - people need reasons.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can't derive an ought from an is, and that this is what's at stake here.
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking "rational". Generally speaking, where you have rules, you have coulds and shoulds and couldn'ts and shouldn'ts. I have been trying to press that unpacking morality leads to the similar analytical truth: "a moral agent ought to adopt universalisable goals."
"Oughts" in general appear wherever you have rules, which are often abstractly defined so that they apply to physal systems as well as anything else.
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
I don't see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
I take the position that while we may well have evolved with different values, they wouldn't be morality. "Morality" is subjunctively objective. Nothing to do with natural facts, except insofar as they give us clues about what values we in fact did evolve with.
How do you know that the values we have evolved with are moral? (The claim that natural facts are relevant to moral reasoning is different to the claim that naturally evolved behavioural instincts are ipso facto moral.)
Solipsism is an ontological stance: in short, "there is nothing out there but my own mind." I am saying something slightly different: "To speak of there being something/nothing out there is meaningless to me unless I can see why to care." Then again, I'd say this is tautological/obvious in that "meaning" just is "why it matters to me."
My "position" (really a meta-position about philosophical positions) is just that language obscures what is going on. It may take a while to make this clear, but if we continue I'm sure it will be.
I'm not a naturalist. I'm not skeptical of "objective" because of such reasons; I am skeptical of it merely because I don't know what the word refers to (unless it means something like "in accordance with consensus"). In the end, I engage in intellectual discourse in order to win, be happier, get what I want, get pleasure, maximize my utility, or whatever you'll call it (I mean them all synonymously).
If after engaging in such discourse I am not able to do that, I will eventually want to ask, "So what? What difference does it make to my anticipations? How does this help me get what I want and/or avoid what I don't want?"
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.
Whose language? What language? If you think all language is a problem, what do you intend to replace it with?
It refers to the stuff that doesn't go away when you stop believing in it.
Note the bold.
English, and all the rest that I know of.
Something better would be nice, but what of it? I am simply saying that language obscures what is going on. You may or may not find that insight useful.
If so, I suggest "permanent" as a clearer word choice.
"Changing your aims" is an action, presumably available for guiding with philosophy.