Part of the sequence: No-Nonsense Metaethics. Also see: A Human's Guide to Words.

If a tree falls in the forest, and no one hears it, does it make a sound?

Albert: "Of course it does. What kind of silly question is that? Every time I've listened to a tree fall, it made a sound, so I'll guess that other trees falling also make sounds. I don't believe the world changes around when I'm not looking."

Barry: "Wait a minute. If no one hears it, how can it be a sound?"

Albert and Barry are not arguing about facts, but about definitions:

...the first person is speaking as if 'sound' means acoustic vibrations in the air; the second person is speaking as if 'sound' means an auditory experience in a brain. If you ask "Are there acoustic vibrations?" or "Are there auditory experiences?", the answer is at once obvious. And so the argument is really about the definition of the word 'sound'.

Of course, Albert and Barry could argue back and forth about which definition best fits their intuitions about the meaning of the word. Albert could offer this argument in favor of using his definition of sound:

My computer's microphone can record a sound without anyone being around to hear it, store it as a file, and it's called a 'sound file'. And what's stored in the file is the pattern of vibrations in air, not the pattern of neural firings in anyone's brain. 'Sound' means a pattern of vibrations.

Barry might retort:

Imagine some aliens on a distant planet. They haven't evolved any organ that translates vibrations into neural signals, but they still hear sounds inside their own head (as an evolutionary byproduct of some other evolved cognitive mechanism). If these creatures seem metaphysically possible to you, then this shows that our concept of 'sound' is not dependent on patterns of vibrations.

If their debate seems silly to you, I have sad news. A large chunk of moral philosophy looks like this. What Albert and Barry are doing is what philosophers call conceptual analysis.1

The trouble with conceptual analysis

I won't argue that everything that has ever been called 'conceptual analysis' is misguided.2 Instead, I'll give examples of common kinds of conceptual analysis that corrupt discussions of morality and other subjects.

The following paragraph explains succinctly what is wrong with much conceptual analysis:

Analysis [had] one of two reputations. On the one hand, there was sterile cataloging of pointless folk wisdom - such as articles analyzing the concept VEHICLE, wondering whether something could be a vehicle without wheels. This seemed like trivial lexicography. On the other hand, there was metaphysically loaded analysis, in which ontological conclusions were established by holding fixed pieces of folk wisdom - such as attempts to refute general relativity by holding fixed allegedly conceptual truths, such as the idea that motion is intrinsic to moving things, or that there is an objective present.3

Consider even the 'naturalistic' kind of conceptual analysis practiced by Timothy Schroeder in Three Faces of Desire. In private correspondence, I tried to clarify Schroeder's project:

As I see it, [your book] seeks the cleanest reduction of the folk psychological term 'desire' to a natural kind, à la the reduction of the folk chemical term 'water' to H2O. To do this, you employ a naturalism-flavored method of conceptual analysis according to which the best theory of desire is one that is logically consistent, fits the empirical facts, and captures how we use the term and our intuitions about its meaning.

Schroeder confirmed this, and it's not hard to see the motivation for his project. We have this concept 'desire', and we might like to know: "Is there anything in the world similar to what we mean by 'desire'?" Science can answer the "is there anything" part, and intuition (supposedly) can answer the "what we mean by" part.

The trouble is that philosophers often take this "what we mean by" question so seriously that thousands of pages of debate concern which definition to use rather than which facts are true and what to anticipate.

In one chapter, Schroeder offers 8 objections4 to a popular conceptual analysis of 'desire' called the 'action-based theory of desire'. Seven of these objections concern our intuitions about the meaning of the word 'desire', including one which asks us to imagine the existence of alien life forms that have desires about the weather but have no dispositions to act to affect the weather. If our intuitions tell us that such creatures are metaphysically possible, goes the argument, then our concept of 'desire' need not be linked to dispositions to act.

Contrast this with a conversation you might have with someone from the Singularity Institute. Within 20 seconds of arguing about the definition of 'desire', someone will say, "Screw it. Taboo 'desire' so we can argue about facts and anticipations, not definitions."5

Disputing definitions

Arguing about definitions is not always misguided. Words can be wrong:

When the philosophers of Plato's Academy claimed that the best definition of a human was a "featherless biped", Diogenes the Cynic is said to have exhibited a plucked chicken and declared "Here is Plato's Man." The Platonists promptly changed their definition to "a featherless biped with broad nails."

Likewise, if I give a lecture on correlations between income and subjective well-being and I conclude by saying, "And that, ladies and gentlemen, is my theory of the atom," then you have some reason to object. Nobody else uses the term 'atom' to mean anything remotely like what I've just discussed. And if I ever use the term 'morality' in a similarly idiosyncratic way, I hope you will argue that my definition of 'morality' is 'wrong' (or unhelpful, or confusing, or something).

Some unfortunate words are used in a wide variety of vague and ambiguous ways.6 Moral terms are among these. As one example, consider some commonly used definitions for 'morally good':

  • that which produces the most pleasure for the most people
  • that which is in accord with the divine will
  • that which adheres to a certain list of rules
  • that which the speaker's intuitions approve of in a state of reflective equilibrium
  • that which the speaker generally approves of
  • that which our culture generally approves of
  • that which our species generally approves of
  • that which we would approve of if we were fully informed and perfectly rational
  • that which adheres to the policies we would vote to enact from behind a veil of ignorance
  • that which does not violate the concept of our personhood
  • that which resists entropy for as long as possible

Often, people can't tell you what they mean by moral terms when you question them. There is little hope of taking a survey to decide what moral terms 'typically mean' or 'really mean'. The problem may be worse for moral terms than for (say) art terms. Moral terms have more powerful connotations than art terms, and are thus a greater attractor for sneaking in connotations. Moral terms are used to persuade. "It's just wrong!" the moralist cries, "I don't care what definition you're using right now. It's just wrong: don't do it."

Moral discourse is rife with motivated cognition. This is part of why, I suspect, people resist dissolving moral debates even while they have no trouble dissolving the 'tree falling in a forest' debate.

Disputing the definitions of moral terms

So much moral philosophy is consumed by debates over definitions that I will skip to an example from someone you might hope would know better: reductionist Frank Jackson7:

...if Tom tells us that what he means by a right action is one in accord with God's will, rightness according to Tom is being in accord with God's will. If Jack tells us that what he means by a right action is maximizing expected value as measured in hedons, then, for Jack, rightness is maximizing expected value...

But if we wish to address the concerns of our fellows when we discuss the matter - and if we don't, we will not have much of an audience - we had better mean what they mean. We had better, that is, identify our subject via the folk theory of rightness, wrongness, goodness, badness, and so on. We need to identify rightness as the property that satisfies, or near enough satisfies, the folk theory of rightness - and likewise for the other moral properties. It is, thus, folk theory that will be our guide in identifying rightness, goodness, and so on.8

The meanings of moral terms, says Jackson, are given by their place in a network of platitudes ('clauses') from folk moral discourse:

The input clauses of folk morality tell us what kinds of situations described in descriptive, non-moral terms warrant what kinds of description in ethical terms: if an act is an intentional killing, then normally it is wrong; pain is bad; 'I cut, you choose' is a fair procedure; and so on.
The internal role clauses of folk morality articulate the interconnections between matters described in ethical, normative language: courageous people are more likely to do what is right than cowardly people; the best option is the right option; rights impose duties of respect; and so on.
The output clauses of folk morality take us from ethical judgements to facts about motivation and thus behaviour: the judgement that an act is right is normally accompanied by at least some desire to perform the act in question; the realization that an act would be dishonest typically dissuades an agent from performing it; properties that make something good are the properties we typically have some kind of pro-attitude towards, and so on.
Moral functionalism, then, is the view that the meanings of the moral terms are given by their place in this network of input, output, and internal clauses that makes up folk morality.9
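
To make the shape of this proposal concrete: Jackson's moral functionalism rests on the Ramsey-Lewis method for defining theoretical terms. Here is a schematic sketch of that method as applied to moral terms - my rendering, not a quotation from Jackson. Write folk morality as one long conjunction T of the platitudes, with the moral terms replaced by variables (r for 'rightness', g for 'goodness', and so on):

  T(r, g, ...) - folk morality, with variables in place of the moral terms
  There exist r, g, ... such that T(r, g, ...) - the Ramsey sentence: the roles are (near enough) occupied
  x is right iff there exist r, g, ... such that T(r, g, ...) and x has r - the functionalist definition of 'right'

On this picture, the definitions debate becomes a debate over which platitudes belong in T and how much weight each carries.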

And thus, Jackson tosses his lot into the definitions debate. Jackson supposes that we can pick out which platitudes of moral discourse matter, and how much they matter, for determining the meaning of moral terms - despite the fact that individual humans, and especially groups of humans, are themselves confused about the meanings of moral terms, and which platitudes of moral discourse should 'matter' in fixing their meaning.

This is a debate about definitions that will never end.

Austere Metaethics vs. Empathic Metaethics

In the next post, we'll dissolve standard moral debates the same way Albert and Barry should have dissolved their debate about sound.

But that is only the first step. It is important not to stop after sweeping away the confusions of mainstream moral philosophy and arriving at mere correct answers. We must stare directly into the heart of the problem and do the impossible.

Consider Alex, who wants to do the 'right' thing. But she doesn't know what 'right' means. Her question is: "How do I do what is right if I don't know exactly what 'right' means?"

The Austere Metaethicist might cross his arms and say:

Tell me what you mean by 'right', and I will tell you what is the right thing to do. If by 'right' you mean X, then Y is the right thing to do. If by 'right' you mean P, then Z is the right thing to do. But if you can't tell me what you mean by 'right', then you have failed to ask a coherent question, and no one can answer an incoherent question.

The Empathic Metaethicist takes up a greater burden. The Empathic Metaethicist says to Alex:

You may not know what you mean by 'right.' You haven't asked a coherent question. But let's not stop there. Here, let me come alongside you and help decode the cognitive algorithms that generated your question in the first place, and then we'll be able to answer your question. Then not only can we tell you what the right thing to do is, but also we can help bring your emotions into alignment with that truth... as you go on to (say) help save the world rather than being filled with pointless existential angst about the universe being made of math.

Austere metaethics is easy. Empathic metaethics is hard. But empathic metaethics is what needs to be done to answer Alex's question, and it's what needs to be done to build a Friendly AI. We'll get there in the next few posts.

Next post: Pluralistic Moral Reductionism

Previous post: What is Metaethics?

Notes

1 Eliezer advises against reading mainstream philosophy because he thinks it will "teach very bad habits of thought that will lead people to be unable to do real work." Conceptual analysis is, I think, exactly that: a very bad habit of thought that renders many people unable to do real work. Also: My thanks to Eliezer for his helpful comments on an early draft of this post.

2 For example: Jackson (1998), p. 28, has a different view of conceptual analysis: "conceptual analysis is the very business of addressing when and whether a story told in one vocabulary is made true by one told in some allegedly more fundamental vocabulary." For an overview of Jackson's kind of conceptual analysis, see here. Also, Alonzo Fyfe reminded me that those who interpret the law must do a kind of conceptual analysis. If a law has been passed declaring that vehicles are not allowed on playgrounds, a judge must figure out whether 'vehicle' includes or excludes rollerskates. More recent papers on conceptual analysis are available at Philpapers. Finally, read Chalmers on verbal disputes.

3 Braddon-Mitchell (2008). A famous example of the first kind lies at the heart of 20th century epistemology: the definition of 'knowledge.' Knowledge had long been defined as 'justified true belief', but then Gettier (1963) presented some hypothetical examples of justified true belief that many of us would intuitively not label as 'knowledge.' Philosophers launched a cottage industry around new definitions of 'knowledge' and new counterexamples to those definitions. Brian Weatherson called this the "analysis of knowledge merry-go-round." Tyrrell McAllister called it the 'Gettier rabbit-hole.'

4 Schroeder (2004), pp. 15-27. Schroeder lists them as 7 objections, but I count his 'trying without desiring' and 'intending without desiring' objections separately.

5 Tabooing one's words is similar to what Chalmers (2009) calls the 'method of elimination'. In an earlier post, Yudkowsky used what Chalmers (2009) calls the 'subscript gambit', except Yudkowsky used underscores instead of subscripts.

6 See also Gallie (1956).

7 Eliezer said that the closest thing to his metaethics from mainstream philosophy is Jackson's 'moral functionalism', but of course moral functionalism is not quite right.

8 Jackson (1998), p. 118.

9 Jackson (1998), pp. 130-131.

References

Braddon-Mitchell (2008). Naturalistic analysis and the a priori. In Braddon-Mitchell & Nola (eds.), Conceptual Analysis and Philosophical Naturalism (pp. 23-43). MIT Press.

Chalmers (2009). Verbal disputes. Unpublished.

Gallie (1956). Essentially contested concepts. Proceedings of the Aristotelian Society, 56: 167-198.

Gettier (1963). Is justified true belief knowledge? Analysis, 23: 121-123.

Jackson (1998). From Metaphysics to Ethics: A Defence of Conceptual Analysis. Oxford University Press.

Schroeder (2004). Three Faces of Desire. Oxford University Press.

Comments

It almost annoys me, but I feel compelled to vote this up. (I know groundbreaking philosophy is not yet your intended purpose but) I didn't learn anything, I remain worried that the sequence is going to get way too ambitious, and I remain confused about where it's ultimately headed. But the presentation is so good -- clear language, straightforward application of LW wisdom, excellent use of hyperlinks, high skimmability, linked references, flattery of my peer group -- that I feel I have to support the algorithm that generated it.

Most of your comment looks as though it could apply just as well to the most upvoted post on LW ever (edit: second-most-upvoted), and that's good enough for me. :)

There are indeed many LW regulars, and especially SI folk, who won't learn anything from several posts in this series. On the other hand, I think that these points haven't been made clear (about morality) anywhere else. I hope that when people (including LWers) start talking about morality with the usual conceptual-analysis assumptions, you can just link them here and dissolve the problem.

Also, it sounds like you agree with everything in this fairly long post. If so, yours is faint criticism indeed. :)

FAWS:
*Second most upvoted post. I was a bit sad that Generalizing From One Example apparently wasn't the top post anymore because I really liked it, and while I also liked Diseased Thinking I just didn't like it quite as much. Nope, not the case, Generalizing From One Example is still at the top. Though I do hope it will eventually be replaced by a post that fully deserves to.
lukeprog:
Oops, thanks for the correction. I had to pull from memory because the 'Top' link doesn't work in my browser (Chrome on Mac). It just lists an apparently random selection of posts.
matt:
Look for the date range ("Links from") in the sidebar - you want "All Time". Yes, we're fixing the placement of this control in the redesign.
lukeprog:
Hey, lookie there!
lukeprog:
This comment is for anyone who is confused about where the 'no-nonsense metaethics' sequence is going. First, I had to write a bunch of prerequisites. More prerequisites are upcoming:

  • Intuitions and Philosophy
  • The Neuroscience of Desire
  • The Neuroscience of Pleasure
  • Inferring Our Desires

(Already posted: Heading Toward: No-Nonsense Metaethics; What is Metaethics?)

Stage One of the sequence intends to solve or dissolve many of the central problems of mainstream metaethics. Stage One includes this post and a few others to come later. This is my solution to "much of metaethics" promised earlier. The "much of" refers to mainstream metaethics, not to Yudkowskian metaethics.

Stage Two of the sequence intends to catch everybody up with the progress on Yudkowskian metaethics that has been made by a few particular brains (mostly at SI) in the last few years but hasn't been written down anywhere yet.

Stage Three of the sequence intends to state the open problems of Yudkowskian metaethics as clearly as possible so that rationalists can make incremental progress on them, à la Gowers' Polymath Project or Hilbert's problems. (Unfortunately, problems in metaethics are not as clearly defined as problems in math.)
[anonymous]:
Same here.

Looking back at your posts in this sequence so far, it seems like it's taken you four posts to say "Philosophers are confused about meta-ethics, often because they spend a lot of time disputing definitions." I guess they've been well-sourced, which is worth something. But it seems like we're still waiting on substantial new insights about metaethics, sadly.

I admit it's not very fun for LW regulars, but a few relatively short and simple posts is probably the bare minimum you can get away with while still potentially appealing to bright philosopher or academic types, who will be way more hesitant than your typical contrarian to dismiss an entire field of philosophy as not even wrong. I think Luke's doing a decent job of making his posts just barely accessible/interesting to a very wide audience.

[anonymous]:

it seems like it's taken you four posts to say "Philosophers are confused about meta-ethics, often because they spend a lot of time disputing definitions."

No, he said quite a lot more. E.g. why philosophers do that, why it is a bad thing, and what to do about it if we don't want to fall into the same trap. This is all necessary groundwork for his final argument.

If the state of metaethics were such that most people would already agree on these fundamentals then you would have a point, but lukeprog's premise is that it's not.

lukeprog:
Seeing as lots of people seemed to benefit even from the 'What is Metaethics' post, I'm not too worried that LW regulars won't learn much from a few of the posts in this series. If you already grok 'Austere Metaethics', then you'll have to wait a few posts for things to get interesting. :)

An interesting phenomenon I've noticed recently is that sometimes words do have short exact definitions that exactly coincide with common usage and intuition. For example, after Gettier scenarios ruined the definition of knowledge as "Justified true belief", philosophers found a new definition:

"A belief in X is knowledge if one would always have that belief whenever X, and never have it whenever not-X".

(where "always" and "never" are defined to be some appropriate significance level)

Now it seems to me that this definition completely nails it. There's not one scenario I can find where this definition doesn't return the correct answer. (EDIT: Wrong! See great-grandchild by Tyrrell McAllister) I now feel very silly for saying things like "'Knowledge' is a fuzzy concept, hard to carve out of thingspace, there's is always going to be some scenario that breaks your definition." It turns out that it had a nice definition all along.

It seems like there is a reason why words tend to have short definitions: the brain can only run short algorithms to determine whether an instance falls into the category or not. All you've got to do to write the definition is to find this algorithm.

Yep. Another case in point of the danger of replying, "Tell me how you define X, and I'll tell you the answer" is Parfit in Reasons and Persons concluding that whether or not an atom-by-atom duplicate constructed from you is "you" depends on how you define "you". Actually it turns out that there is a definite answer and the answer is knowably yes, because everything Parfit reasoned about "indexical identity" is sheer physical nonsense in a world built on configurations and amplitudes instead of Newtonian billiard balls.

PS: Very Tarskian and Bayesian of them, but are you sure they didn't say, "A belief in X is knowledge if one would never have it whenever not-X"?

Oscar_Cunningham:
I'm thinking of Robert Nozick's definition. He states his definition thus:

  1. P is true
  2. S believes that P
  3. If it were the case that (not-P), S would not believe that P
  4. If it were the case that P, S would believe that P

(I failed to remember condition 1, since 2 & 3 => 1 anyway.)
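
In the standard notation for subjunctive conditionals, where "A □→ B" reads "if it were the case that A, it would be the case that B", conditions 3 and 4 are usually formalized as follows (a sketch of the textbook rendering, not a quotation from Nozick):

  3'. not-P □→ not-(S believes that P)
  4'. P □→ (S believes that P)

The parenthetical above (2 & 3 => 1) then follows on the usual semantics, which makes "A □→ B" imply B whenever A is actually true: if P were false, condition 3 says S would not believe P; since S does believe P (condition 2), P must be true.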

I'm thinking of Robert Nozick's definition. He states his definition thus:

  1. P is true
  2. S believes that P
  3. If it were the case that (not-P), S would not believe that P
  4. If it were the case that P, S would believe that P

There is a reason why the Gettier rabbit-hole is so dangerous. You can always cook up an improbable counterexample to any definition.

For example, here is a counterexample to Nozick's definition as you present it. Suppose that I have irrationally decided to believe everything written in a certain book B and to believe nothing not written in B. Unfortunately for me, the book's author, a Mr. X, is a congenital liar. He invented almost every claim in the book out of whole cloth, with no regard for the truth of the matter. There was only one exception. There is one matter on which Mr. X is constitutionally compelled to write and to write truthfully: the color of his mother's socks on the day of his birth. At one point in B, Mr. X writes that his mother was wearing blue socks when she gave birth to him. This claim was scrupulously researched and is true. However, there is nothing in the text of B to indicate that Mr. X treated this claim any differently from all ...

Wham! Okay, I'm reverted to my old position. "Knowledge" is a fuzzy word.

ETA: Or at least a position of uncertainty. I need to research how counterfactuals work.

lukeprog:
Yes. An excellent illustration of 'the Gettier rabbit-hole.'
IlyaShpitser:
There is an entire chapter in Pearl's Causality book devoted to the rabbit-hole of defining what 'actual cause' means. (Note: the definition given there doesn't work, and there is a substantial literature discussing why and proposing fixes). The counterargument to your post is that some seemingly fuzzy concepts actually have perfect intuitive consensus (e.g. almost everyone will classify any example as either concept X or not concept X the same way). This seems to be the case with 'actual cause.' As long as intuitive consensus continues to hold, the argument goes, there is hope of a concise logical description of it.
Tyrrell_McAllister:
Maybe the concept of "infinity" is a sort of success story. People said all sorts of confused and incompatible things about infinity for millennia. Then finally Cantor found a way to work with it sensibly. His approach proved to be robust enough to survive essentially unchanged even after the abandonment of naive set theory. But even that isn't an example of philosophers solving a problem with conceptual analysis in the sense of the OP.
lukeprog:
Thanks for the Causality heads-up. Can you name an example or two?
IlyaShpitser:
Well, as I said, 'actual cause' appears to be one example. The literature is full of little causal stories where most people agree that something is an actual cause of something else in the story -- or not. Concepts which have already been formalized include concepts which are both used colloquially in "everyday conversation" and precisely in physics (e.g. weight/mass). One could argue that 'actual cause' is in some sense not a natural concept, but it's still useful in the sense that formalizing the algorithm humans use to decide 'actual cause' problems can be useful for automating certain kinds of legal reasoning. The Cyc project is a (probably doomed) example of a rabbit-hole project to construct an ontology of common sense. Lenat has been in that rabbit-hole for 27 years now.
Tyrrell_McAllister:
Now, if only someone would give me a hand out of this rabbit-hole before I spend all morning in here ;).

Well, of course Bayesianism is your friend here. Probability theory elegantly supersedes the qualitative concepts of "knowledge", "belief" and "justification" and, together with an understanding of heuristics and biases, nicely dissolves Gettier problems, so that we can safely call "knowledge" any assignment of high probability to a proposition that turns out to be true.

For example, take the original Gettier scenario. Since Jones has 10 coins in his pocket, P(man with 10 coins gets job) is bounded from below by P(Jones gets job). Hence any information that raises P(Jones gets job) necessarily raises P(man with 10 coins gets job) to something even higher, regardless of whether (Jones gets job) turns out to be true.
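
To spell the bound out (the labels are mine, not komponisto's): let A = "Jones gets the job" and B = "someone other than Jones with 10 coins in his pocket gets the job". Smith's belief "the man with 10 coins in his pocket will get the job" is the disjunction (A or B), and for any evidence E,

  P(A or B | E) = P(A | E) + P(B | E) - P(A and B | E) >= P(A | E)

So whatever made Smith confident that Jones would get the job licensed at least as much confidence in the disjunction - and the disjunction is the proposition that turned out to be true.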

The psychological difficulty here is the counterintuitiveness of the rule P(A or B) >= P(A), and is in a sense "dual" to the conjunction fallacy. Just as one has to remember to subtract probability as burdensome details are introduced, one also has to remember to add probability as the reference class is broadened. When Smith learns the information suggesting Jones is the favored candidate, it may not feel like he is l...

Tyrrell_McAllister:
I agree that, with regard to my own knowledge, I should just determine the probability that I assign to a proposition P. Once I conclude that P has a high probability of being true, why should I care whether, in addition, I "know" P in some sense?

Nonetheless, if I had to develop a coherent concept of "knowledge", I don't think that I'd go with "'knowledge' [is] any assignment of high probability to a proposition that turns out to be true." The crucial question is, who is assigning the probability? If it's my assignment, then, as I said, I agree that, for me, the question about knowledge dissolves. (More generally, the question dissolves if the assignment was made according to my prior and my cognitive strategies.)

But Gettier problems are usually about some third person's knowledge. When do you say that they know something? Suppose that, by your lights, they have a hopelessly screwed-up prior — say, an anti-Laplacian prior. So, they assign high probability to all sorts of stupid things for no good reason. Nonetheless, they have enough beliefs so that there are some things to which they assign high probability that turn out to be true. Would you really want to say that they "know" those things that just happen to be true?

That is essentially what was going on in my example with Mr. X's book. There, I'm the third person. I have the stupid prior that says that everything in B is true and everything not in B is false. Now, you know that Mr. X is constitutionally compelled to write truthfully about his mother's socks. So you know that reading B will legitimately entangle my beliefs with reality on that one solitary subject. But I don't know that fact about Mr. X. I just believe everything in B. You know that my cognitive strategy will give me reliable knowledge on this one subject. But, intuitively, my epistemic state seems so screwed-up that you shouldn't say that I know anything, even though I got this one thing right.

----------------------------------------

ETA:
Eugine_Nier:
If you want to set your standard for knowledge this high, I would argue that you're claiming nothing counts as knowledge since no one has any way to tell how good their priors are independently of their priors.
Tyrrell_McAllister:
I'm not sure what you mean by a "standard for knowledge". What standard for knowledge do you think that I have proposed? You're talking about someone trying to determine whether their own beliefs count as knowledge. I already said that the question of "knowledge" dissolves in that case. All that they should care about are the probabilities that they assign to propositions. (I'm not sure whether you agree with me there or not.) But you certainly can evaluate someone else's prior. I was trying to explain why "knowledge" becomes problematic in that situation. Do you disagree?
Oscar_Cunningham:
I think that while what you define carves out a nice lump of thingspace, it fails to capture the intuitive meaning of the word 'knowledge'. If I guess randomly that it will rain tomorrow and turn out to be right, then it doesn't fit intuition at all to say I knew that it would rain. This is why the traditional definition is "justified true belief" and that is what Gettier subverts. You presumably already know all this. The point is that Tyrrell McAllister is trying (to avoid trying) to give a concise summary of the common usage of the word knowledge, rather than to give a definition that is actually useful for doing probability or solving problems.
lukeprog:
Here, let me introduce you to my friend Taboo... ;)
AnlamK:
That's a very interesting thought. I wonder what leads you to it. With the caveat that I have not read all of this thread:

  • Are you basing this on the fact that so far, all attempts at analysis have proven futile? (If so, maybe we need to come up with more robust conditions.)
  • Do you think that the concept of 'knowledge' is inherently vague, similar (but not identical) to the way terms like 'tall' and 'bald' are?
  • Do you suspect that there may be no fact of the matter about what 'knowledge' is, just like there is no fact of the matter about the baldness of the present King of France? (If so, then how do the competent speakers apply the verb 'to know' so well?)

If we could say with confidence that conceptual analysis of knowledge is a futile effort, I think that would be progress. And of course the interesting question would be why. It may just be simply that non-technical, common terms like 'vehicle' and 'knowledge' (and of course others like 'table') can't be conceptually analyzed. Also, experimental philosophy could be relevant to this discussion.
Tyrrell_McAllister:
Let me expand on my comment a little: Thinking about the Gettier problem is dangerous in the same sense in which looking for a direct proof of the Goldbach conjecture is dangerous. These two activities share the following features:

  • When the problem was first posed, it was definitely worth looking for solutions. One could reasonably hope for success. (It would have been pretty nice if someone had found a solution to the Gettier problem within a year of its being posed.)
  • Now that the problem has been worked on for a long time by very smart people, you should assign very low probability to your own efforts succeeding.
  • Working on the problem can be addictive to certain kinds of people, in the sense that they will feel a strong urge to sink far more work into the problem than their probability of success can justify.
  • Despite the low probability of success for any given seeker, it's still good that there are a few people out there pursuing a solution.
  • But the rest of us should spend our time on other things, aside from the occasional recreational jab at the problem, perhaps.
  • Besides, any resolution of the problem will probably result from powerful techniques arising in some unforeseen quarter. A direct frontal assault will probably not solve the problem.

So, when I called the Gettier problem "dangerous", I just meant that, for most people, it doesn't make sense to spend much time on it, because they will almost certainly fail, but some of us (including me) might find it too strong a temptation to resist.

Contemporary English-speakers must be implementing some finite algorithm when they decide whether their intuitions are happy with a claim of the form "Agent X knows Y". If someone wrote down that algorithm, I suppose that you could call it a solution to the Gettier problem. But I expect that the algorithm, as written, would look to us like a description of some inscrutably complex neurological process. It would not look like a piece of 20th-century...
[anonymous]:
Both of your Gettier scenarios appear to confirm Nozick's criteria 3 and 4 when the criteria are understood as criteria for a belief-creation strategy to be considered a knowledge-creation strategy applicable to a context outside of the contrived scenario. Taking your scenarios one by one.

You have described the strategy of believing everything written in a certain book B. This strategy fails to conform to Nozick's criteria 3 and 4 when considered outside of the contrived scenario in which the author is compelled to tell the truth about the socks, and therefore (if we apply the criteria) is not a knowledge creation strategy.

There are actually two strategies described here, and one of them is followed conditional on events occurring in the implementation of the other. The outer strategy is to flip the coin to decide whether to look at the ball. The inner strategy is to look at the ball. The inner strategy conforms to Nozick's criteria 3 and 4, and therefore (if we apply the criteria) is a knowledge creation strategy.

In both cases, the intuitive results you describe appear to conform to Nozick's criteria 3 and 4 understood as described in the first paragraph. Nozick's criteria 3 and 4 (understood as above) appear moreover to play a key role in making sense of our intuitive judgment in both the scenarios. That is, it strikes me as intuitive that the reason we don't count the belief about the socks as knowledge is that it is the fruit of a strategy which, as a general strategy, appears to us to violate criteria 3 and 4 wildly, and only happens to satisfy them in a particular highly contrived context. And similarly, it strikes me as intuitive that we accept the belief about the color as knowledge because we are confident that the method of looking at the ball is a method which strongly satisfies criteria 3 and 4.
Tyrrell_McAllister:
The problem with conversations about definitions is that we want our definitions to work perfectly even in the least convenient possible world. So imagine that, as a third-person observer, you know enough to see that the scenario is not highly contrived — that it is in fact a logical consequence of some relatively simple assumptions about the nature of reality. Suppose that, for you, the whole scenario is in fact highly probable. On second thought, don't imagine that. For that is exactly the train of thought that leads to wasting time on thinking about the Gettier problem ;).
[anonymous]:
A large part of what was highly contrived was your selection of a particular true, honest, well-researched sentence in a book otherwise filled with lies, precisely because it is so unusual. In order to make it not contrived, we must suppose something like, the book has no lies, the book is all truth. Or we might even need to suppose that every sentence in every book is the truth. In such a world, then the contrivedness of the selection of a true sentence is minimized. So let us imagine ourselves into a world in which every sentence in every book is true. And now we imagine someone who selects a book and believes everything in it. In this world, this strategy, generalized (to pick a random book and believe everything in it) becomes a reliable way to generate true belief. In such a world, I think it would be arguable to call such a strategy a genuine knowledge-creation strategy. In any case, it would depart so radically from your scenario (since in your scenario everything in the book other than that one fact is a lie) that it's not at all clear how it would relate to your scenario.
Tyrrell_McAllister:
I'm not sure that I'm seeing your point. Are you saying that

  • One shouldn't waste time on trying to concoct exceptionless definitions — "exceptionless" in the sense that they fit our intuitions in every single conceivable scenario. In particular, we shouldn't worry about "contrived" scenarios. If a definition works in the non-contrived cases, that's good enough.

... or are you saying that

  • Nozick's definition really is exceptionless. In every conceivable scenario, and for every single proposition P, every instance of someone "knowing" that P would conform to every one of Nozick's criteria (and conversely).

... or are you saying something else?
[anonymous]:
Nozick apparently intended his definition to apply to single beliefs. I applied it to belief-creating strategies (or procedures, methods, mechanisms) rather than to individual beliefs. These strategies are to be evaluated in terms of their overall results if applied widely. Then I noticed that your two Gettier scenarios involved strategies which, respectively, violated and conformed to the definition as I applied it. That's all. I am not drawing conclusions (yet).
Jiro:
I'm reminded of the Golden Rule. Since I would like it if everyone would execute "if (I am Jiro) then rob", I should execute that as well. It's actually pretty hard to define what it means for a strategy to be exceptionless, and it may be subject to a grue/bleen paradox.
CuSithBell:
I thought it sounded contrived at first, but then remembered there are tons of people who pick a book and believe everything they read in it, reaching many false conclusions and a few true ones.
nshepperd:
I always thought the "if it were the case" thing was just a way of sweeping the knowledge problem under the rug by restricting counterexamples to "plausible" things that "would happen". It gives the appearance of a definition of knowledge, while simply moving the problem into the "plausibility" box (which you need to use your knowledge to evaluate). I'm not sure it's useful to try to define a binary account of knowledge anyway though. People just don't work like that.
Will_Sawin:
A different objection, following Eliezer's PS, is that: Between me and a red box, there is a wall with a hole. I see the red box through the hole, and therefore know that the box is red. I reason, however, that I might have instead chosen to sit somewhere else, and I would not have been able to see the red box through the hole, and would not believe that the box is red.

Or more formally: If I know P, then I know (P or Q) for all Q, but:

  P => Believes(P)  does not imply  (P v Q) => Believes(P v Q)
Tyrrell_McAllister:
This is a more realistic, and hence better, version of the counterexample that I gave in my ETA to this comment.
Eliezer Yudkowsky:
I'm genuinely surprised. Condition 4 seems blatantly unnecessary and I had thought analytic philosophers (and Nozick in particular) more competent than that. Am I missing something?
Tyrrell_McAllister:
Your hunch is right. Starting on page 179 of Nozick's Philosophical Explanations, he addresses counterexamples like the one that Will Sawin proposed. In response, he gives a modified version of his criteria. As near as I can tell, my first counterexample still breaks it, though.
lukeprog:
Yes. In the next post, I'll be naming some definitions for moral terms that should be thrown out, for example those which rest on false assumptions about reality (e.g. "God exists.")
CuSithBell:
I don't think the brain usually makes this determination by looking at things that are much like definitions.
[anonymous]:
I think this isn't the usual sense of 'knowledge'. It's too definite. Do I know there's a website called less wrong, for example? Not for sure. It might have ceased to exist while I'm typing this - I have no present confirmation. And of course any confirmation only lasts as long as you look at it. Knowledge is that state where one can make predictions about a subject which are better than chance. Of course this definition has its own flaws, doubtless....

Hey Luke,

Thanks again for your work. You are by far the greatest online teacher I've ever come across (though I've never seen you teach face-to-face). You are concise, clear, direct, empathetic, extremely thorough, tactful and accessible. I am in awe of your abilities. You take the fruit that is at the top of the tree and gently place it into my straining arms! Sorry for the exuberant worship but I really want to express my gratitude for your efforts. They definitely aren't wasted on me.

Some thoughts on this and related LW discussions. They come a bit late - apols to you and commentators if they've already been addressed or made in the commentary:

1) Definitions (this is a biggie).

There is a fair bit of confusion on LW, it seems to me, about just what definitions are and what their relevance is to philosophical and other discussion. Here's my understanding - please say if you think I've gone wrong.

If in the course of philosophical discussion, I explicitly define a familiar term, my aim in doing so is to remove the term from debate - I fix...

Amanojack:
You're tacitly defining philosophy as an endeavor that "doesn't involve facts or anticipations," that is, as something not worth doing in the most literal sense. Such "philosophy" would be a field defined to be useless for guiding one's actions. Anything that is useless for guiding my actions is, well, useless.
Peterdjones:
The question of what is worth doing is of course profoundly philosophical. You have just assumed an answer: that what is worth doing is achieving your aims efficiently and what is not worth doing is thinking about whether you have good aims, or which different aims you should have. (And anything that influences your goals will most certainly influence your expected experiences).
Amanojack:
We've been over this: either "good aims" and "aims you should have" imply some kind of objective value judgment, which is incoherent, or they merely imply ways to achieve my final aims more efficiently, and we are back to my claim above as that is included under the umbrella of "guiding my actions."
BobTheBob:
I think Peterdjones's answer hits it on the head. I understand you've thrashed-out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify. Really I meant to be throwing the ball back to lukeprog to give us an idea of what the 'arguing about facts and anticipations' alternative is, if not just philosophy pretending not to be. I could have been more clear about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post 'anticipations' would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.
Amanojack:
As far as objective value, I simply don't understand what anyone means by the term. And I think lukeprog's point could be summed up as, "Trying to figure out how each discussant is defining their terms is not really 'doing philosophy'; it's just the groundwork necessary for people not to talk past each other." As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can't figure out what anticipations X entails, I will just respond, "So what?" To unite the two themes: The ultimate definition would tell me why to care.
ArisKatsaris:
In the space of all possible meta-ethics, some meta-ethics are cooperative, and other meta-ethics are not so. This means that if you can choose which metaethics to spread to society, you stand a better chance at your own goals, if you spread cooperative metaethics. And cooperative metaethics is what we call "morality", by and large. It's "Do unto others...", but abstracted a bit, so that we really mean "Use the reasoning to determine what to do unto others, that you would rather they used when deciding how to do unto you."

----------------------------------------

Omega puts you in a room with a big red button. "Press this button and you get ten dollars but another person will be poisoned to slowly die. If you don't press it I punch you on the nose and you get no money. They have a similar button which they can use to kill you and get 10 dollars. You can't communicate with them. In fact they think they're the only person being given the option of a button, so this problem isn't exactly like Prisoner's dilemma. They don't even know you exist or that their own life is at stake."

"But here's the offer I'm making just to you, not them. I can imprint you both with the decision theory of your choice, Amanojack; of course if you identify yourself in your decision theory, they'll be identifying themself.

"Careful though: This is a one time offer, and then I may put both of you to further different tests. So choose the decision theory that you want both of you to have, and make it abstract enough to help you survive, regardless of specific circumstances."

----------------------------------------

Given the above scenario, you'll end up wanting people to choose protecting the life of strangers more than picking 10 dollars.
Amanojack:
I would indeed prefer it if other people had certain moral sentiments. I don't think I ever suggested otherwise.
ArisKatsaris:
Not quite my point. I'm not talking about what your preferences would be. That would be subjective, personal. I'm talking about what everyone's meta-ethical preferences would be, if self-consistent, and abstracted enough. My argument is essentially that objective morality can be considered the position in meta-ethical-space which if occupied by all agents would lead to the maximization of utility. That makes it objectively (because it refers to all the agents, not some of them, or one of them) different from other points in meta-ethical-space, and so it can be considered to lead to an objectively better morality.
Amanojack:
Then why not just call it "universal morality"?
ArisKatsaris:
It's called that too. Are you just objecting as to what we are calling it?
Amanojack:
Yeah, because calling it that makes it pretty hard to understand. If you just mean Collective Greatest Happiness Utilitarianism, then that would be a good name. Objective morality can mean way too many different things. This way at least you're saying in what sense it's supposed to be objective. As for this collectivism, though, I don't go for it. There is no way to know another's utility function, no way to compare utility functions among people, etc. other than subjectively. And who's going to be the person or group that decides? SIAI? I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas. But that's a debate for another day.
ArisKatsaris:
I'm getting a bad vibe here, and no longer feel we're having the same conversation. "Person or group that decides"? Who said anything about anyone deciding anything? And my point was that perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody "decides", or everyone does. And if they don't reach the same decision, then there's no single objective morality -- but even so perhaps there's a limited set of coherent metaethical positions, like two or three of them.

I think my post was inspired more by TDT solutions to Prisoner's dilemma and Newcomb's box, a decision theory that takes into account the copies/simulations of its own self, or other problems that involve humans getting copied and needing to make a decision in blind coordination with their copies. I imagined systems that are not wholly copied, but rather share just the module that determines the meta-ethical constraints, and tried to figure out in which directions such systems would try to modify themselves, in the knowledge that other such systems would similarly modify themselves.
Amanojack:
You're right, I think I'm confused about what you were talking about, or I inferred too much. I'm not really following at this point either. One thing, though, is that you're using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God's will, as a way to get along with others, etc. That'll tend to cause some confusion. A good heuristic is, "Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it)."
ArisKatsaris:
I'm not. An ethic may say:

  • I should support same-sex marriage. (SSM-YES)

or perhaps:

  • I should oppose same-sex marriage. (SSM-NO)

The reason for this position is the meta-ethic, e.g.:

  • Because I should act to increase average utility. (UTIL-AVERAGE)
  • Because I should act to increase total utility. (UTIL-TOTAL)
  • Because I should act to increase the total amount of freedom. (FREEDOM-GOOD)
  • Because I should act to increase average societal happiness. (SOCIETAL-HAPPYGOOD-AVERAGE)
  • Because I should obey the will of our voters. (DEMOCRACY-GOOD)
  • Because I should do what God commands. (OBEY-GOD)

----------------------------------------

But some metaethical positions are invalid because of false assumptions (e.g. God's existence). Other positions may not be abstract enough that they could possibly become universal or apply to all situations. Some combinations of ethics and metaethics may be the result of other factual or reasoning mistakes (e.g. someone thinks SSM will harm society, but it ends up helping it, even by the person's own measuring).

So, NO, I don't speak necessarily about Collective Greatest Happiness Utilitarianism. I'm NOT talking about a specific metaethic, not even necessarily a consequentialist metaethic (let alone a "Greatest Happiness Utilitarianism"). I'm speaking about the hypothetical point in metaethical space that everyone would hypothetically prefer everyone to have - an Attractor of metaethical positions.
Peterdjones:
That's very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it's also been argued that introspection (if that is what you mean by "subjectively") is not a reliable guide to motivation.
Amanojack:
This is the whole demonstrated preference thing. I don't buy it myself, but that's a debate for another time. What I mean by subjectively is that I will value one person's life more than another person's life, or I could think that I want that $1,000,000 more than a rich person wants it, but that's just all in my head. To compare utility functions and work from demonstrated preference usually - not always - is a precursor to some kind of authoritarian scheme. I can't say there is anything like that coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.
Peterdjones:
Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well, because the gains from co-operation are remote, and there are many realistic instances where selfish action produces immediate rewards (cf. the Prudent Predator objection to Rand's egoistic ethics). OTOH, since many people are selfish, they are made to care by having legal and social sanctions against excessively selfish behaviour.
ArisKatsaris:
I wasn't talking about altruistic ethics, which can lead someone to sacrifice their life to prevent someone else getting a bruise; and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics - which overlaps with but doesn't equal altruism, same as it overlaps with but doesn't equal selfishness. The difference between morality and immorality is that morality can at its most abstract possible level be cooperative, and immorality can't. This by itself isn't a reason that can force someone to care -- you can't make a rock care about anything, but that's not a problem with your argument. But it's something that leads to different expectations about the world, namely what Amanojack was asking for. In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world whose beliefs don't approach it, I expect more war and other devastation.
Peterdjones:
Although it usually doesn't. I think that your version of altruism is a straw man, and that what most people mean by altruism isn't very different from co-operation. Or, as I call it, universalisability. That argument doesn't have to be made at all. Morality can stand as a refutation of the claim that anticipation of experience is of ultimate importance. And it can be made differently: if you rejig your values, you can expect to anticipate different experiences -- it can be a self-fulfilling prophecy and not merely passive anticipation. There is an argument from self interest, but it is tertiary to the two arguments I mentioned above.
BobTheBob:
Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist: -though the apparent tension in being a solipsist who argues gets to the root of the issue. For what it may be worth:

I'm assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values - you can't fit them in, hence no way to understand them.

From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific pt of view) is not 'trying' to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, punkt. Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (eg, Paul Churchland). I disagree.

A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions - has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc.. Values can't be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.

Thought of from this point of view, all values are in some sense objective - ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right). Presently you are disagreeing with me about values. To me this says you think the...
Amanojack:
Solipsism is an ontological stance: in short, "there is nothing out there but my own mind." I am saying something slightly different: "To speak of there being something/nothing out there is meaningless to me unless I can see why to care." Then again, I'd say this is tautological/obvious in that "meaning" just is "why it matters to me."

My "position" (really a meta-position about philosophical positions) is just that language obscures what is going on. It may take a while to make this clear, but if we continue I'm sure it will be. I'm not a naturalist. I'm not skeptical of "objective" because of such reasons; I am skeptical of it merely because I don't know what the word refers to (unless it means something like "in accordance with consensus").

In the end, I engage in intellectual discourse in order to win, be happier, get what I want, get pleasure, maximize my utility, or whatever you'll call it (I mean them all synonymously). If after engaging in such discourse I am not able to do that, I will eventually want to ask, "So what? What difference does it make to my anticipations? How does this help me get what I want and/or avoid what I don't want?"
-2Peterdjones13y
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous. Whose language? What language? If you think all language is a problem, what do you intend to replace it with? It refers to the stuff that doesn't go away when you stop believing in it.
0Amanojack13y
Note the bold. English, and all the rest that I know of. Something better would be nice, but what of it? I am simply saying that language obscures what is going on. You may or may not find that insight useful. If so, I suggest "permanent" as a clearer word choice.
-1Peterdjones13y
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf Dennett's Intentional Stance. Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel. To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk. For some value of "incoherent". Personally, I find it useful to strike out the word and replace it with something more precise, such as "semantically meaningless", "contradictory", "self-undermining" etc.
1nshepperd13y
I take the position that while we may well have evolved with different values, they wouldn't be morality. "Morality" is subjunctively objective. Nothing to do with natural facts, except insofar as they give us clues about what values we in fact did evolve with.
-1Peterdjones13y
How do you know that the values we have evolved with are moral? (The claim that natural facts are relevant to moral reasoning is different to the claim that naturally-evolved behavioural instincts are ipso facto moral)
0nshepperd13y
I'm not sure what you want to know. I feel motivated to be moral, and the things that motivate thinking machines are what I call "values". Hence, our values are moral. But of course naturally-evolved values are not moral simply by virtue of being values. Morality isn't about values, it's about life and death and happiness and sadness and many other things beside.
0BobTheBob13y
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can't derive an ought from an is, and that this is what's at stake here. Since you can't make sense of a person as rational if it's not the case there's anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality. Now, if we're talking about the social sciences, that's another matter. There is a discontinuity between these and the purely natural sciences. I read Dennett many years ago, and thought something like this divide is what his different stances are about, but I'd be open to hearing a different view. I didn't say this - just that from a purely scientific point of view, morality is invisible. From an engaged, subjective point of view, where morality is visible, natural facts are relevant. Here's another stab at it: natural science can in principle tell us everything there is to know about a person's inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, 'I ought to go to class', in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?). To get reasons - not to mention linguistic meaning and any intentional states - you need a subjective (ie, non-scientific) point of view. The two views are incommensurable, but neither is dispensable - people need reasons.
-2Peterdjones13y
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can't derive an ought from an is, and that this is what's at stake here. But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking "rational". Generally speaking, where you have rules, you have coulds and shoulds and couldn'ts and shouldn'ts. I have been trying to press the point that unpacking morality leads to a similar analytical truth: "a moral agent ought to adopt universalisable goals." "Oughts" in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else. I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational). I don't see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
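To make the "rational oughts" reading concrete, here is a minimal sketch (my illustration, not part of the comment): a toy agent whose "arbitrary set of goals" is encoded as a utility function, and whose rational "should" is just "pick the utility-maximising action". The action names and numbers are invented for the example.

```python
# Minimal sketch (illustrative only): "a rational agent ought to
# maximise its utility function" cashed out as a toy decision rule.
# The actions and utilities below are invented stand-ins.

def choose(actions, utility):
    """The analytic 'should': return the utility-maximising action."""
    return max(actions, key=utility)

# An arbitrary set of goals, encoded as a utility function.
utilities = {"mow the lawn": 2.0, "write blog responses": 3.5}

best = choose(list(utilities), utilities.get)
print(best)  # -> 'write blog responses'
```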
0BobTheBob13y
I expressed myself badly. I agree entirely with this. Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip. And I want to persuade LWers 1) that facts about her utility function aren't naturalistic facts in the way that facts about her cholesterol level or about neural activity in different parts of her cortex are, and 2) that this is ok - these are still respectable facts, notwithstanding. But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system's goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
-2Peterdjones13y
And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour. You seem to be in need of a narrow, stipulative definition of naturalistic. You introduced the word "basic" there. It might be the case that goals disappear on a very fine-grained atomistic view of things (along with rules and structures and various other things). But that would mean that goals aren't basic physical facts. Naturalism tends to be defined more epistemically than physicalism, so the inferrability of UFs (or goals or intentions) from coarse-grained physical behaviour is a good basis for supposing them to be natural by that usage.
0BobTheBob13y
But this is false, surely. I take it that a fact about X's UF might be something such as 'X prefers apples to pears'. First, notice that X may also prefer his/her philosophy TA to his/her chemistry TA. X has different designs on the TA than on the apple. So, properly stated, preferences are orderings of desires, the objects of which are states of affairs rather than simple things (X desires that X eat an apple more than that X eat a pear). Second, to impute desires such as these requires also imputing beliefs (you observe the apple-gathering behaviour - naturalistically unproblematic - but you also need to impute to X the belief that the things gathered are apples. X might be picking the apples thinking they are pears). There's any number of ways to attribute beliefs and desires in a manner consistent with the behaviour. No collection of merely naturalistic facts will constrain these. There have been lots of theories advanced which try, but the consensus, I think, is that there is no easy naturalistic solution.
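BobTheBob's underdetermination point can be made concrete with a small sketch (mine, not his): the same observed behaviour is rationalised equally well by more than one belief/desire pair, so behaviour alone does not fix the attribution. The fruits and beliefs are invented for the example.

```python
# Illustrative sketch of the underdetermination point above (my own,
# not from the comment): two belief/desire attributions that predict
# the very same observed behaviour.

observed = "X picks the round fruit from the tree"

attributions = [
    {"belief": "the round fruit are apples", "desire": "that X eat an apple"},
    {"belief": "the round fruit are pears",  "desire": "that X eat a pear"},
]

for a in attributions:
    # Either pair rationalises the same action equally well, so the
    # behavioural data alone cannot decide between them.
    print(f"belief={a['belief']!r}, desire={a['desire']!r} -> predicts: {observed}")
```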
0Peterdjones13y
Oh, that's the philosopher's definition of naturalistic. OTOH, you could just adopt the scientist's version and scan their brain.
0BobTheBob13y
Well, alright, please tell me: what is a Utility Function, that it can be inferred from a brain scan? How's this supposed to work, in broad terms?
-1Peterdjones13y
What they generally mean is "not subjective". You might object that non-subjective value is contradictory, but that is not the same as objecting that it is incomprehensible, since one has to understand the meanings of individual terms to see a contradiction. As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.
0Amanojack13y
I'm not saying non-subjective value is contradictory, just that I don't know what it could mean. To me "value" is a verb, and the noun form is just a nominalization of the verb, like the noun "taste" is a nominalization of the verb "taste." Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music, etc. I didn't understand what she meant either. But before I would even want to revise my aims and goals, I'd have to anticipate something different than I do now. What does "some of your beliefs may be wrong by objective standards" make me anticipate that would motivate me to change my goals? (This is the same as the question in the other comment: What penalty do I suffer by having the "wrong" moral sentiments?)
-1Peterdjones13y
I don't see the force to that argument. "Believe" is a verb and "belief" is a nominalisation. But beliefs can be objectively right or wrong -- if they belong to the appropriate subject area. It is possible for aesthetics (and various other things) to be un-objectifiable whilst morality (and various other things) is objectifiable. Why? You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It's not an ultimate. But morality is an ultimate because there is no more important value than a moral value. If there is no personal gain from morality, that doesn't mean you shouldn't be moral. You should be moral by the definition of "moral" and "should". It's an analytical truth. It is for selfishness to justify itself in the face of morality, not vice versa.
1Amanojack13y
First of all, I should disclose that I don't ultimately find any kind of objectivism coherent, including "objective reality." It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably. In the end, nothing else matters to me (nor, I expect, anyone else - if they understand what I'm getting at here). So you disagree with EY about making beliefs pay rent? Like, maybe some beliefs don't pay rent but are still important? I just don't see how that makes sense. This seems circular. What if I say, "So what?"
-2Peterdjones13y
How do you know that? If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes. You say that like that's a bad thing. I said it was analytical, and analytical truths would be expected to sound tautologous or circular. So it's still true. Not caring is not refutation.
1Amanojack13y
Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent. Well, what use is your belief in "objective value"? Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like "true" and "refute." I would substitute "useful" and "show people why it is not useful," respectively.
-2Peterdjones13y
I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably" How do you know that? What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn't: Morality is, by definition. Then I have a bridge to sell you. And would it be true that it is non-useful? Since to assert P is to assert "P is true", truth is a rather hard thing to eliminate. One would have to adopt the silence of Diogenes.
0Amanojack13y
That's what I was responding to. Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know. You just eliminated it: If to assert P is to assert "P is true," then to assert "P is true" is to assert P. We could go back and forth like this for hours. But you still haven't defined objective value. Dictionary says, "Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased." How can a value be objective? EDIT: Especially since a value is a personal feeling. If you are defining "value" differently, how?
-1Peterdjones13y
It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences. I have given that counterargument several times. I think moral values are ultimate because I can't think of a valid argument of the form "I should do X because Y". Please give an example of a pan-galactic value that can be substituted for Y. Yeah, but it still comes back to truth. If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to "no, that's not true". By being (relatively) uninfluenced by personal feelings, interpretations, or prejudice; based on facts; unbiased. You haven't remotely established that as an identity. It is true that some people some of the time arrive at values through feelings. Others arrive at them (or revise them) through facts and thinking. "Values can be defined as broad preferences concerning appropriate courses of action or outcomes"
0Amanojack13y
I missed this: I'll just decide not to follow the advice, or I'll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don't need to use the word "true" or any equivalent to do that. I can just say it didn't work.
2NancyLebovitz13y
People have been known to follow really bad advice, sometimes to their detriment and suffering a lot of pain along the way. Some people have followed excessively stringent diets to the point of malnutrition or death. (This isn't intended as a swipe at CR-- people have been known to go a lot farther than that.) People have attempted (for years or decades) to shut down their sexual feelings because they think their God wants it.
-1Peterdjones13y
Any word can be eliminated in favour of a definition or paraphrase. Not coming out with an equivalent -- showing that you have dispensed with the concept -- is harder. Why didn't it work? You're going to have to paraphrase "Because it wasn't true" or refuse to answer.
0Amanojack13y
The concept of truth is for utility, not utility for truth. To get them backwards is to merely be confused by the words themselves. It's impossible to show you've dispensed with any concept, except to show that it isn't useful for what you're doing. That is what I've done. I'm non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion, though they all are or were at one time useful approximate means of expressing things in English.
-2Peterdjones13y
Truth is useful for whatever you want to do with it. If people can collect stamps for the sake of collecting stamps, they can collect truths for the sake of collecting truths. Sounding like religion would not render something incomprehensible... but it could easily provoke an "I don't like it" reaction, which is then dignified with the label "incoherent" or whatever.
0Amanojack13y
I agree, if you mean things like, "If I now believe that she is really a he, I don't want to take 'her' home anymore." Neither can I. I just don't draw the same conclusion. There's a difference between disagreeing with something and not knowing what it means, and I do seriously not know what you mean. I'm not sure why you would think it is veiled disagreement, seeing as lukeprog's whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of "incoherent to me" or someone else, so it's not some kind of damning word. It simply means the message is not getting through to me. That could be your fault, my fault, or English's fault, and I don't really care which it is, but it would be preferable for something to actually make it across the inferential gap.) EDIT: Oops, posted too soon. So basically you are saying that preferences can change because of facts/beliefs, right? And I agree with that. To give a more mundane example, if I learn Safeway doesn't carry egg nog and I want egg nog, I may no longer want to go to Safeway. If I learn that egg nog is bad for my health, I may no longer want egg nog. If I believe health doesn't matter because the Singularity is near, I may want egg nog again. If I believe that egg nog is actually made of human brains, I may not want it anymore. At bottom, I act to get enjoyment and/or avoid pain, that is, to win. What actions I believe will bring me enjoyment will indeed vary depending on my beliefs. But it is always ultimately that winning/happiness/enjoyment/fun/deliciousness/pleasure that I am after, and no change in belief can change that. I could take short-term pain for long-term gain, but that would be because I feel better doing that than not. But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn't make my want for deliciousness "uninfluenced by personal feelings." In summary, value/preferences can e
-2Peterdjones13y
"incoherence" means several things. Some of them, such a self-contradiction are as objective as anything. You seem to find morality meaningless in some personal sense. Looking at dictionaries doesn't seem to work for you. Dictionaries tend to define the moral as the good.It is hard to believe that anyone can grow up not hearing the word "good" used a lot, unless they were raised by wolves. So that's why I see complaints of incoherence as being disguised disagreement. If you say so. That doesn't make morality false, meaningless or subjective. It makes you an amoral hedonist. Perhaps not completley, but that sill leaves some things as relatively more objective than others. Then your categories aren't exhaustive, because preferences can also be defined to include universalisable values alongside personal whims. You may be making the classic of error of taking "subjective" to mean "believed by a subject"
0Amanojack13y
The problem isn't that I don't know what it means. The problem is that it means many different things and I don't know which of those you mean by it. I have moral sentiments (empathy, sense of justice, indignation, etc.), so I'm not amoral. And I am not particularly high time-preference, so I'm not a hedonist. If you mean preferences that everyone else shares, sure, but there's no stipulation in my definitions that other people can't share the preferences. In fact, I said, "(though they may be universal or semi-universal)." It'd be a "classic error" to assume you meant one definition of subjective rather than another, when you haven't supplied one yourself? This is about the eighth time in this discussion that I've thought that I can't imagine what you think language even is. I doubt we have any disagreement, to be honest. I think we only view language radically differently. (You could say we have a disagreement about language.)
-1Peterdjones13y
What "moral" means or what "good" means/? No, that isn't the problem. It has one basic meaning, but there are a lot of different theories about it. Elsewhere you say that utilitarianism renders objective morality meaningful. A theory of X cannot render X meaningful, but it can render X plausible. But you theorise that you only act on them(and that nobody ever acts but) toincrea se your pleasure. I don't see the point in stipulating that preferences can't be shared. People who believe they can be just have to find another word. Nothing is proven. I've quoted the dictionary derfinition, and that's what I mean. "existing in the mind; belonging to the thinking subject rather than to the object of thought ( opposed to objective). 2. pertaining to or characteristic of an individual; personal; individual: a subjective evaluation. 3. placing excessive emphasis on one's own moods, attitudes, opinions, etc.; unduly egocentric" I think language is public, I think (genuine) disagreements about meaning can be resolved with dictionaries, and I think you shouldn't assume someone is using idiosyncratic definitions unless they give you good reason.
-2Peterdjones13y
Objective truth is what you should believe even if you don't. Objective values are the values you should have even if you have different values. Where the groundwork is about 90% of the job... That has been answered several times. You are assuming that instrumental value is ultimate value, and it isn't. Imagine you are arguing with someone who doesn't "get" rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don't, you can't. Even good arguments will fail to work on some people. You should care about morality because it is morality. Morality defines (the ultimate kind of) "should". "What I should do" =def "what is moral". Not everyone does get that, which is why "don't care" is "made to care" by various sanctions.
0Amanojack13y
"Should" for what purpose? I certainly agree there. The question is whether it is more useful to assign the label "philosophy" to groundwork+theory or just the theory. A third possibility is that doing enough groundwork will make it clear to all discussants that there are no (or almost no) actually theories in what is now called "philosophy," only groundwork, meaning we would all be in agreement and there is nothing to argue except definitions. I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims. It seems you're saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I'm wrong). This is what makes me curious about why you think I would care. The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it's all semantic confusion, and because I don't want to sound dismissive or obstinate in continuing to say, "So what?"
-2Peterdjones13y
Believing in truth is what rational people do. Which is good because...? Correct. I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn't care about truth at all, the process probably isn't going to work. I think that horse has bolted. Inasmuch as you don't care about truth per se, you have advertised yourself as being irrational.
1Amanojack13y
Winning is what rational people do. We can go back and forth like this. It benefits me, because I enjoy helping people. See, I can say, "So what?" in response to "You're wrong." Then you say, "You're still wrong." And I walk away feeling none the worse. Usually when someone claims I am wrong I take it seriously, but only because I know how it could ever, possibly, potentially ever affect me negatively. In this case you are saying it is different, and I can safely walk away with no terror ever to befall me for "being wrong." Sure, people usually argue whether something is "true or false" because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc. As this is almost always the case, it is customarily unusual for someone to say they don't care about something being true or false. But in a situation where, ex hypothesi, the thing being discussed - very unusually - is claimed to not have any effect on such things, "true" and "false" become pointless labels. I only ever use such labels because they can help me enjoy life more. When they can't, I will happily discard them.
-2Peterdjones13y
So you say. I can think of two arguments against that: people acquire true beliefs that aren't immediately useful, and untrue beliefs can be pleasing.
0Amanojack13y
I never said they had to be "immediately useful" (hardly anything ever is). Untrue beliefs might be pleasing, but when people are arguing truth and falsehood it is not in order to prove that the beliefs they hold are untrue so that they can enjoy believing them, so it's not an objection either.
0Peterdjones13y
You still don't have a good argument to the effect that no one cares about truth per se.
0Amanojack13y
A lot of people care about truth, even when (I suspect) they diminish their enjoyment needlessly by doing so, so no argument there. In the parent I'm just continuing to try to explain why my stance might sound weird. My point from farther above, though, is just that I don't/wouldn't care about "truth" in those rare and odd cases where it is already part of the premises that truth or falsehood will not affect me in any way.
0Will_Sawin13y
I think 'usually" is enough qualification, especially considering that he says 'makes a difference' and not 'completely determines"
-2Peterdjones13y
Hmm. It sounds to me like a kind of methodological twist on logical positivism... just don't bother with things that don't have empirical consequences.
0[anonymous]13y
I think Peterdjones's answer hits it on the head. I understand you've thrashed out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify. Really I meant to be throwing the ball back to lukeprog to give us an idea of what the 'arguing about facts and anticipations' alternative is, if not just philosophy pretending not to be. I could have been more clear about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post [anticipations](http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/) would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.
-2Peterdjones13y
You say that objective values are incoherent, but you offer no argument for it. Presenting philosophical claims without justification isn't something different to philosophy, or something better. It isn't good rationality either. Rationality is as rationality does.
0Amanojack13y
By incoherent I simply mean "I don't know how to interpret the words." So far no one seems to want to help me do that, so I can only await a coherent definition of objective ethics and related terms. Then possibly an argument could start. (But this is all like deja vu from the recent metaethics threads.)
-2Peterdjones13y
Can you interpret the words "morality is subjective"? How about the words "morality is not subjective"?
0Amanojack13y
"Morality is subjective": Each person has their own moral sentiments. "Morality is not subjective": Each person does not have their own moral sentiments. Or there is something more than each person's moral sentiments that is worth calling "moral." <--- But I ask, what is that "something more"?
-2Peterdjones13y
OK. That is not what "subjective" means. What it means is that if something is subjective, an opinion is guaranteed to be correct or the last word on the matter just because it is the person's opinion. And "objective" therefore means that it is possible for someone to be wrong in their opinion.
0Amanojack13y
I don't claim moral sentiments are correct, but simply that a person's moral sentiment is their moral sentiment. They feel some emotions, and that's all I know. You are seeming to say there is some way those emotions can be correct or incorrect, but in what sense? Or probably a clearer way to ask the question is, "What disadvantage can I anticipate if my emotions are incorrect?"
-1Peterdjones13y
An emotion, such as a feeling of elation or disgust, is not correct or incorrect per se; but an emotion per se is no basis for a moral sentiment, because moral sentiment has to be about something. You could think gay marriage is wrong because homosexuality disgusts you, or you could feel serial-killing is good because it elates you, but that doesn't mean the conclusions you are coming to are right. It may be a cast iron fact that you have those particular sentiments, but that says nothing about the correctness of their content, any more than any opinion you entertain is automatically correct. ETA: The disadvantages you can expect if your emotions are incorrect include being in the wrong whilst feeling you are in the right. Much as if you are entertaining incorrect opinions.
0Amanojack13y
What if I don't care about being wrong (if that's really the only consequence I experience)? What if I just want to win?
-2Peterdjones13y
Then you are, or are likely to be, morally in the wrong. That is of course possible. You can choose to do wrong. But it doesn't constitute any kind of argument. Someone can elect to ignore the roundness of the world for some perverse reason, but that doesn't make "the world is round" false or meaningless or subjective.
1Amanojack13y
Indeed it is not an argument. Yet I can still say, "So what?" I am not going to worry about something that has no effect on my happiness. If there is some way it would have an effect, then I'd care about it. The difference is, believing "The world is round" affects whether I win or not, whereas believing "I'm morally in the wrong" does not.
0[anonymous]13y
That is apparently true in your hypothetical, but it's not true in the real world. Just as the roundness of the world has consequences, the wrongness of an action has consequences. For example, if you kill someone, then your fate is going to depend (probabilistically) on whether you were in the right (e.g. he attacked and you were defending your life) or in the wrong (e.g. you murdered him when he caught you burgling his house). The more in the right you were, then, ceteris paribus, the better your chances are.
0Amanojack13y
You're interpreting "I'm morally in the wrong" to mean something like, "Other people will react badly to my actions," in which case I fully agree with you that it would affect my winning. Peterdjones apparently does not mean it that way, though.
1[anonymous]13y
Actually I am not. I am interpreting "I'm morally wrong" to mean something like, "I made an error of arithmetic in an area where other people depend on me." An error of arithmetic is an error of arithmetic regardless of whether any other people catch it, and regardless of whether any other people react badly to it. It is not, however, causally disconnected from their reaction, because, even though an error of arithmetic is what it is regardless of people's reaction to it, nevertheless people will probably react badly to it if you've made it in an area where other people depend on you. For example, if you made an error of arithmetic in taking a test, it is probably the case that the test-grader did not make the same error of arithmetic and so it is probably the case that he will react badly to your error. Nevertheless, your error of arithmetic is an error and is not merely getting-a-different-answer-from-the-grader. Even in the improbable case where you luck out and the test grader makes exactly the same error as you and so you get full marks, nevertheless, you did still make that error. Even if everyone except you wakes up tomorrow and believes that 3+4=6, whereas you still remember that 3+4=7, nevertheless in many contexts you had better not switch to what the majority believe. For example, if you are designing something that will stand up, like a building or a bridge, you had better get your math right, you had better correctly add 3+4=7 in the course of designing the edifice if that sum is ever called on in calculating whether the structure will stand up. If humanity divides into two factions, one faction of which believes that 3+4=6 and the other of which believes that 3+4=7, then the latter faction, the one that adds correctly, will in all likelihood over time prevail on account of being right. This is true even if the latter group starts out in the minority. Just imagine what sort of tricks you could pull on people who believe that 3+4=6. Because of the truth
0Alicorn13y
Nothing's jumping out at me that would seriously impact a group's effectiveness from day to day. I rarely find myself needing to add three and four in particular, and even more rarely in high-stakes situations. What did you have in mind?
3[anonymous]13y
Suppose you think that 3+4=6. I offer you the following deal: give me $3 today and $4 tomorrow, and I will give you a 50 cent profit the day after tomorrow, by returning to you $6.50. You can take as much advantage of this as you want. In fact, if you like, you can give me $3 this second, $4 in one second, and in the following second I will give you back all your money plus 50 cents profit - that is, I will give you $6.50 in two seconds. Since you think that 3+4=6, you will jump at this amazing deal.
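The arithmetic of the pump can be spelled out in a short sketch (my illustration; the dollar amounts are the ones from the comment): to someone who computes 3 + 4 = 6, the $6.50 return looks like a 50-cent profit, while in fact each round costs them 50 cents.

```python
# Minimal sketch of the money pump described above (illustrative
# only). The mark believes 3 + 4 = 6, so receiving $6.50 back looks
# like a 50-cent gain; in fact the mark paid in $7.00.

def run_pump(rounds):
    profit_to_pumper = 0.0
    for _ in range(rounds):
        paid_in = 3.00 + 4.00   # the mark hands over $3, then $4
        paid_out = 6.50         # the pumper returns $6.50
        profit_to_pumper += paid_in - paid_out
    return profit_to_pumper

# Each round quietly transfers 50 cents from the mark to the pumper.
print(run_pump(100))  # -> 50.0 (dollars after 100 rounds)
```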
2Alicorn13y
I find that most people who believe absurd things still have functioning filters for "something is fishy about this". I talked to a person who believed that the world was going to end in 2012, and I offered to give them a dollar right then in exchange for a hundred after the world didn't end, but of course they didn't take it: something was fishy about that. Also, dollars are divisible: someone who believes that 3+4=6 may not believe that 300+400=600.
1[anonymous]13y
If he isn't willing to take your trade, then his alleged belief that the world will end in 2012 is weak at best. In contrast, if you offer to give me $6.50 in exchange for $3 plus $3, then I will take your offer, because I really do believe that 3+3=6. On the matter of divisibility, you are essentially proposing that someone with faulty arithmetic can effectively repair the gap by translating arithmetic problems away from the gap (e.g. by realizing that 3 dollars is 300 pennies and doing arithmetic on the pennies). But in order for them to do this consistently they need to know where the gap is, and if they know that, then it's not a genuine gap. If they realize that their belief that 3+4=6 is faulty, then they don't really believe it. In contrast, if they don't realize that their belief that 3+4=6 is faulty, then they won't consistently translate arithmetic problems away from the gap, and so my task becomes a simple matter of finding areas where they don't translate problems away from the gap, but instead fall in.
2Alicorn13y
Are you saying that you would not be even a little suspicious and inclined to back off if someone said they'd give you $6.50 in exchange for $3+$3? Not because your belief in arithmetic is shaky, but because your trust that people will give you fifty cents for no obvious reason is nonexistent and there is probably something going on? I'm not denying that in a thought experiment, agents that are wrong about arithmetic can be money-pumped. I'm skeptical that in reality, human beings that are wrong about arithmetic can be money-pumped on an interesting scale.
2[anonymous]13y
In my hypothetical, we can suppose that they are perfectly aware of the existence of the other group. That is, the people who think that 3+4=7 are aware of the people who think that 3+4=6, and vice versa. This will provide them with all the explanation they need for the offer. They will think, "this person is one of those people who think that 3+4=7", and that will explain to them the deal. They will see that the others are trying to profit off them, but they will believe that the attempt will fail, because after all, 3+4=6. As a matter of fact, in my hypothetical the people who believe that 3+4=6 would be just as likely to offer those who believe that 3+4=7 a deal in an attempt to money-pump them. Since they believe that 3+4=6, and are aware of the belief of the others, they might offer the others the following deal: "give us $6.50, and then the next day we will give you $3 and the day after $4." Since they believe that 3+4=6, they will think they are ripping the others off. The thought experiment wasn't intended to be applied to humans as they really are. It was intended to explain humans as they really are by imagining a competition between two kinds of humans - a group that is like us, and a group that is not like us. In the hypothetical scenario, the group like us wins. And I think you completely missed my point, by the way. My point was that arithmetic is not merely a matter of agreement. The truth of a sum is not merely a matter of the majority of humanity agreeing on it. If more than half of humans believed that 3+4=6, this would not make 3+4=6 be true. Arithmetic truth is independent of majority opinion (call the view that arithmetic truth is a matter of consensus within a human group "arithmetic relativism" or "the consensus theory of arithmetic truth"). I argued for this as follows: suppose that half of humanity - nay, more than half - believed that 3+4=6, and a minority believed that 3+4=7. I argued that the minority with the latter belief would have
0Amanojack13y
I agree with this, if that makes any difference.
0Amanojack13y
In sum, you seem to be saying that morality involves arithmetic, and being wrong about arithmetic can hurt me, so being wrong about morality can hurt me.
0[anonymous]13y
There's no particular connection between morality and arithmetic that I'm aware of. I brought up arithmetic to illustrate a point. My hope was that arithmetic is less problematic, less apt to lead us down philosophical blind alleys, so that by using it to illustrate a point I wasn't opening up yet another can of worms.
0Amanojack13y
Then you basically seem to be saying I should signal a certain morality if I want to get on well in society. Well I do agree.
-2Peterdjones13y
Whether someone is judged right or wrong by others has consequences, but the people doing the judging might be wrong. It is still an error to make morality justify itself in terms of instrumental utility, since there are plenty of examples of things that are instrumentally right but ethically wrong, like improved gas chambers.
0[anonymous]13y
Actually being in the right increases your probability of being judged to be in the right. Yes, the people doing the judging may be wrong, and that is why I made the statement probabilistic. This can be made blindingly obvious with an example. Go to a random country and start gunning down random people in the street. The people there will, with probability so close to 1 as makes no real difference, judge you to be in the wrong, because you of course will be in the wrong. There is a reason why people's judgment is not far off from right. It's the same reason that people's ability to do basic arithmetic when it comes to money is not far off from right. Someone who fails to understand that $10 is twice $5 (or rather the equivalent in the local currency) is going to be robbed blind and his chances of reproduction are slim to none. Similarly, someone whose judgment of right and wrong is seriously defective is in serious trouble. If someone witnesses a criminal lunatic gun down random people in the street and then walks up to him and says, "nice day", he's a serious candidate for a Darwin Award. Correct recognition of evil is a basic life skill, and any human who does not have it will be cut out of the gene pool. And so, if you go to a random country and start killing people randomly, you will be neutralized by the locals quickly. That's a prediction. Moral thought has predictive power. The only reason anyone can get away with the mass murder that you allude to is that they have overwhelming power on their side. And even they did it in secret, as I recall learning, which suggests that powerful as they were, they were not so powerful that they felt safe murdering millions openly. Morality is how a human society governs itself in which no single person or organized group has overwhelming power over the rest of society. It is the spontaneous self-regulation of humanity. Its scope is therefore delimited by the absence of a person or organization with overwhelming power. Ev
0AdeleneDawner13y
It sounds to me like you're describing the ability to recognize danger, not evil, there. Say that your hypothetical criminal lunatic manages to avoid the police, and goes about his life. Later that week, he's at a buffet restaurant, acting normally. Is he still evil? Assuming nobody recognizes him from the shooting, do you expect the other people using the buffet to react unusually to him in any way?
0[anonymous]13y
It's not either/or. There is no such thing as a bare sense of danger. For example, if you are about to drive your car off a cliff, hopefully you notice in time and stop. In that case, you've sensed danger - but you also sensed the edge of a cliff, probably with your eyes. Or if you are about to drink antifreeze, hopefully you notice in time and stop. In that case, you've sensed danger - but you've also sensed antifreeze, probably with your nose. And so on. It's not either/or. You don't either sense danger or sense some specific thing which happens to be dangerous. Rather, you sense something that happens to be dangerous, and because you know it's dangerous, you sense danger. Chances are higher than average that if he was a criminal lunatic a few days ago, he is still a criminal lunatic today. Obviously not, because if you assume that people fail to perceive something, then it follows that they will behave in a way that is consistent with their failure to perceive it. Similarly, if you fail to notice that the antifreeze that you're drinking is anything other than fruit punch, then you can be expected to drink it just as if it were fruit punch.
0AdeleneDawner13y
My point was that in the shooting case, the perception of danger is sufficient to explain bystanders' behavior. They may perceive other things, but that seems mostly irrelevant.

You said that correct recognition of evil is a basic life skill, one that any human must have. This claim appears to be incompatible with your expectation that people will not notice your hypothetical murderer when they encounter him acting according to social norms after committing a murder, given that he's supposedly still evil.
0[anonymous]13y
People perceive danger because they perceive evil, and evil is dangerous. It is not irrelevant that they perceive a specific thing (such as evil) which is dangerous. Take away the perception of the specific thing, and they have no basis upon which to perceive danger. Only Spider-Man directly perceives danger, without perceiving some specific thing which is dangerous. And he's fictional.

I was referring to the standard, common ability to recognize evil. I was saying that someone who does not have that ability will be cut out of the gene pool (not definitely - probabilistically, his chances of surviving and reproducing are reduced, and over the generations the effect of this disadvantage compounds). People who fail to recognize that the guy is that same guy from before are not thereby missing the standard human ability to recognize evil.
0Peterdjones13y
Except when the evil guys take over; then you are in trouble if you oppose them.

That doesn't affect my point. If there are actual or conceptual circumstances where instrumental good diverges from moral good, the two cannot be equated.

Why would it be wrong if they do? Your theory of morality seems to be in need of another theory of morality to justify it.
0[anonymous]13y
Which is why the effective scope of morality is limited by concentrated power, as I said. I did not equate moral good with instrumental good in the first place.

I didn't say it would be wrong. I was talking about making predictions. The usefulness of morality in helping you to predict outcomes is limited by concentrated power.

On the contrary, my theory of morality is confirmed by the evidence. You yourself supplied some of the evidence. You pointed out that a concentration of power creates an exception to the prediction that someone who guns down random people will be neutralized. But this exception fits with my theory of morality, since my theory of morality is that it is the spontaneous self-regulation of humanity. Concentrated power interferes with self-regulation.
-2Peterdjones13y
You say that morality is the spontaneous self-regulation of humanity, but you also say that its usefulness lies in helping you predict outcomes, which seems to imply that you are still thinking of morality as something that has to pay its way instrumentally, by making useful predictions. It's a conceptual truth that power interferes with spontaneous self-regulation, but that isn't the point. The point is not whether you have a theory that makes predictions, but whether it is a theory of morality.

It is dubious to say of any society that the way it is organised is ipso facto moral. You have forestalled the relativistic problem by saying that societies must self-organise for equality and justice, not any old way, which takes it as read that equality and justice are Good Things. But an ethical theory must explain why they are good, not rest on them as a given.
0[anonymous]13y
"Has to"? I don't remember saying "has to". I remember saying "does", or words to that effect. I was disputing the following claim: This is factually false, considered as a claim about the real world. I am presenting the hypothesis that, under certain constraints, there is no way for humanity to organize itself but morally or close to morally and that it does organize itself morally or close to morally. The most important constraint is that the organization is spontaneous, that is to say, that it does not rely on a central power forcing everyone to follow the same rules invented by that same central power. Another constraint is absence of war, though I think this constraint is already implicit in the idea of "spontaneous order" that I am making use of, since war destroys order and prevents order. Because humans organize themselves morally, it is possible to make predictions. However, because of the "no central power" constraint, the scope of those predictions is limited to areas outside the control of the central power. Fortunately for those of us who seek to make predictions on the basis of morality, and also fortunately for people in general, even though the planet is covered with centralized states, much of life still remains largely outside of their control.
-1Peterdjones13y
Is that a stipulative definition ("morality" =def "spontaneous organisation"), or is there some independent standard of morality on which it is based?

What about non-centralised power? What if one fairly large group - the gentry, men, citizens, some racial group - has power over another in a decentralised way? And what counts as a society? Can an Athenian slave-owner state that all citizens in their society are equal, and, as for slaves, they are not members of their society?

ETA: Actually, it's worse than that. Not only are there examples of non-centralised power, there are cases where centralised power is on the side of the angels and spontaneous self-organisation on the other side; for instance the Civil Rights struggle, where the federal government backed equality, and the opposition was from the grassroots.
0[anonymous]13y
The Civil Rights struggle was national government versus state government, not government versus people. The Jim Crow laws were laws created by state legislatures, not spontaneous laws created by the people. There is, by the way, such a thing as spontaneous law created by the people even under the state. The book Order Without Law is about this. The "order" it refers to is the spontaneous law - that is, the spontaneous self-government of the people acting privately, without help from the state. This spontaneous self-government ignores and in some cases contradicts the state's official, legislated law. Jim Crow was an example of official state law, and not an example of spontaneous order.
0Peterdjones13y
Plenty of things that happened weren't sanctioned by state legislatures, such as discrimination by private lawyers, hassling of voters during registration drives, and the assassination of MLK.

But law isn't morality. There is such a thing as laws that apply only to certain people, and which support privilege and the status quo rather than equality and justice.
0[anonymous]13y
Legislation distorts society and the distortion ripples outward. As for the assassination, that was a single act. Order is a statistical regularity.

I didn't say it was. I pointed out an example of spontaneous order. It is my thesis that spontaneous order tends to be moral. Much order is spontaneous, so much order is moral, so you can make predictions on the basis of what is moral. That should not be confused with the claim that all order is morality, or that all law is morality - which is the claim that you are disputing, and a claim I did not make.
-1Peterdjones13y
From its primordial state of equality...? I can see how a society that starts equal might self-organise to stay that way. But I don't think they start equal that often.
-2Peterdjones13y
The fact that you are amoral does not mean there is anything wrong with morality, and is not an argument against it. You might as well be saying "there is a perfectly good rational argument that the world is round, but I prefer to be irrational". That doesn't constitute an argument unless you can explain why your winning is the only thing that should matter.
0Amanojack13y
Yeah, I said it's not an argument. Yet again I can only ask, "So what?" (And this doesn't make me amoral in the sense of not having moral sentiments. If you tell me it is wrong to kill a dog for no reason, I will agree, because I will interpret that as, "We both would be disgusted at the prospect of killing a dog for no reason." But you seem to be saying there is something more.)

The wordings "affect my winning" and "matter" mean the same thing to me. I take "The world is round" seriously because it matters for my actions. I do not see how "I'm morally in the wrong"* matters for my actions. (Nor how "I'm pan-galactically in the wrong" matters.)

*EDIT: in the sense that you seem to be using it (quite possibly because I don't know what that sense even is!).
-3Peterdjones13y
So being wrong and not caring that you are in the wrong is not the same as being right. Yes: I am saying that moral sentiments can be wrong, that this can be realised through reason, and that getting morality right matters more than anything.

But they don't mean the same thing. Morality matters more than anything else by definition. You don't prove anything by adopting an idiosyncratic private language. The question is whether mattering for your actions is morally justifiable.
0Amanojack13y
Yet I still don't care, and by your own admission I suffer not in the slightest from my lack of caring.

Zorg says that getting pangalacticism right matters more than anything. He cannot tell us why it matters, but boy it really does matter.

Which would be? If you refer me to the dictionary again, I think we're done here.
-2Peterdjones13y
The fact that you are not going to worry about morality does not make morality a) false, b) meaningless, or c) subjective. Can I take it you are no longer arguing for any of claims a), b), or c)? You have not succeeded in showing that winning is the most important thing.
0Amanojack13y
I've never argued (a); I'm still arguing (actually just informing you) that the words "objective morality" are meaningless to me; and I'm still arguing (c), but only in the sense that it is equivalent to (b): in other words, I can only await some argument that morality is objective. (But first I'd need a definition!)

I'm using the word winning as a synonym for "getting what I want," and I understand the most important thing to mean "what I care about most." And I mean "want" and "care about" in a way that makes it tautological. Keep in mind I want other people to be happy, not suffer, etc. Nothing either of us has argued so far indicates we would necessarily have different moral sentiments about anything.
-2Peterdjones13y
You are not actually being all that informative, since there remains a distinct suspicion that when you say some X is meaningless-to-you, that is a proxy for I-don't-agree-with-it. I notice throughout these discussions that you never reference accepted dictionary definitions as a basis for meaningfulness, but instead always offer some kind of idiosyncratic personal testimony. What is wrong with dictionary definitions?

That doesn't affect anything. You still have no proof for the revised version.

Other people out there in the non-existent Objective World?

I don't think moral anti-realists are generally immoral people. I do think it is an intellectual mistake, whether or not you care about that.
0Amanojack13y
Zorg said the same thing about his pan-galactic ethics. Did you even read the post we're commenting on? Wait, you want proof that getting what I want is what I care about most? Read what I wrote again. Read.
0nshepperd13y
"Changing your aims" is an action, presumably available for guiding with philosophy.
2lukeprog13y
Upvoted for thoughtfulness and thoroughness.

I'm using 'definition' in the common sense: "the formal statement of the meaning or significance of a word, phrase, etc." A stipulative definition is a kind of definition "in which a new or currently-existing term is given a specific meaning for the purposes of argument or discussion in a given context." A conceptual analysis of a term using necessary and sufficient conditions is another type of definition, in the common sense of 'definition' given above. Normally, a conceptual analysis seeks to arrive at a "formal statement of the meaning or significance of a word, phrase, etc." in terms of necessary and sufficient conditions. Using this dictionary sense of the term 'define', I would speak (in my language) of conceptual analysis as a particular way of defining a term, since the end result of a conceptual analysis is meant to be a "formal statement of the meaning or significance of a word, phrase, etc."

I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn't want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).

And I do think my opening offers an accurate example of conceptual analysis. Albert and Barry's arguments about the computer microphone and hypothetical aliens are meant to argue about their intuitive concepts of 'sound', and what set of necessary and sufficient conditions they might converge upon. That's standard conceptual analysis method. The reason this process looks silly to us (when using a non-standard example like 'sound') is that it is so unproductive. Why think Albert and Barry have the same concept in mind? Words mean slightly different things in different cultures, subcultures, ...
2BobTheBob13y
You are surely right that there is no point in arguing over definitions in at least one sense - esp. the definition of "definition". Your reply is reasonable, and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. I hope the following engages constructively with your comments.

Suppose:

* we have two people, Albert and Barry
* we have one thing, a car, X, of determinate interior volume
* we have one sentence, S: "X is a subcompact"
* Albert affirms S, Barry denies S.

Scenario (1): Albert and Barry agree on the standard definition of 'subcompact' - a car is a subcompact just in case 2 407 L < car volume < 2 803 L - but they disagree as to the volume of X. Clearly a factual disagreement.

Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of 'subcompact' (a visit to Wikipedia would resolve the matter). This is a disagreement about standard definitions, and isn't anything people should engage in for long, I agree.

Scenario (3): Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn't be classified as subcompact - i.e., X isn't really subcompact, notwithstanding the received definition. This doesn't have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural - if vague - groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter - a parking-challenged jurisdiction might offer a fee discount for subcompact owners.

I would call this a disagreement about the concept of 'subcompact car'. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2). Argument in scenarios 1 and 2 is futile - there is an acknowledged objective ...
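Scenario (3)'s graphing idea can be made concrete. Here is a minimal sketch in Python, using entirely hypothetical interior volumes and an assumed bin width, of how one might look for the natural gaps Barry has in mind:

```python
# Hypothetical car interior volumes in litres (illustrative only).
from collections import Counter

volumes = [2350, 2380, 2400, 2430, 2750, 2780, 2820, 2850, 3100]
increment = 100  # bin width in litres; varying this is part of the exercise

# Count how many models fall into each volume bin.
bins = Counter((v // increment) * increment for v in volumes)

for b in range(min(bins), max(bins) + increment, increment):
    print(f"{b}-{b + increment} L: {'#' * bins[b]}")

# Empty bins between clusters suggest 'natural' cutoffs, which need not
# coincide with the stipulated 2 407 L / 2 803 L boundaries.
```

Whether the gaps survive at other bin widths is exactly the kind of empirical question Barry's position turns on.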
2lukeprog13y
As I see it, your central point is that conceptual analysis is useful because it results in a particular kind of process: the clarification of our intuitive concepts. Because our intuitive concepts are so muddled, and not as clear-cut and useful as a stipulated definition such as the IAU's definition for 'planet', I fail to see why clarifying our intuitive concepts is a good use of all that brain power. Such work might theoretically have some value for the psychology of concepts and for linguistics, and yet I suspect neither science would miss philosophy if philosophy went away. Indeed, scientific psychology is often said to have 'debunked' conceptual analysis because concepts are not processed in our brains in terms of necessary and sufficient conditions.

But I'm not sure I'm reading you correctly. Why do you think it's useful to devote all that brainpower to clarifying our intuitive concepts of things?
3BobTheBob13y
I think that where we differ is on 'intuitive concepts' - what I would want to call just 'concepts'. I don't see that stipulative definitions replace them. Scenario (3), and even the IAU's definition, illustrate this. It is coherent for an astronomer to argue that the IAU's definition is mistaken. This implies that she has a more basic concept - which she would strive to make explicit in arguing her case - different than the IAU's. For her to succeed in making her case - which is imaginable - people would have to agree with her, in which case we would have at least partially to share her concept. The IAU's definition tries to make explicit our shared concept - and to some extent legislates, admittedly - but it is a different sort of animal than what we typically use in making judgements.

Philosophy doesn't impact non-philosophical activities often, but when it does the impact is often quite big. Some examples: the influence of Mach on Einstein, of Rousseau and others on the French and American revolutions, of Mill on the emancipation of women and freedom of speech, of Adam Smith on economic thinking.

I consider, though, that the clarification is an end in itself. This site proves - what's obvious anyway - that philosophical questions naturally have a grip on thinking people. People usually suppose the answer to any given philosophical question to be self-evident, but equally we typically disagree about what the obvious answer is. Philosophy is about elucidating those disagreements. Keeping people busy with activities which don't turn the planet into more non-biodegradable consumer durables is fine by me. More productivity would not necessarily be a good thing (...to end with a sweeping undefended assertion).
-1Peterdjones13y
OTOH, there is a class of fallacies (the No True Scotsman argument, tendentious redefinition, etc.) which are based on getting stipulative definitions wrong. Getting them right means formalisation of intuition, or common usage, or something like that.
0lukeprog13y
To point people to some additional references on conceptual analysis in philosophy:

Audi's (1983, p. 90) "rough characterization" of conceptual analysis is, I think, standard: "Let us simply construe it as an attempt to provide an illuminating set of necessary and sufficient conditions for the (correct) application of a concept."

Or, Ramsey's (1992) take on conceptual analysis: "philosophers propose and reject definitions for a given abstract concept by thinking hard about intuitive instances of the concept and trying to determine what their essential properties might be."

Sandin (2006) gives an example of just this procedure, and it is precisely what Albert and Barry are doing with regard to 'sound'.

----------------------------------------

Audi (1983). The Applications of Conceptual Analysis. Metaphilosophy, 14: 87-106.
Ramsey (1992). Prototypes and Conceptual Analysis. Topoi, 11: 59-70.
Sandin (2006). Has psychology debunked conceptual analysis? Metaphilosophy, 37: 26-33.
1Eugine_Nier13y
Eliezer does have a post in which he talks about doing what you call conceptual analysis more-or-less as you describe and why it's worthwhile. Unfortunately, since that's just one somewhat obscure post whereas he talks about tabooing words in many of his posts, when LWrongers encounter conceptual analysis, their cached thought is to say "taboo your words" and dismiss the whole analysis as useless.
2wedrifid13y
The 'taboo X' reply does seem overused. It is something that is sometimes best to just ignore when you don't think it aids in conveying the point you were making.
0Eugine_Nier13y
When I try that, I tend to get down-votes and replies complaining that I'm not responding to their arguments.
0wedrifid13y
I don't know the specific details of the instances in question. One thing I am sure about, however, is that people can't downvote comments that you don't make. Sometimes a thread is just a lost cause. Once things get polarized it often makes no difference at all what you say. Which is not to say I am always wise enough to steer clear of arguments. Merely that I am wise enough to notice when I do make that mistake. ;)
0Will_Sawin13y
I do not think that he is describing conceptual analysis. Starting with a word vs. starting with a set of objects makes all the difference.
0Eugine_Nier13y
In the example he does start with a word, namely 'art', then uses our intuition to get a set of examples. This is more-or-less how conceptual analysis works.
0Will_Sawin13y
But he's not analyzing "art", he's analyzing the set of examples, and that is all the difference.
0Eugine_Nier13y
I disagree. Suppose, after proposing a definition of art based on the listed examples, someone produced another example that clearly satisfied our intuitions of what constitutes art but didn't satisfy the definition. Would Eliezer: a) say "sorry, despite our intuitions that example isn't art by definition", or b) conclude that the example was art and that there was a problem with the definition? I'm guessing (b).
0Will_Sawin13y
He's not trying to define art in accord with our collective intuitions; he's trying to find the simplest boundary around a list of examples based on an individual's intuitions. I would argue that the list of examples in the article is abbreviated for simplicity. If there is no single clear simple boundary between the two sets, one can always ask for more examples. But one asks an individual, not all of humanity.
0Eugine_Nier13y
I would argue he's trying to find the simplest coherent extrapolation of our intuitions.
-1bcoburn13y
Why do we even care about what specifically Eliezer Yudkowsky was trying to do in that post? Isn't "is it more helpful to try to find the simplest boundary around a list or the simplest coherent explanation of intuitions?" a much better question? Focus on what matters, work on actually solving problems instead of trying to just win arguments.
0Will_Sawin13y
The answer to your question is "it depends on the situation". There are some situations in which our intuitions contain some useful, hidden information which we can extract with this method. There are some situations in which our intuitions differ and it makes sense to consider a bunch of separate lists.

But, regardless, it is simply the case that when Eliezer says "Perhaps you come to me with a long list of the things that you call "art" and "not art"" and "It feels intuitive to me to draw this boundary, but I don't know why - can you find me an intension that matches this extension? Can you give me a simple description of this boundary?", he is not talking about "our intuitions", but about a single list provided by a single person.

(It is also the case that I would rather talk about that than whatever useless thing I would instead be doing with my time.)
0Amanojack13y
Eliezer's point in that post was that there are more and less natural ways to "carve reality at the joints." That however much we might say that a definition is just a matter of preference, there are useful definitions and less useful ones. The conceptual analysis lukeprog is talking about does call for the rationalist taboo, in my opinion, but simply arguing about which definition is more useful as Eliezer does (if we limit conceptual analysis to that) does not.

Analysis [had] one of two reputations. On the one hand, there was sterile cataloging of pointless folk wisdom - such as articles analyzing the concept VEHICLE, wondering whether something could be a vehicle without wheels. This seemed like trivial lexicography.

This work is useful. Understanding how people conceptualize and categorize is the starting point for epistemology. If Wittgenstein hadn't asked what qualified as a game, we might still be trying to define everything in terms of necessary and sufficient conditions.

1lukeprog13y
I largely disagree, for these reasons.
0Will_Sawin13y
Wasn't the whole point of Wittgenstein's observation that the question of whether something can be a vehicle without wheels is pretty much useless?

(I'll reiterate some standard points, maybe someone will find them useful.)

The explicit connection you make between figuring out what is right and fixing people's arguments for them is a step in the right direction. Acting in this way is basically the reason it's useful to examine the physical reasons behind your own decisions or beliefs, even though such reasons don't have any normative power (that your brain tends to act a certain way is not a very good argument for acting that way). Understanding these reasons can point you to a step where the reasoning... (read more)

2Nisan13y
I appreciate that this is a theoretical problem. Have you seen any evidence that this is or is not a problem in our particular world?
3lessdazed13y
People tend to prefer "just being told the answer", whereas forcing them to work through problem sets teaches them better.

~~~~~

People dislike articulating answers to rhetorical questions about what seems obvious, as this would force them to admit to being surprised by an eventual conclusion. That is a state that can be emotionally uncomfortable, yet the discomfort is linked with embedding the conclusion in memory, and it also forces them to face the reality that neighboring beliefs need updating in light of the surprising conclusion, because the conclusion was a surprise to them. The above is steeped in my theory behind a phenomenon that you may have better competing theories for: that people dislike rhetorical questions. Note that other theories are obvious but not entirely competitive with mine.

META: I have divided my post with tildes because what seemed in my own mind a minute ago to be two roughly equivalent answers to Nisan's question has unraveled into responses of different quality; this is surprising to me, and if there is anything to learn from it, I only found it out by trying my fingertips at typing an answer to the question. The tildes also represent that I empathize with anyone downvoting this comment, because everything below the tildes is too wordy and low quality; my first response (above the tildes) I think is really insightful.

META-META: I've been bemused by my inability to predict how others perceive my comments, but I've recently noticed a pattern: meta comments like this one are likely to get a uniformly positive or negative response (I'm still typing it out and sticking out my neck [in the safety of pseudonymity] as they are often well received), and I'd appreciate advice on how I could or should have written this post differently for it to be better, if it is flawed as I suspect it is. One thing I am trying out for the first time is the META and META-META tags. Is there a better (or more standardized) way to do this?
-1Barry_Cotter13y
The first sentence seems banal, the second interesting. I suspect this is like the take five minutes technique, you thought better because you thought longer. The second paragraph after the tildes seems unnecessary to me.
0lessdazed13y
Thanks.

Upvoted for lucidity, but Empathetic Metaethics sounds more like the whole rest of LessWrong than metaethics specifically.

If there are supposed to be any additional connotations to Empathetic Metaethics it would make me very wary. I am wary of the connotation that I need someone to help me decide whether my feelings align with the Truth. I always assumed this site is called LessWrong because it generally tries to avoid driving readers to any particular conclusion, but simply away from misguided ones, so they can make their own decisions unencumbered by bi... (read more)

4lukeprog13y
We are trying to be 'less wrong' because human brains are so far from ideal at epistemology and at instrumental rationality ('agency'). But it's a standard LW perspective to assert that there is a territory, and some maps of (parts of) it are right and others are wrong. And since we are humans, it helps to retrain our emotions: "Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts."
3Amanojack13y
I'd rather call this "self-help" than "meta-ethics." Why self-help? Because... ...even if my emotions are "wrong," why should I care? In this case, the answer can only be that it will help me derive more satisfaction out of life if I get it "right", which seems to fall squarely under the purview of self-help. Of course we can draw the lines between meta-ethics and self-help in various ways, but there is so much baggage in the label "ethics" that I'd prefer to get away from it as soon as possible.
1[anonymous]13y
As a larger point, separate from the context of lukeprog's particular post: What you assumed above will not always be possible. If models M0...Mn are all misguided, and M(n+1) isn't, driving readers away from misguided models necessarily drives them to one particular conclusion, M(n+1).
0lessdazed13y
I'm not sure what this means. Could you elaborate? What I imagine you to mean seems similar to the sentiment expressed in the first comment to this blog post. That comment seems to me to be so horrifically misguided that I had a strong physiological response to reading it. Basically the commenter thought that since he doesn't experience himself as following rules of formulating thoughts and sentences, he doesn't follow them. This is a confusion of the map and territory that stuck in my memory for some reason, and your comment reminded me of it because you seem to be expressing a very strong faith in the accuracy of how things seem to you. Feel free to just explain yourself without feeling obligated to read a random blog post or telling me how I am misreading you, which would be a side issue.
0Amanojack13y
I think my response to lukeprog above answers this in a way, but it's more just a question of what we mean by "help me decide." I'm not against people helping me be less wrong about the actual content of the territory. I'm just against people helping me decide how to emotionally respond to it, provided we are both already not wrong about the territory itself. If I am happy because I have plenty of food (in the map), but I actually don't (in the territory), I'd certainly like to be informed of that. It's just that I can handle the transition from happy to "oh shit!" all by myself, thank you very much. In other words, my suspicion of anyone calling themselves an Empathetic Metaethicist is that they're going to try to slide in their own approved brand of ethics through the back door. This is also a worry I have about CEV. Hopefully future posts will alleviate this concern.
0lessdazed13y
If you mean that, in service of my goal of satisfying my actual desires, there is more of a danger of being misled when getting input from others as to whether my emotions are a good match for reality than when getting input as to whether reality matches my perception of it, I tentatively agree.

If you mean that getting input from others as to whether my emotions are a good match for reality has a greater cost than benefit, I disagree, assuming basic advice filters similar to those used when getting input as to whether reality matches my perception of it. As per above, there will, all else equal, be a lower expected payoff for me getting advice in this area, even though the advantages are similar.

If you mean that there is a fundamental difference in kind between matching perception to reality and matching emotions to perceptions that makes getting input an act that is beneficial in the former case and corrosive in the latter, I disagree. I have low confidence regarding what emotions are most appropriate for various crises and non-crises, and suspect what I think of as ideal are at best local peaks with little chance of being optimal. In addition, what I think of as optimal emotional responses are likely to be too resistant to exceptions. E.g., if one is trapped in a mine shaft, the emotional response suitable for typical cases of being trapped is likely to consume too much oxygen. I'm generally open to ideas regarding what my emotions should be in different situations, and how I can act to change my emotions.

A lot of the issue with things like conceptual analysis, I think, is that people do them badly, and then others have to step in and waste even more words to correct them. If the worst three quarters of philosophers suddenly stopped philosophizing, the field would probably progress faster.

3lessdazed13y
Agreed as literally stated, and also agree with your implication: this is especially true for philosophy in addition to other fields in which this is also true. "other fields in which this is also true" is intentionally ambiguous, half implying that this is basically true for all other fields and half implying it's only true for a small subset, as I'm undecided as to which is the case.
3MBlume13y
net negative productivity programmer

As one example, consider some commonly used definitions for 'morally good':

  • that which produces the most pleasure for the most people
  • that which is in accord with the divine will
  • ...

Those aren't definitions of 'morally good'. They are theories of the morally good. I seriously doubt that there are any real philosophers that are confused about the distinction.

1lukeprog13y
Right, but part of each of these theories is that using one set of definitions for moral terms is better than using another set of definitions, often for reasons similar to the network-style conceptual analysis proposed by Jackson.
1Perplexed13y
If you are saying that meta-ethical definitions can never be perfectly neutral wrt a choice between ethical theories, then I have to agree. Every ethical theory comes dressed in a flattering meta-ethical evening gown that reveals the nice stuff but craftily hides the ugly bits. But that doesn't mean that we shouldn't at least strive for neutrality. Personally, I would prefer to have the definition of "morally good" include consequential goods, deontological goods, and virtue goods. If the correct moral theory can explain this trinity in terms of one fundamental kind of good, plus two derived goods, well that is great. But that work is part of normative ethics, not meta-ethics. And it certainly is not accomplished by imposing a definition.
0lukeprog13y
I'm doing a better job of explaining myself over here.
-2Peterdjones13y
All of those already include the pre-theoretic notion of "good".
0Perplexed13y
Correct. Which is why I think it is a mistake if they are not accounted for in the post-theoretic notion.
-2Peterdjones13y
But then confusion about definitions is actually confusion about theories.
-2Peterdjones13y
The idea that people by default have no idea at all what moral language means is hard to credit, whether claimed of people in general, or claimed by individuals of themselves. Everyone, after all, is brought up from an early age with a great deal of moral exhortation, to do Good things and refrain from Naughty things. Perhaps not everybody gets very far along the Kohlberg scale, but no one is starting from scratch. People may not be able to articulate a clear definition, or not the kind of definition one would expect from a theory, but that does not mean one needs a theory of metaethics to give a meaning to "moral".
0Perplexed13y
No. One only needs a theory of metaethics to prevent philosophers from giving it a disastrously wrong meaning.
-2Peterdjones13y
exactly what I wanted to say!

Eliezer advises against reading mainstream philosophy because he thinks it will "teach very bad habits of thought that will lead people to be unable to do real work".

Alternative hypothesis: it will teach good habits of thought that will allow people to recognise bad amateur philosophy.

2dxu9y
It is unlikely that you will gain these "good habits of thought" allowing you to recognize "bad amateur philosophy" from reading mainstream philosophy when much of mainstream philosophy consists of what (I assume) you're calling "bad amateur philosophy".
1wedrifid9y
No, much of it is bad professional philosophy. It's like bad amateur philosophy except that students are forced to pretend it matters.
1TheAncientGeek9y
No. I'm calling the Sequences bad amateur philosophy.
2dxu9y
If that's the case, I'd like to hear your reasoning behind this statement.
3TheAncientGeek9y
1. A significant number of postings don't argue towards a discernible point.
2. A significant number of postings don't argue their point cogently.
3. Lack of awareness of standard counterarguments and alternative theories.
4. Lack of appropriate response to objections.

None of this has anything to do with which answers are right or wrong. It is a form of the fallacy of grey to argue that since no philosophy comes up with definite answers, it is all equally a failure. Philosophy isn't trying to be science, so it isn't broken science.

On 1: a quick way of confirming this point might be to attempt to summarize the Less Wrong theory of ethics.

On 2: particularly the ones written as dialogues. I share Massimo Pigliucci's frustration.

On 3 and 4: there's an example here. A poster makes a very pertinent objection to the main post. No one responds, and the main post is to this day bandied around as establishing the point. Things don't work like that. If someone returns your serve, you're supposed to hit back, not walk off the court and claim the prize.

A knowledge of philosophy doesn't give you a basis of facts to build on, but it does load your brain with a network of argument and counterargument, and can prevent you wasting time by mounting elaborate defences of claims to which there are well known objections.
3Vaniver9y
It seems to me that there are two views of philosophy that are useful here: one of them I'll term perspective, or a particular way of viewing the world, and the other one is comparative perspectives. That term is deliberately modeled after comparative religion because I think the analogy is useful; typically, one develops the practice of one's own religion and the understanding of other religions. It seems to me that the Sequences are a useful guide for crystallizing the 'LW perspective' in readers, but are not a useful guide for placing the 'LW perspective' in the history of perspectives. (For that, one's better off turning to lukeprog, who has a formal education in philosophy.) Perhaps there are standard criticisms other perspectives make of this perspective, but whether or not that matters depends on whether you want to argue about this perspective or inhabit this perspective. If the latter, a criticism is not particularly interesting, but a patch is interesting. That is to say, I think comparative perspectives (i.e. studying philosophy formally) has value, but it's a narrow kind of value and like most things the labor involved should be specialized. I also think that the best guide to philosophy X for laymen and the best guide to philosophy X for philosophers will look different, and Eliezer's choice to optimize for laymen was wise overall.

Most of the content in the sequences isn't new as such, but it did draw from many different sources, most of which were largely confined to academia. In synthesis, the product is pretty original. To the best of my knowledge, the LessWrong perspective/community has antecedents but not an obvious historical counterpart.

In that light, I'd expect the catalyzing agent for such a perspective to be the least effective such agent that could successfully accomplish the task. (Or: to be randomly selected from the space of all possible effective agents, which is quite similar in practice.) We are the tool-users not because hominids are optimized for tool use, but because we were the first ones to do so with enough skill to experience a takeoff of civilization. So it's pretty reasonable to expect the sequences to be a little wibbly.

To continue your religious metaphor, Paul wrote in atrocious Greek, had confusingly strong opinions about manbeds, and made it in to scripture because he was instrumental in building the early church communities. Augustine persuasively developed a coherent metaphysic for the religion that reconciled it with the mainstream Neoplatonism of the day, helping to cl... (read more)

0Vaniver9y
I wish I had more than one upvote to give this comment; entirely agreed.
2Toggle9y
Thank you! The compliment works just as well.
-2TheAncientGeek9y
...and it's not too important what the community is crystallized around? Believing in things you can't justify or explain is something that an atheist community can safely borrow from religion?
4Vaniver9y
Of course it's important. What gives you another impression? It's not clear to me where you're getting this. To be clear, I think that the LW perspective has different definitions of "believe," "justify," and "explain" from traditional philosophy, but I don't think that it gets its versions from religion. I also think that atheism is a consequence of LW's epistemology, not a foundation of it. (As a side note, the parts of religion that don't collapse when brought into a robust epistemology are solid enough to build on, and there's little to be gained by turning your nose up at their source.) In this particular conversation, the religion analogy is used primarily in a social and historical sense. People believe things; people communicate and coordinate on beliefs. How has that communication and coordination happened in the past, and what can we learn from that?
-2TheAncientGeek9y
We can learn that "all for the cause, whatever it is" is a failure of rationality. I think the LW perspective has the same definitions...but possibly different theories from the various theories of traditional philosophy. (It also looks like LW has a different definition if "definition", which really confuses things) Religious epistemology - dogmatism+vagueness - is just the problem
3Vaniver9y
Entirely agreed. I don't see the dogmatism you're noticing--yes, Eliezer has strong opinions on issues I don't think he should have strong opinions on, but those strong opinions are only weakly transmitted to others and you'll find robust disagreement. Similarly, the vagueness I've noticed tends to be necessary vagueness, in the sense of "X is an open problem, but here's my best guess at how X will be solved. You'll notice that it's fuzzy here, there, and there, which is why I think the problem is still open."
-9TheAncientGeek9y
0TheAncientGeek9y
"Crystalising" you team clarifying, or defending. Communicating the content of a claim is of llimited use, unless you can make it persuasive. That in turn, requires defending it against alternatives. So the function you are trying to separate are actually very interconnected. (Another disanalogy between philosophy and religion is that philosophy is less holistic, working more at the claim level)
3Vaniver9y
I mean clarifying. I use that term because some people look at the Sequences and say "but that's all just common sense!". In some ways it is, but in other ways a major contribution of the Sequences is to not just let people recognize that sort of common sense but reproduce it. I understand that clarification and defense are closely linked, and am trying to separate intentionality more than I am methodology. I consider 'stoicism' to be a 'philosophy,' but I notice that Stoics are not particularly interested in debating the finer points of abstractions, and might even consider doing so dangerous to their serenity relative to other activities. A particularly Stoic activity is negative visualization- the practice of imagining something precious being destroyed, to lessen one's anxiety about its impermanence through deliberate acceptance, and to increase one's appreciation of its continued existence. One could see this as an unconnected claim put forth by Stoics that can be evaluated on its own merits (we could give a grant to a psychologist to test whether or not negative visualization actually works), but it seems to me that it is obvious that in the universe where negative visualization works, Stoics would notice and either copy the practice from its inventors or invent it themselves, because Stoicism is fundamentally about reducing anxiety and achieving serenity, and this seems amenable to a holistic characterization. (The psychologist might find that negative visualization works differently for Stoics than non-Stoics, and might actually only be a good idea for Stoics.)
1TheAncientGeek9y
Your example of "a philosophy" is pretty much a religion, by current standards. By philosophy I meant the sort of thing typified by current anglophone philosophy.
4Toggle9y
That may be the disjunction. Current anglophone philosophy is basically the construction of an abstract system of thought, valued for internal rigor and elegance but largely an intellectual exercise. Ancient Greek philosophies were eudaimonic: instrumental constructions designed to promote happiness. Their schools of thought - literal schools where one could go - were social communities oriented around that goal. The Sequences are much more similar to the latter ('rationalists win' + meetups), although probably better phrased as utilitarian rather than eudaimonic. Yudkowsky and Sartre are basically not even playing the same game.
-3TheAncientGeek9y
I'm delighted to hear that Clippy and Newcomb's box are real-world, happiness-promoting issues!
4Nornagest9y
Clippy is pretty speculative, but analogies to Newcomb's problem come up in real-world decision-making all the time; it's a dramatization of a certain class of problem arising from decision-making between agents with models of each other's probable behavior (read: people that know each other), much like how the Prisoner's Dilemma is a dramatization of a certain type of coordination problem. It doesn't have to literally involve near-omniscient aliens handing out money in opaque boxes.
0Lumifer9y
Does it? It seems to me that once Omega stops being omniscient and becomes, basically, your peer in the universe, there is no argument not to two-box in Newcomb's problem.
4MarkusRamikin9y
Seems to me like you only transformed one side of the equation, so to speak. Real-life Newcomblike problems don't involve Omega, but they also don't (mainly) involve highly contrived thought-experiment-like choices with respect to which we are not prepared to model each other.
0Lumifer9y
That seems to me to expand Newcomb's Problem greatly - in particular, into the area where you know you'll meet Omega and can prepare by modifying your internal state. I don't want to argue definitions, but my understanding of Newcomb's Problem is much narrower (the Wikipedia statement of it, for instance), and that's clearly not the situation of Joe and Kate.
3dxu9y
Perhaps, but it is my understanding that an agent who is programmed to avoid reflective inconsistency would find the two situations equivalent. Is there something I'm missing here?
-4Lumifer9y
I don't know what "an agent who is programmed to avoid reflective inconsistency" would do. I am not one and I think no human is.
3dxu9y
Reflective inconsistency isn't that hard to grasp, though, even for a human. All it's really saying is that a normatively rational agent should consider the questions "What should I do in this situation?" and "What would I want to pre-commit to do in this situation?" equivalent. If that's the case, then there is no qualitative difference between Newcomb's Problem and the situation regarding Joe and Kate, at least to a perfectly rational agent. I do agree with you that humans are not perfectly rational. However, don't you agree that we should still try to be as rational as possible, given our hardware? If so, we should strive to fit our own behavior to the normative standard--and unless I'm misunderstanding something, that means avoiding reflective inconsistency.
0Lumifer9y
I don't consider them equivalent.
2dxu9y
Fair enough. I'm not exactly qualified to talk about this sort of thing, but I'd still be interested to hear why you think the answers to these two ought to be different. (There's no guarantee I'll reply, though!)
-6Lumifer9y
2TheOtherDave9y
What, on your view, is the argument for not two-boxing with an omniscient Omega? How does that argument change with a non-omniscient but skilled predictor?
0Lumifer9y
If Omega is omniscient, the two actions (one- and two-boxing) each have a certain outcome with probability 1, so you just pick the better outcome. If Omega is just a skilled predictor, there is no certain outcome, so you two-box.
3dxu9y
You are facing a modified version of Newcomb's Problem, which is identical to standard Newcomb except that Omega now has 99% predictive accuracy instead of ~100%. Do you one-box or two-box?
-6Lumifer9y
2wedrifid9y
Unless you like money and can multiply, in which case you one-box and end up (almost but not quite certainly) richer.
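To make the multiplication explicit, here is a minimal expected-value sketch in Python. It assumes the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one; the thread itself only stipulates the 99% accuracy, so the dollar amounts are an assumption:

    # Expected value of one-boxing vs. two-boxing against a 99%-accurate
    # predictor. Dollar amounts are the standard (assumed) Newcomb payoffs.
    ACCURACY = 0.99          # probability the predictor guesses your choice
    BIG, SMALL = 1_000_000, 1_000

    # One-boxing: you get BIG only when the predictor foresaw one-boxing.
    ev_one_box = ACCURACY * BIG                    # $990,000

    # Two-boxing: you always get SMALL, plus BIG when the predictor
    # (mistakenly) expected you to one-box.
    ev_two_box = SMALL + (1 - ACCURACY) * BIG      # $11,000

    print(f"EV(one-box) = ${ev_one_box:,.0f}")
    print(f"EV(two-box) = ${ev_two_box:,.0f}")

On those assumptions, one-boxing is worth roughly ninety times as much in expectation, which is the multiplication being pointed at here.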
-9Lumifer9y
1Nornagest9y
Think of the situation in the last round of an iterated Prisoner's Dilemma with known bounds. Because of the variety of agents you might be dealing with, the payoffs there aren't strictly Newcomblike, but they're closely related; there's a large class of opposing strategies (assuming reasonably bright agents with some level of insight into your behavior, e.g. if you are a software agent and your opponent has access to your source code) which will cooperate if they model you as likely to cooperate (but, perhaps, don't model you as a CooperateBot) and defect otherwise. If you know you're dealing with an agent like that, then defection can be thought of as analogous to two-boxing in Newcomb.
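A toy simulation may make that class of strategies concrete. This is a minimal sketch, not anything from the comment itself: the payoffs are the standard Prisoner's Dilemma values (T=5, R=3, P=1, S=0), and `mirror` stands in for an opponent that cooperates exactly when its noisy model of you predicts cooperation:

    import random

    def mirror(your_disposition, model_accuracy=0.9):
        # An opponent of the kind described above: it cooperates iff its
        # (noisy) model of you predicts that you will cooperate.
        reads_you_correctly = random.random() < model_accuracy
        return your_disposition if reads_you_correctly else not your_disposition

    def payoff(you_cooperate, they_cooperate):
        # Standard (assumed) Prisoner's Dilemma payoffs.
        if you_cooperate and they_cooperate:
            return 3          # mutual cooperation (R)
        if you_cooperate:
            return 0          # you cooperate, they defect (S)
        if they_cooperate:
            return 5          # you defect, they cooperate (T)
        return 1              # mutual defection (P)

    def average_payoff(disposition, trials=100_000):
        return sum(payoff(disposition, mirror(disposition))
                   for _ in range(trials)) / trials

    print(average_payoff(True))    # cooperate: ~ 0.9*3 + 0.1*0 = 2.7
    print(average_payoff(False))   # defect:    ~ 0.9*1 + 0.1*5 = 1.4

Against a good-enough mirror, the defecting disposition - the analogue of two-boxing - comes out behind, which is the Newcomblike structure described above.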
1Vaniver9y
You may note that, several posts ago, I noticed the word 'philosophy' was not useful and tried to substitute other, less loaded terms in order to communicate my meaning more effectively. This is a specific useful technique with multiple subcomponents (noticing that it's necessary, deciding how to separate the concepts, deciding how to communicate the separation) that I've gotten better at because of time spent here. Yes, comparative perspectives is much more about claims and much less about holism than any individual perspective - but for a person, the point of comparing perspectives is to choose one, whereas for a professional arguer the point of comparing perspectives is to be able to argue more winningly, and so the approaches and paths they take will look rather different.
-2TheAncientGeek9y
Professionals are quite capable of passionately backing a particular view. If amateurs are uninterested in arguing - your claim, not mine - that means they are uninterested in truth-seeking. People who adopt beliefs they can't defend are adopting beliefs as clothing.
2dxu9y
1 and 2 seem to mostly be objections to the presentation of the material as opposed to the content. Most of these criticisms are ones I agree with, but given the context (the Sequences being "bad amateur philosophy"), they seem largely tangential to the overall point. There are plenty of horrible math books out there; would you use that fact to claim that math itself is flawed? As for 3 and 4, I note that the link you provided is not an objection per se, but more of an expression of surprise: "What, doesn't everyone know this?" Note also that this comment actually has a reply attached to it, which rather undermines your point that "people on LW don't respond to criticisms". I'm sure you have other examples of objections being ignored, but in my opinion, this one probably wasn't the best example to use if you were trying to make a point.
-2TheAncientGeek9y
Not in the sense that I don't like the font. Lack of justification or point are serious issues. EDIT: I have already said that this isn't about what is right or wrong. I can find out what math is from good books. If the Sequences are putting forward original ideas, I have nowhere else to go. Of course, in many cases I can't tell whether they are, and the author can't tell me whether his philosophy is new, because he doesn't know the old philosophy.
[-][anonymous]9y00

The dichotomy between the Austere and the Empathic metaethicist may well be false. I'd like to see more support for it, and specifically for the implicit claim that a question cannot be coherent unless we fully understand all its terms. Answering that claim may involve asking whether we can refer to something with a term even when we do not fully understand what we are referring to (although the answer to that is surely "yes!").

I think that some basic conceptual analysis can be important for clarifying discussion, given that many of these words are used and will continue to be used. For example, it is useful to know that "justified true belief" is a useful first approximation of what is meant by knowledge, but that the situation is actually slightly more complicated than that.

On the other hand, I don't expect that this will work for all concepts. Some concepts are extremely slippery and will lack enough of a shared meaning for us to provide a single definition. In such cases, we can simply point out the key features that they tend to have in common.

[-][anonymous]13y00

Some thoughts on this and related LW discussions. They come a bit late - apols to you and commentators if they've already been addressed or made in the commentary:

1) Definitions (this is a biggie).

There is a fair bit of confusion on LW, it seems to me, about just what definitions are and what their relevance is to philosophical and other discussion. Here's my understanding - please say if you think I've gone wrong.

If, in the course of philosophical debate, I explicitly define a familiar term, my aim in doing so is to remove the term from debate - I fix the...

What happened to philosophers like Hume who tried to avoid "mere disputes of words"? Seriously, given how much many 20th-century philosophers liked Hume, especially the first book of the Treatise (e.g., the positivists), why didn't they pick up on that?

(I seem to remember some flippant remark in the Treatise making fun of philosophers for these disputes, but Google finds me nothing.)

5Tyrrell_McAllister13y
Getting hung up on the meanings of words is an attractor. Even if your community starts out consciously trying to avoid it, it's very easy to get sucked back in. Here is a likely sequence of steps:

1. All this talk about words is silly! We care about actually implementing our will in the real world!
2. Of course, we want to implement our will precisely. We need to know how things are precisely and how we want them to be precisely, so that we can figure out what we should do precisely.
3. So, we want to formulate all this precise knowledge and to perform precise actions. But we're a community, so we're going to have to communicate all this knowledge and these plans among ourselves. Thus, we're going to need a correspondingly precise language to convey all these precise things to one another.
4. Okay, so let's get started on that precise language. Take the word A. What, precisely, does it mean? Well, what precisely are the states of affairs such that the word A applies? Wait, what precisely is a "state of affairs"?

...And down the rabbit-hole you go.

I would like to see some enlargement on the concept of definition. It is usually treated as a simple concept: A means B or C or D, which one depending on Z. But when we try to pin down C, for instance, we find that it carries a lot of baggage - emotional, framing, stylistic, etc. So do B and D, and in no case is the baggage of any of them the same as the baggage of A. None of the usual remedies - defining terms, tabooing words, or coining new words - really works all that well in the real world, although they of course help. Do you see a way around this fuzziness?

Another 'morally good' definition for your list is 'that which will not make the doer feel guilty or ashamed in the future'. It is no better than the others, but quite different.

1fubarobfusco13y
I don't like this one. It implies that successful suicide is always morally good.

I don't think you're arguing against conceptual analysis; rather, you want to treat a particular conceptual analysis (reductive physicalism) as gospel. What is the claim that there are two definitions of sound we can confuse - the acoustic vibrations in the air and the auditory experience in a brain - if not a reductive conceptual analysis of the concept of sound?

3lukeprog13y
Like I said at the beginning:

The definition of "right action" is the kind of action you should do.

You don't need to know what "should" means; you just need to do what you should do and not do what you shouldn't do.

One should be able to cash out arguments about the "definition" of "right" as arguments about the actual nature of shouldness.

6lukeprog13y
Defining 'right' in terms of 'should' gets us nowhere; it just punts to another symbol. Thus, I don't yet know what you're trying to say in this comment. Could you taboo 'should' for me?
2Will_Sawin13y
Only through the use of koans. Consider the dialog in http://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles - could you explain what "If A, then B" means, tabooing "if/then", "therefore", etc.?

Here is another way: if a rational agent becomes aware that the statement "I should do X" is true, then it will either proceed to do X or proceed to realize that it cannot do X (at least for now).

ETA: Here is a simple Python function:

    def square(x):
        # Multiply x by itself and hand the result back to the caller.
        y = x * x
        return y

"return" is not just another symbol. It is not a gensym. It is functional. The act of returning and producing an output is completely separate from, and not reducible to, everything else that a subroutine can do. Rational agents use "should" the same way this subroutine uses "return": it controls their output.
3Vladimir_Nesov13y
But a better understanding of what "should" means helps, although it's true that you should do what you should do even if you have no idea what "should" means.
6Amanojack13y
How do I go about interpreting that statement if I have no idea what "should" means?
4Vladimir_Nesov13y
Use your shouldness-detector, even if it has no user-serviceable parts within. The shouldness-detector is that white sparkly sphere over there.
1lessdazed13y
I think it means something analogous to "you can staple even if you have no idea what 'kramdrukker' means". (I don't speak Afrikaans, but that's what a translator program just said "stapler" is in Afrikaans.)

I think "should" is a special case of a "can" sentence getting infected by the sentence's object (because the object is "should") to become a "should" sentence.

"You can hammer the nail." But should I? It's unclear.
"You can eat the fish." But should I? It's unclear.
"You can do what you should do." But should I? Yes - I definitely should, just because I can.

So "You can do what you should do" is equivalent to "You should do what you should do". In other words, I interpret Vladimir's statement as an instance of what we can generally say about "can" statements, of which "should" happens to be a special case in which there is infection from "should" to "can", such that it is more natural in English not to write "can" at all. This allows us to go from uncontroversial "can" statements to "should" statements, all without learning Afrikaans!

This feels like novel reasoning on my part (i.e., the whole "can"-being-infected bit) as to how Vladimir's statement is true, and I'd appreciate comments or a similarly reasoned source I might be partially remembering and repeating.
2[anonymous]13y
If these are equivalent, then the truth of the second statement should entail the truth of the first. But "You should do what you should do" is ostensibly a tautology, while "You can do what you should do" is not, and could be false. One out you might want to take is to declare "S should X" meaningful only when ability and circumstance allow S to do X - that is, when "S can X". But then you just have two clear tautologies, and declaring them equivalent is not suggestive of much at all.
0lessdazed13y
Decisive points. As you have shown them not to be equivalent, I would have done better to say: but if the latter statement is truly a tautology, that obviously doesn't help. If I then add your second edit - that by "should" I mean "provided one is able to" - I am at least less wrong... but can my argument avoid being wrong only by being vacuous? I think so.
-1Will_Sawin13y
If you don't know what "should" means, how do you decide what to do? This is another instance in which you can't argue morality into a rock.
0Will_Sawin13y
If knowing what "should" means helped with something, then knowledge of a definition could lead to real, actionable information. This seems, on the face of it, absurd. I think either "XYZ things are things that maximize utility" or "XYZ things are things that you should do" can count as a definition of XYZ, but not both - just as either "ABC things are red things" or "ABC things are round things" can count as a definition of ABC things, but not both. (Since if you knew both, you would learn that red things are round and round things are red.)

I was under the impression that the example of an unobserved tree falling in the woods is taken as a naturalized version of the Schrödinger's Cat experiment. So the question of whether it makes a sound is not necessarily about the definition of 'sound'.

0lukeprog13y
Nope.
0Dan_Moore13y
The Wikipedia article you linked has a "See also: Schrödinger's Cat" link.