From Costanza's original thread (entire text):
This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant.
Meta:
- How often should these be made? I think one every three months is the correct frequency.
- Costanza made the original thread, but I am OpenThreadGuy. I am therefore not only entitled but required to post this in his stead. But I got his permission anyway.
Of course it's about both. You can define labels in any way you like. In the end, your definition better be useful for communicating concepts with other people, or it's not a good definition.
Let's define "yummy". I put food in my mouth. Taste buds fire, neural impulses propagate from neuron to neuron, and eventually my mind evaluates how yummy it is. Similar events happen for you. Your taste buds fire, your neural impulses propagate, and your mind evaluates how yummy it is. Your taste buds are not mine, and your neural networks are not mine, so your response and my response are not identical. If I make a definition of "yummy" that entails that what you find yummy is not in fact yummy, I've created a definition that is useless for dealing with the reality of what you find yummy.
From my inside view of yummy, of course you're just wrong if you think root beer isn't yummy - I taste root beer, and it is yummy. But being a conceptual creature, I have more than the inside view, I have an outside view as well, of you, and him, and her, and ultimately of me too. So when I talk about yummy with other people, I recognize that their inside view is not identical to mine, and so use a definition based on the outside view, so that we can actually be talking about the same thing, instead of throwing our differing inside views at each other.
Discussion with the inside view: "Let's get root beer." "What? Root beer sucks!" "Root beer is yummy!" "Is not!" "Is too!"
Discussion with the outside view: "Let's get root beer." "What? Root beer sucks!" "You don't find root beer yummy?" "No. Blech." "OK, I'm getting a root beer." "And I pick Pepsi."
If you've tied yourself up in conceptual knots, and concluded that root beer really isn't yummy for me, even though my yummy detector fires whenever I have root beer, you're just confused and not talking about reality.
This is the problem. You've divorced your definition from the relevant part of reality - the speaker's terminal values - and somehow twisted it around to where what he *should* do is at odds with his terminal values. This definition is not useful for discussing moral issues with the given speaker. He's a machine that maximizes his terminal values. If his algorithms are functioning properly, he'll disregard your definition as irrelevant to achieving his ends. Whether from the inside view of morality for that speaker, or his outside view, you're just wrong. And you're also wrong from any outside view that accurately models what terminal values people actually have.
Rational discussions of morality start with the observation that people have differing terminal values. Our terminal values are our ultimate biases. Recognizing that my biases are mine, and not identical to yours, is the first step away from the usual useless babble in moral philosophy.