army1987 comments on The Trouble With "Good" - Less Wrong

83 Post author: Yvain 17 April 2009 02:07AM


Comment author: thomblake 17 April 2009 02:14:55AM 3 points

Emotivism has its problems. Notably, you can't use 'yay' and 'boo' exclamations in arguments, and they can't be reasons.

"Should I eat this apple?" Becomes simply "how do I feel about eating this apple" (or otherwise it's simply meaningless). But really there are considerations that go into the answer other than mere feelings (for example, is the apple poisonous?).

Because utilitarianism has both a theory of right action and a theory of value, I don't think it's compatible with emotivism. But I haven't read much of the literature on this particular question, as I don't currently read much about utilitarianism.

Comment author: Yvain 17 April 2009 02:25:08AM 9 points

Well, what's interesting about that comment is that our beliefs about our own justifications and actions are usually educated guesses and not privileged knowledge. Or consider Eliezer's post about the guy who said he didn't respect Eliezer's ideas because Eliezer didn't have a Ph.D., and then when Eliezer found a Ph.D. who agreed with him, the guy didn't believe him either.

My guess would be that we see the apple is poisonous and "downvote" it heavily. Then someone asks what we think of the apple, we note the downvotes, and we say it's bad. Then the person asks why we think it's bad, and our unconscious supplies whatever rationale it thinks is most plausible and feeds it to us. Which is probably that it's poisonous.

See also: footnote 1

Comment author: pjeby 17 April 2009 02:41:09AM 16 points

Then the person asks why we think it's bad, and our unconscious supplies whatever rationale it thinks is most plausible and feeds it to us.

Don't blame the unconscious. It only makes up explanations when you ask for them.

My first lesson in this was when I was 17 years old, at my first programming job in the USA. I hadn't been working there very long, maybe only a week or two, and I said something or other that I hadn't thought through -- essentially making up an explanation.

The boss reprimanded me, and told me of something he called "Counter man syndrome", wherein a person behind a counter comes to believe that they know things they don't know, because, after all, they're the person behind the counter. So they can't just answer a question with "I don't know"... and thus they make something up, without really paying attention to the fact that they're making it up. Pretty soon, they don't know the difference between the facts and their own bullshit.

From then on, I never believed my own made-up explanations... at least not in the field of computers. Instead, I considered them as hypotheses.

So, it's not only a learnable skill, it can be learned quickly -- at least by a 17-year-old. ;-)

Comment author: vizikahn 17 April 2009 10:00:59AM 3 points

When I had a job behind a counter, one of the rules was: "We don't sell 'I don't know'". We were encouraged to look things up as diligently as possible, but it's easy to see how this turns into making things up. I'm going to use the term "counter man syndrome" from now on.

Comment author: Yvain 17 April 2009 03:31:28AM 0 points

I think we're talking about subtly different things here. You're talking about explanations of external events; I'm talking about explanations of your own mental states, i.e. why am I sad right now.

I don't like blaming the "unconscious" or even using the word - it sounds too Freudian - but there aren't any other good terms that mean the same thing.

Comment author: pjeby 17 April 2009 06:16:37AM 3 points

I think we're talking about subtly different things here. You're talking about explanations of external events; I'm talking about explanations of your own mental states, i.e. why am I sad right now.

I'm pointing out that there is actually no difference between the two. Your "explainer" (I call it the Speculator, myself) just makes stuff up with no concern for the truth. All it cares about is plausibility and reflecting well on your self-image.

I don't see the Speculator as entirely unconscious, though. In fact, most of us tend to identify with the Speculator and view its thoughts as our own. Or, I suppose, you might say that the Speculator is a tool that we can choose to think with... and we tend to reach for it by default.

I don't like blaming the "unconscious" or even using the word - it sounds too Freudian - but there aren't any other good terms that mean the same thing.

Sometimes I refer to the other-than-conscious, or to non-conscious processes. But finer distinctions are useful at times, so I also refer to the Savant (non-verbal, sensory-oriented, single-stepping, abstraction/negation-free) and the Speculator (verbal, projecting, abstracting, etc.)

I suppose it's open to question whether the Speculator is really "other-than-conscious", in that it sounds like a conscious entity, and we consciously tend to identify with it, in the absence of e.g. meditative or contemplative training.

Comment author: SoullessAutomaton 17 April 2009 11:51:09AM 0 points

I think we're talking about subtly different things here. You're talking about explanations of external events; I'm talking about explanations of your own mental states, i.e. why am I sad right now.

What makes you think the mental systems that construct these two kinds of explanation would be different, especially given the research showing that we have dedicated mental systems devoted to rationalizing observed events?

Comment author: orthonormal 17 April 2009 04:45:03PM 3 points

Emotivism has its problems. Notably, you can't use 'yay' and 'boo' exclamations in arguments, and they can't be reasons.

Right. I think that most people hold the belief that their system of valuations is internally consistent (i.e. that you can't have two different descriptions of the same thing that are both complete and accurate yet assign different valences), which requires them (in theory) to confront moral arguments.

I think of basic moral valuations as being one other facet of human perception: the complicated process by which we interpret sensory data to get a mental representation of objects, persons, actions, etc. It seems that one of the things our mental representation often includes is a little XML tag indicating moral valuation.

The problem is that these valuations don't generally form a coherent system, which is why intelligent people throughout the ages have been trying to convince themselves to bite certain bullets. Your conscious idea of the consistent moral landscape that lies behind these bits of 'data' inevitably conflicts with your immediate reactions at some point.

Comment author: conchis 17 April 2009 05:13:43PM 1 point

I may be misinterpreting, but I wonder whether Yvain's use of the word "emotivism" here is leading people astray. He doesn't seem to be committing himself to emotivism as a metaethical theory of what it means to say something is good, so much as to an empirical claim about most people's moral psychology (that is, about what's going on in their brains when they say things like "X is good"). The empirical claim and the normative commitment to utilitarianism don't seem incompatible. (And the empirical claim is one that seems to be backed up by recent work in moral psychology.)