Vladimir_Nesov comments on Do Humans Want Things? - Less Wrong

Post author: lukeprog, 04 August 2011 05:00AM

Comment author: Vladimir_Nesov 11 August 2011 07:56:10PM 2 points

Ah, but it can't encode value for the objective intensities of stimuli, because the brain doesn't have that information.

The brain has (some measure of reference/access to) that information, just not in that particular form. And if it has (reference to) that information, you can't conclude that motivation doesn't refer to it. At most you can conclude that motivation doesn't refer to it through that particular form of representation, the one that lacks the information; and it would be very surprising if motivation were compartmentalized in that way.

Comment author: lukeprog 11 August 2011 08:03:25PM 0 points

Right. I guess I'm struggling for a concise way to say what I'm trying to say, and hoping you'll interpret me correctly based on the long paragraphs I've written explaining what I mean by these shorter sentences. Maybe something like:

"Whaddyaknow, we discovered a mechanism that actually encodes value for stimuli with neuron firing rates! Ah, but this particular mechanism can't encode value for the objective intensities of stimuli, because this mechanism discards that information at the transducer. So that constrains our theories about the motivation of human behavior."

Comment author: Vladimir_Nesov 11 August 2011 08:16:56PM 1 point

we discovered a mechanism that actually encodes value for stimuli with neuron firing rates!

Also, this doesn't sound right. Why is that behavioral pattern "value"? Maybe it should be edited out of the system, like pain, or reversed, or modified in some complicated way.

Comment author: Vladimir_Nesov 11 August 2011 08:10:36PM 1 point

Doesn't really help. The problem is that (normative) motivation is the whole thing; particular (unusual) decisions can be formed by any component, so it's unclear how to rule anything out on the basis of the properties of particular better-understood components.

Behavior is easier to analyze: you can see which factors contribute how much, and in this sense you can say that particular classes of behavior are determined mostly by some specific mechanism that lacks certain data, so that behavior is independent of that data. But such conclusions won't generalize to normative motivation, because prevailing patterns of behavior might be suboptimal, and it's possible to improve them (by exercising the less-prevalent, less-understood modes of behavior), making them depend on things they presently don't depend on.

Comment author: lukeprog 11 August 2011 08:20:07PM 1 point

What do you mean by 'normative motivation'?

Comment author: Vladimir_Nesov 11 August 2011 08:23:48PM 1 point

Considerations that should motivate you. What do you mean by "motivation"?

Comment author: lukeprog 11 August 2011 10:46:36PM 4 points

Uh oh. How did 'should' sneak its way into our discussion? I'm just talking about positive accounts of human motivation.

Until data give us a clearer picture of what we're talking about, 'motivation' is whatever drives (apparently) goal-seeking behavior.

Comment author: Vladimir_Nesov 11 August 2011 11:19:43PM 0 points

Uh oh. How did 'should' sneak its way into our discussion? I'm just talking about positive accounts of human motivation.

I guess the objection I have is to calling the behavioral summary "motivation", a term that has normative connotations (similarly, "value", "desire", "wants", etc.). Asking "Do we really want X?" (as in, does a positive account of some notion of "wanting" say that we "want" X, to the best of our scientific knowledge) sounds too similar to asking "Should we pursue X?" or even "Can we pursue X?", but it is a largely unrelated question with similarly unrelated answers.

Comment author: lukeprog 12 August 2011 12:45:27AM 5 points

I'm using these terms the way they are standardly used in the literature. If you object to the common usage, perhaps you could just read my articles with the assumption that I'm using these words the way neuroscientists and psychologists do, and then state your concerns about the standard language in the comments? I can't rewrite my articles for each reader who has their own peculiar language preferences...

Comment author: Vladimir_Nesov 13 August 2011 12:30:51PM 0 points

The real question is: do you agree with my characterization of the intended meaning of these intentionality-scented words (as used in this article in particular, say) as being mostly unrelated to normativity, that is, to FAI-grade machine ethics? It is unclear to me whether you agree or not. If there is some connection, what is it? It is also unclear to me how confusing or clear this question appears to other readers.

(On the other hand, who or what bears the blame for my (or others') peculiar confusions is uninteresting.)

Comment author: lukeprog 14 August 2011 08:35:54PM 1 point

I don't recall bringing up the issue of blame. All I'm saying is that I don't have time to write a separate version of each post to accommodate each person's language preferences, so I'm usually going to use the standard language used by researchers in the field I'm discussing.

Words like 'motivation', 'value', 'desire', and 'want' don't have normative connotations in my head when I'm discussing them in the context of descriptivist neuroscience. The connotations in your brain may vary. I'm trying to discuss merely descriptive issues; I intend to start using descriptive facts to solve normative problems later. For now, I want to focus on getting a correct descriptive understanding of the system that causes humans to do what they do before applying that knowledge to normative questions about what humans should do or what a Friendly AI should do.

Does that make sense?