Vladimir_Nesov comments on The ideas you're not ready to post - Less Wrong

Post author: JulianMorrison 19 April 2009 09:23PM (24 points)




Comment author: RichardKennaway 22 April 2009 11:01:52AM 12 points

There is a topic I have in mind that could require writing a rather large amount, and I don't want to do that unless there is some interest, rather than suddenly dumping a massive essay on LW without any prior context. The topic is control theory (the engineering discipline, not anything else those words might suggest). Living organisms are, I say (following Bill Powers, whom I've mentioned before), built of control systems, and any study of people that does not take that into account is unlikely to progress very far. Among the things I might write about are these:

  • Purposes and intentions are the set-points of control systems. This is not a metaphor or an analogy.

  • Perceptions do not determine actions; instead, actions determine perceptions. (If that seems either unexceptionable or obscure, try substituting "stimulus" for "perception" and "response" for "action".)

  • Control systems do not, in general, work by predicting what action will produce the intended perception. They need not make any predictions at all, nor contain any model of their environment. They require neither utility measures, nor Bayesian or any other form of inference. There are methods of designing control systems that use these concepts, but they are not inherent to the nature of control.

  • Inner conflict is, literally, a conflict between control systems that are trying to hold the same variable in two different states.

  • How control systems behave is not intuitively obvious, until one has studied control systems.

This is the only approach to the study of human nature I have encountered that does not appear to me to mistake what it looks like from the inside for the underlying mechanism.
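To give the flavour of the claims above, here is a minimal sketch in Python (my own illustration, not anything from Powers; the "leaky tank" environment and all the numbers are invented): a proportional negative-feedback controller holding a variable near a set-point with no model, prediction, or inference, plus a second function showing two such controllers in literal conflict over the same variable.

```python
# A proportional controller: output depends only on the current error
# (set-point minus perception). No model of the environment, no prediction.
def simulate(setpoint=10.0, gain=0.5, leak=0.1, steps=200):
    level = 0.0                          # the controlled variable
    for _ in range(steps):
        error = setpoint - level         # compare perception to reference
        action = gain * error            # act on the error alone
        level += action - leak * level   # environment: action plus a leak
    return level

# Two controllers with different references for the same variable:
# the variable settles between the references, and each controller
# experiences a persistent error it cannot remove -- literal conflict.
def conflict(ref_a=10.0, ref_b=0.0, gain=0.3, steps=200):
    level = 2.0
    for _ in range(steps):
        level += gain * (ref_a - level) + gain * (ref_b - level)
    return level

print(round(simulate(), 2))  # 8.33 -- near, but not at, the set-point
print(round(conflict(), 2))  # 5.0 -- midway between the two references
```

Note that even this one-line controller settles near but not exactly at its set-point (pure proportional control leaves a steady-state offset against the leak), which illustrates the last bullet: how control systems behave is not intuitively obvious until one works through them.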

What say you all? Vote this up or down if you want, but comments will be more useful to me.

Comment author: Vladimir_Nesov 22 April 2009 02:41:59PM 0 points

Control systems do not, in general, work by predicting what action will produce the intended perception. They need not make any predictions at all, nor contain any model of their environment. They require neither utility measures, nor Bayesian or any other form of inference. There are methods of designing control systems that use these concepts but they are not inherent to the nature of control.

But they should act according to a rigorous decision theory, even though they often don't. It seems an elementary enough statement, so I'm not sure what you are asserting.

Comment author: cousin_it 23 April 2009 09:47:21AM 1 point

"Should" statements cannot be logically derived from factual statements. Population evolution leads to evolutionarily stable strategies, not coherent decision theories.

Comment author: Vladimir_Nesov 23 April 2009 11:46:52AM 0 points

"Should" statements come from somewhere, somewhere in the world (I'm thinking about that in the context of something close to "The Meaning of Right"). Why do you mention evolution?

Comment author: cousin_it 23 April 2009 08:57:52PM 1 point

In that post Eliezer just explains in his usual long-winded manner that morality is our brain's morality instinct, not something more basic and deep. So your morality instinct tells you that agents should follow rigorous decision theories? Mine certainly doesn't. I feel much better in a world of quirky/imperfect/biased agents than in a world of strict optimizers. Is there a way to reconcile?

(I often write replies to your comments with a mild sense of wonder whether I can ever deconvert you from Eliezer's teachings, back into ordinary common sense. Just so you know.)

Comment author: Vladimir_Nesov 23 April 2009 09:05:28PM 0 points

To simplify one of the points a little: there are simple axioms that are easy to accept (in some form). Once you grant them, the structure of decision theory follows, forcing some conclusions you intuitively disbelieve. A step further, looking at the reasons the decision theory arrived at those conclusions may persuade you that you indeed should follow them, and that you were mistaken before. No hidden agenda figures into this process. Since it doesn't require interacting with anyone, it may in theory be wholly personal: you against math.
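To make the shape of that argument concrete, here is a toy sketch (my own illustration; the lotteries and numbers are invented): once the axioms force preferences into a utility function and beliefs into probabilities, which act to choose follows mechanically, whatever one's intuition says.

```python
# Expected-utility choice: with utilities and probabilities fixed,
# the "right" act is just the arg-max -- no further judgment enters.
lotteries = {
    "safe":   [(1.0, 50.0)],               # (probability, utility) pairs
    "gamble": [(0.5, 120.0), (0.5, 0.0)],
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

best = max(lotteries, key=lambda name: expected_utility(lotteries[name]))
print(best)  # "gamble": expected utility 60 beats the safe 50
```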

Comment author: cousin_it 23 April 2009 09:19:34PM 0 points

Yes, an agent with a well-defined utility function "should" act to maximize it with a rigorous decision theory. Well, I'm glad I'm not such an agent. I'm very glad my life isn't governed by a simple numerical parameter like money or number of offspring. Well, there is some such parameter, but its definition includes so many of my neurons as to be unusable in practice. Joy!

Comment author: Vladimir_Nesov 23 April 2009 09:38:39PM 0 points

Well, there is some such parameter, but its definition includes so many of my neurons as to be unusable in practice. Joy!

No joy in that. We are ignorant and helpless in our attempts to find this answer accurately. But we can still try: we can still infer some answers, find the cases where our intuitive judgment systematically goes wrong, and make it better!

Comment author: ArisKatsaris 14 April 2011 03:04:20PM 1 point

What if our mind has embedded in its utility function the desire not to be more accurately aware of it?

What if some people don't prefer to be more self-aware than they currently are, or their true preferences indeed lie in the direction of less self-awareness?

Comment author: JGWeissman 15 April 2011 03:24:32AM 3 points

Then, for instrumental reasons, it would be right to be as self-aware as we need to be during the crunch time when we are working to produce (or support the production of) a non-sentient optimizer, or at least another sort of mind that doesn't have such self-crippling preferences. That mind could be aware on our behalf, and reduce or limit our own self-awareness if that actually turns out to be the right thing to do.

Comment author: wedrifid 14 April 2011 04:57:14PM 2 points

What if our mind has embedded in its utility function the desire not to be more accurately aware of it?

Careful. Some people get offended if you say things like that. Aversion to publicly admitting that they prefer not to be aware is built in as part of the same preference.

Comment author: Vladimir_Nesov 14 April 2011 03:36:08PM 1 point

Then how would you ever know? Rational ignorance is really hard.