RichardKennaway comments on The ideas you're not ready to post - Less Wrong

Post author: JulianMorrison, 19 April 2009 09:23PM


Comment author: RichardKennaway, 22 April 2009 11:01:52AM, 12 points

There is a topic I have in mind that could require writing a rather large amount, and I don't want to do that unless there is some interest, rather than suddenly dumping a massive essay on LW without any prior context. The topic is control theory (the engineering discipline, not anything else those words might suggest). Living organisms are, I say (following Bill Powers, whom I've mentioned before), built of control systems, and any study of people that does not take that into account is unlikely to progress very far. Among the things I might write about are these:

  • Purposes and intentions are the set-points of control systems. This is not a metaphor or an analogy.

  • Perceptions do not determine actions; instead, actions determine perceptions. (If that seems either unexceptionable or obscure, try substituting "stimulus" for "perception" and "response" for "action".)

  • Control systems do not, in general, work by predicting what action will produce the intended perception. They need not make any predictions at all, nor contain any model of their environment. They require neither utility measures, nor Bayesian or any other form of inference. There are methods of designing control systems that use these concepts but they are not inherent to the nature of control.

  • Inner conflict is, literally, a conflict between control systems that are trying to hold the same variable in two different states.

  • How control systems behave is not intuitively obvious, until one has studied control systems.

This is the only approach to the study of human nature I have encountered that does not appear to me to mistake what it looks like from the inside for the underlying mechanism.
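To make the third point concrete, here is a minimal sketch (all numbers are illustrative, and this is my own toy example rather than anything from Powers): a bare proportional controller that holds a perceived variable near its set-point against a steady disturbance, using only the current error — no model of its environment, no prediction, no inference.

```python
# A proportional controller holding a variable at a set-point. It uses only
# the current error: no model of the environment, no prediction, no inference.
# All numbers here are illustrative.

SET_POINT = 10.0
GAIN = 0.5
DISTURBANCE = -0.1   # constant push the controller knows nothing about

value = 0.0          # the controlled (perceived) variable
for _ in range(100):
    error = SET_POINT - value
    action = GAIN * error          # output depends only on the present error
    value += action + DISTURBANCE  # environment: action plus disturbance

# value settles near 9.8: the set-point minus the small steady-state offset
# (DISTURBANCE / GAIN = -0.2) that a purely proportional controller leaves.
```

The controller never represents the disturbance anywhere; it simply keeps acting on the error, and the variable stays near the set-point regardless.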

What say you all? Vote this up or down if you want, but comments will be more useful to me.

Comment author: rhollerith, 22 April 2009 12:40:21PM, 2 points

Heck yeah, I want to see it. I suggest adopting Eliezer's modus operandi of using a lot of words. And every time you see something in your draft post that might need explanation, post on that topic first.

Comment author: pjeby, 22 April 2009 02:46:44PM, 1 point

I agree with some of your points -- well, all of them if we're discussing control systems in general -- but a couple of them don't quite apply to brains, as the cortical systems of brains in general (not just in humans) do use predictive models in order to implement both perception and behavior. Humans at least can also run those models forward and backward for planning and behavior generation.

The other point, about actions determining perceptions, is "sorta" true of brains, in that eye saccades are a good example of the concept. However, not all perception is like that; frogs, for example, don't move their eyes, but rely on external object movement for most of their sight.

So I think it'd be more accurate to say that where brains and nervous systems are concerned, there's a continuous feedback loop between actions, perceptions, and models. That is, models drive actions; actions generate raw data that's filtered through a model to become a perception; and that perception may update one or more models.
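A toy version of that loop, just to pin down the wiring (every name and update rule here is an illustrative assumption, not a claim about brains): the model drives an action, the action changes the world, raw data from the world is filtered through the model into a perception, and the perception updates the model.

```python
# Toy sketch of the loop: model -> action -> world -> raw data ->
# perception -> model update. All names and constants are illustrative.

target = 5.0        # state the agent intends the world to be in
world_state = 0.0   # actual (hidden) state of the environment
model = 2.0         # the agent's internal estimate of world_state

for _ in range(50):
    action = 0.5 * (target - model)                # the model drives the action
    world_state += action                          # the action changes the world
    raw_data = world_state                         # raw data (noise-free here)
    perception = model + 0.8 * (raw_data - model)  # data filtered by the model
    model = perception                             # the perception updates the model

# Both the internal model and the world converge on the target state.
```

Even this crude loop converges: the model tracks the world while the world is being pushed toward the target.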

Apart from that though, I'd say that your other three points apply to people and animals quite well.

Comment author: cousin_it, 22 April 2009 11:09:26AM, 1 point

I'd love to see this as a top-level post. Here's additional material for you: online demos of perceptual control theory, Braitenberg vehicles.
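For a taste of the Braitenberg idea (my own sketch, with illustrative parameters): a vehicle of type 2b has two light sensors cross-wired to two wheel motors, so the better-lit side speeds up the opposite wheel and the vehicle steers toward the light, with no model or planning anywhere.

```python
import math

# A Braitenberg "vehicle 2b": two light sensors cross-wired to two wheel
# motors, so the vehicle turns toward a light source. Parameters are
# illustrative.

def sense(x, y, heading, light, offset):
    """Light intensity at a sensor mounted one unit ahead of the vehicle,
    offset radians to the side of its heading."""
    sx = x + math.cos(heading + offset)
    sy = y + math.sin(heading + offset)
    d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
    return 1.0 / (1.0 + d2)

light = (5.0, 5.0)
x, y, heading = 0.0, 0.0, 0.0
for _ in range(400):
    left = sense(x, y, heading, light, +0.3)
    right = sense(x, y, heading, light, -0.3)
    v_left, v_right = right, left            # crossed connections
    heading += 0.5 * (v_right - v_left)      # turn toward the brighter side
    speed = 0.05 * (v_left + v_right)
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)

# The vehicle ends up closer to the light than where it started.
```

Two wires and two sensors produce behaviour that looks purposeful from the outside, which is rather the point.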

Comment author: RichardKennaway, 22 April 2009 09:47:08PM, 0 points

I know the PCT site :-) It was Bill Powers' first book that introduced me to PCT. Have you tried the demos on that site yourself?

Comment author: cousin_it, 23 April 2009 09:41:55AM, 0 points

Yes, I went through all of them several years ago. Like evolutionary psychology, the approach seems to be mostly correct descriptively, even obvious, but hard to apply to produce actual change. (Of course, utility-function-based approaches are much worse.)

Comment author: Vladimir_Nesov, 22 April 2009 02:41:59PM, 0 points

Control systems do not, in general, work by predicting what action will produce the intended perception. They need not make any predictions at all, nor contain any model of their environment. They require neither utility measures, nor Bayesian or any other form of inference. There are methods of designing control systems that use these concepts but they are not inherent to the nature of control.

But they should act according to a rigorous decision theory, even though they often don't. It seems an elementary enough statement, so I'm not sure what you are asserting.

Comment author: cousin_it, 23 April 2009 09:47:21AM, 1 point

"Should" statements cannot be logically derived from factual statements. Population evolution leads to evolutionarily stable strategies, not coherent decision theories.

Comment author: Vladimir_Nesov, 23 April 2009 11:46:52AM, 0 points

"Should" statements come from somewhere, somewhere in the world (I'm thinking about that in the context of something close to "The Meaning of Right"). Why do you mention evolution?

Comment author: cousin_it, 23 April 2009 08:57:52PM, 1 point

In that post Eliezer just explains in his usual long-winded manner that morality is our brain's morality instinct, not something more basic and deep. So your morality instinct tells you that agents should follow rigorous decision theories? Mine certainly doesn't. I feel much better in a world of quirky/imperfect/biased agents than in a world of strict optimizers. Is there a way to reconcile?

(I often write replies to your comments with a mild sense of wonder whether I can ever deconvert you from Eliezer's teachings, back into ordinary common sense. Just so you know.)

Comment author: Vladimir_Nesov, 23 April 2009 09:05:28PM, 0 points

To simplify one of the points a little: there are simple axioms that are easy to accept (in some form). Once you grant them, the structure of decision theory follows, forcing some conclusions you intuitively disbelieve. A step further, looking at the reasons the decision theory arrived at those conclusions may persuade you that you indeed should follow them, that you were mistaken before. No hidden agenda figures into this process; since it doesn't require interacting with anyone, it may in principle be wholly personal, you against math.

Comment author: cousin_it, 23 April 2009 09:19:34PM, 0 points

Yes, an agent with a well-defined utility function "should" act to maximize it with a rigorous decision theory. Well, I'm glad I'm not such an agent. I'm very glad my life isn't governed by a simple numerical parameter like money or number of offspring. Well, there is some such parameter, but its definition includes so many of my neurons as to be unusable in practice. Joy!

Comment author: Vladimir_Nesov, 23 April 2009 09:38:39PM, 0 points

Well, there is some such parameter, but its definition includes so many of my neurons as to be unusable in practice. Joy!

No joy in that. We are ignorant and helpless in our attempts to find this answer accurately. But we can still try: we can still infer some answers, find the cases where our intuitive judgment systematically goes wrong, and make it better!

Comment author: ArisKatsaris, 14 April 2011 03:04:20PM, 1 point

What if our mind has embedded in its utility function the desire not to be more accurately aware of it?

What if some people don't prefer to be more self-aware than they currently are, or their true preferences indeed lie in the direction of less self-awareness?

Comment author: JGWeissman, 15 April 2011 03:24:32AM, 3 points

Then it would be right, for instrumental reasons, to be as self-aware as we need to be during the crunch time in which we are working to produce (or support the production of) a non-sentient optimizer (or at least another sort of mind that doesn't have such self-crippling preferences), which could be aware on our behalf and reduce or limit our own self-awareness if that actually turns out to be the right thing to do.

Comment author: wedrifid, 14 April 2011 04:57:14PM, 2 points

What if our mind has embedded in its utility function the desire not to be more accurately aware of it?

Careful. Some people get offended if you say things like that. Aversion to publicly admitting that they prefer not to be aware is built in as part of the same preference.

Comment author: Vladimir_Nesov, 14 April 2011 03:36:08PM, 1 point

Then how would you ever know? Rational ignorance is really hard.

Comment author: Daniel_Burfoot, 22 April 2009 01:17:29PM, 0 points

I don't necessarily believe you, but I would be happy to read what you write :-) I would also be happy to learn more about control theory. To comment further would require me to touch on unmentionable subjects.

Comment author: JulianMorrison, 22 April 2009 01:17:37PM, 0 points

It sounds like you want to write a book! But a post would be much appreciated.

Comment author: RichardKennaway, 22 April 2009 09:49:00PM, 1 point

There are several books already on the particular take on control theory that I intend to write about, so I'm just thinking in terms of blog posts, and keeping them relevant to the mission of LW. I've just realised I have a shortage of evenings for the rest of this week, so it may take some days before I can take a run at it.