Karl_Smith comments on Open Thread: March 2010 - Less Wrong

5 Post author: AdeleneDawner 01 March 2010 09:25AM


Comment author: Karl_Smith 01 March 2010 06:17:03PM *  0 points [-]

Thoughts about intelligence.

My hope is that some altruistic person will read this comment, see where I am wrong and point me to the literature I need to read. Thanks in advance.

I've been thinking about the problem of general intelligence. Before going too deep, I wanted to see if I had a handle on what intelligence is, period.

It seems to me that the people sitting in the library with me now are intelligent and that my pencil is not. So what is the minimum my pencil would have to do before I suddenly thought that it was intelligent?

Moving alone doesn't count. If I drop the pencil it will fall towards the table. You could say that I caused the pencil to move, but I am not sure this isn't begging the question.

Now suppose that the first time I drop the pencil, it falls to the floor. Now I go to drop it a second time, but I do it over the table. However, the pencil flies around the table and hits the same spot on the floor.

Now it's got my attention. But maybe it's something about the table. So I drop the pencil but put my hand in the way. Still the pencil goes around my hand.

I put my foot over the spot on the floor and drop the pencil. It flies around my foot and then into the crevice between my foot and the floor and gets stuck. As soon as I lift my foot the pencil goes to the same spot.

I believe I should now conclude that my pencil is intelligent. This has something to do with the following facts.

1) The pencil kept going to the same spot, as if it had a "goal".

2) The pencil was able to respond to "obstacles" in ways not predicted by my original simple theory of pencil behavior.

I believe that I would say the pencil is more intelligent if it could pass through more "complicated" obstacles.

Here are some of my basic problems:

1) What is a "goal", beyond what my intuition says?

2) Similarly, what is an "obstacle"?

3) And what is "complicated"?

I have some sense that "obstacle" is related to reducing the probability that the goal will be reached.

I have some sense that "complicated" has to do with the degree to which the probability is reduced.

Thoughts? Suggestions for readings?

Comment author: RichardKennaway 01 March 2010 10:27:10PM *  2 points [-]

You are talking about control systems.

A control system has two inputs (called its "perception" and "reference") and one output. The perception is a signal coming from the environment, and the output is a signal that has an effect on the environment. For artificial control systems, the reference is typically set by a human operator; for living systems it is typically set within the organism.

What makes it a control system is that firstly, the output has an effect, via the environment, on the perception, and secondly, the feedback loop thus established is such as to cause the perception to remain close to the reference, in spite of all other influences from the environment on that perception.
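The loop described here can be sketched in a few lines of Python. Everything below (the function name, the gain, the numbers) is my own illustration, not anything from the thread: a simple proportional controller whose output acts back on the environment so that the perception stays near the reference, even with a constant outside disturbance.

```python
# Minimal control loop: a proportional controller keeps its "perception"
# (the sensed value) near its "reference", despite another influence
# ("disturbance") from the environment acting on that same perception.

def run_control_loop(reference, disturbance, gain=0.5, steps=50):
    perception = 0.0  # signal coming in from the environment
    for _ in range(steps):
        error = reference - perception     # compare perception to reference
        output = gain * error              # controller's action on the environment
        # Environment: the perception is pushed by both the controller's
        # output and the external disturbance.
        perception += output + disturbance
    return perception

# With no disturbance the perception settles at the reference; with a
# disturbance it settles nearby (a pure proportional controller leaves
# a steady offset of disturbance/gain).
print(run_control_loop(reference=10.0, disturbance=0.0))
print(run_control_loop(reference=10.0, disturbance=1.0))
```

The point of the sketch is the feedback structure, not the particular arithmetic: the output depends on the perception, and the perception depends on the output.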

The answers to your questions are:

  1. A "goal" is the reference input of a control system.

  2. An "obstacle" is something which, in the absence of the output of the control system, would cause its perception to deviate from its reference.

  3. "Complicated" means "I don't (yet) understand this."
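The first two definitions can be made concrete with a toy sketch (all names and numbers below are invented for illustration): the "goal" is the reference, and the "obstacle" is a disturbance that appears partway through and would, absent the controller's output, push the perception away from the reference. An integral term is included so the obstacle gets cancelled completely.

```python
# "Goal" = reference; "obstacle" = a disturbance switched on at step
# obstacle_at that would deflect the perception if the controller did
# nothing. A proportional-plus-integral controller cancels it.

def track(reference, obstacle_at, obstacle_size, steps=200, kp=0.4, ki=0.1):
    perception, integral = 0.0, 0.0
    for t in range(steps):
        disturbance = obstacle_size if t >= obstacle_at else 0.0
        error = reference - perception
        integral += error                  # accumulated error
        output = kp * error + ki * integral
        perception += output + disturbance
    return perception

# Despite the obstacle appearing halfway through the run, the perception
# ends up back at the reference: the loop "goes around" the obstacle.
print(track(reference=5.0, obstacle_at=100, obstacle_size=2.0))
```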

Suggestions for readings.

And a thought: "Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they. But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings with the card. Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet's lips directly. With the filings the path is fixed; whether it reaches the end depends on accidents. With the lover it is the end which is fixed, the path may be modified indefinitely."

-- William James, "The Principles of Psychology"

Comment author: Karl_Smith 02 March 2010 12:30:16AM *  0 points [-]

Richard, do you believe that the quest for FAI could be framed as a special case of the quest for the Ideal Ultimate Control System (IUCS)? That is, intelligence in and of itself is not what we are after, but control. Perhaps FAI is the only route to IUCS, but perhaps not.

Note: Originally I wrote Friendly Ultimate Control System but the acronym was unfortunate.

Comment author: markrkrebs 02 March 2010 01:05:49AM 2 points [-]

The neurology of human brains and the architecture of modern control systems are remarkably similar, with layers of feedback and adaptive modelling of the problem space, in addition to the usual dogged iron-filing approach to goal seeking. I have worked on control systems which, as they add (even minor) complexity at higher layers of abstraction, take on eerie behaviors that seem intelligent within their own small fields of expertise. I don't personally think we'll find anything different or ineffable or more, when we finally understand intelligence, than just layers of control systems.
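The layered picture described above might be sketched, very roughly, like this (my own toy example, not any actual system from the comment): an outer controller pursues a high-level position goal by setting the reference of an inner, faster velocity controller.

```python
# Two-layer hierarchy of control loops: the outer layer's output is the
# inner layer's reference. Gains and numbers are illustrative only.

def hierarchical_step(position, velocity, target, k_outer=0.3, k_inner=0.5):
    # Outer layer: perceives position, outputs a desired velocity
    # (which becomes the inner layer's reference).
    desired_velocity = k_outer * (target - position)
    # Inner layer: perceives velocity, outputs an acceleration.
    acceleration = k_inner * (desired_velocity - velocity)
    velocity += acceleration
    position += velocity
    return position, velocity

position, velocity = 0.0, 0.0
for _ in range(100):
    position, velocity = hierarchical_step(position, velocity, target=10.0)
print(round(position, 3))
```

Neither layer "knows" about the other's internals; each just keeps its own perception near its own reference, and goal-directed behavior falls out of the stack.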

Consciousness, I hope, is something more and different in kind, and maybe that's what you were really after in the original post, but it's a subjective beast. OTOH, if it is "mere" complex behavior we're after, something measurable and Turing-testable, then intelligence is about to be within our programming grasp any time now.

I LOVE the Romeo reference but a modern piece of software would find its way around the obstacle so quickly as to make my dog look dumb, and maybe Romeo, too.

Comment author: Karl_Smith 02 March 2010 02:35:51AM *  0 points [-]

I had conceived of something like the Turing test but for intelligence period, not just general intelligence.

I wonder if general intelligence is about the domains under which a control system can perform.

I also wonder whether "minds" is too limiting a criterion for the goals of FAI.

Perhaps the goal could be stated as an IUCS. However, we don't know how to build an IUCS. So perhaps we can build a control system whose reference point is an IUCS. But we don't know how to do that either, so we build a control system whose reference point is a control system whose reference point . . . until we get to something we can build. Then we press start.

Maybe this is a more general formulation?

Comment author: RichardKennaway 02 March 2010 08:45:54AM *  0 points [-]

I don't want to tout control systems as The Insight that will create AGI in twenty years, but if I were working on AGI, hierarchical control systems organised as described by Bill Powers (see earlier references) are where I'd start from, not Bayesian reasoning[1], compression[2], or trying to speed up a theoretically optimal but totally impractical algorithm[3]. And given the record of toy demos followed by the never-fulfilled words "now we just have to scale it up", if I were working on AGI I wouldn't bother mentioning it until I had a demo of a level that would scare Eliezer.

Friendliness is a separate concern, orthogonal to the question of the best technological-mathematical basis for building artificial minds.

1. LessWrong, passim.

2. Marcus Hutter's Compression Prize.

3. AIXItl and the Gödel machine.