Karl_Smith comments on Open Thread: March 2010 - Less Wrong

5 Post author: AdeleneDawner 01 March 2010 09:25AM


Comment author: Karl_Smith 02 March 2010 12:30:16AM *  0 points [-]

Richard, do you believe that the quest for FAI could be framed as a special case of the quest for the Ideal Ultimate Control System (IUCS)? That is, intelligence in and of itself is not what we are after; control is. Perhaps FAI is the only route to an IUCS, but perhaps not.

Note: Originally I wrote Friendly Ultimate Control System but the acronym was unfortunate.

Comment author: markrkrebs 02 March 2010 01:05:49AM 2 points [-]

The neurology of human brains and the architecture of modern control systems are remarkably similar, with layers of feedback and adaptive modelling of the problem space, in addition to the usual dogged iron-filing approach to goal seeking. I have worked on control systems which, as they add (even minor) complexity at higher layers of abstraction, take on eerie behaviors that seem intelligent within their own small fields of expertise. I don't personally think we'll find anything different, or ineffable, or more, when we finally understand intelligence, than just layers of control systems.
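The layered architecture described above can be sketched in a few lines. This is a minimal, illustrative simulation (all names, gains, and the two-layer depth are my own assumptions, not anything from the thread): an upper loop controls position by setting the velocity reference that a lower loop tracks, which is the basic pattern of a hierarchical (PCT-style) controller.

```python
# Two-layer hierarchical controller: the upper loop's output is the
# lower loop's reference signal. Gains and time step are illustrative.

def simulate(steps=200, dt=0.05, target_pos=10.0):
    pos, vel = 0.0, 0.0
    K_pos, K_vel = 1.0, 4.0  # loop gains (chosen for critical damping)
    for _ in range(steps):
        # Upper layer: position error -> velocity reference
        vel_ref = K_pos * (target_pos - pos)
        # Lower layer: velocity error -> acceleration (actuator output)
        accel = K_vel * (vel_ref - vel)
        vel += accel * dt
        pos += vel * dt
    return pos

print(round(simulate(), 2))  # → 10.0
```

Neither layer "knows" the plant dynamics; each just drives its own error toward zero, and the stack as a whole exhibits the goal-seeking behavior.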

Consciousness, I hope, is something more and different in kind, and maybe that's what you were really after in the original post, but it's a subjective beast. OTOH, if it is "mere" complex behavior we're after, something measurable and Turing-testable, then intelligence is about to be within our programming grasp any time now.

I LOVE the Romeo reference but a modern piece of software would find its way around the obstacle so quickly as to make my dog look dumb, and maybe Romeo, too.

Comment author: Karl_Smith 02 March 2010 02:35:51AM *  0 points [-]

I had conceived of something like the Turing test, but for intelligence period, not just general intelligence.

I wonder if general intelligence is about the domains under which a control system can perform.

I also wonder whether "minds" is too limiting a criterion for the goals of FAI.

Perhaps the goal could be stated as an IUCS. However, we don't know how to build an IUCS. So perhaps we can build a control system whose reference point is the IUCS. But we don't know how to build that either, so we build a control system whose reference point is a control system whose reference point . . . until we get to something we can build. Then we press start.
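The regress described above can be made concrete as a cascade: each layer's output serves as the reference signal for the layer below, and only the bottom layer acts directly. This is a hypothetical sketch (the function names, gains, and three-layer depth are all illustrative choices, not part of the proposal):

```python
# Cascade of controllers: each layer's output becomes the next
# layer's reference; the bottom layer's output drives the actuator.

def make_layer(gain):
    """Return a controller mapping (reference, perception) -> output."""
    return lambda ref, perc: gain * (ref - perc)

def cascade(layers, top_ref, perceptions):
    """Thread the reference signal down through the stack."""
    ref = top_ref
    for layer, perc in zip(layers, perceptions):
        ref = layer(ref, perc)
    return ref

layers = [make_layer(g) for g in (1.0, 2.0, 4.0)]
action = cascade(layers, top_ref=10.0, perceptions=[3.0, 1.0, 0.5])
```

The point of the regress is that only the bottom of the stack needs to be buildable today; every layer above it is specified purely by the reference it hands down.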

Maybe this is a more general formulation?

Comment author: RichardKennaway 02 March 2010 08:45:54AM *  0 points [-]

I don't want to tout control systems as The Insight that will create AGI in twenty years, but if I was working on AGI, hierarchical control systems organised as described by Bill Powers (see earlier references) are where I'd start from, not Bayesian reasoning[1], compression[2], or trying to speed up a theoretically optimal but totally impractical algorithm[3]. And given the record of toy demos followed by the never-fulfilled words "now we just have to scale it up", if I was working on AGI I wouldn't bother mentioning it until I had a demo of a level that would scare Eliezer.

Friendliness is a separate concern, orthogonal to the question of the best technological-mathematical basis for building artificial minds.

1. LessWrong, passim.

2. Marcus Hutter's Compression Prize.

3. AIXItl and the Gödel machine.