ReevesAnd comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong

Post author: Richard_Loosemore 05 May 2015 02:46AM

Comment author: Richard_Loosemore 14 May 2015 03:09:01PM 3 points

I agree with the sentiment behind what you say here.

The difficult part is to shake ourselves free of any unexamined, implicit assumptions that we might be bringing to the table, when we talk about the problem.

For example, when you say:

And this is the reason why we should be worried about an AI with a poorly made utility function

... you are talking in terms of an AI that actually HAS such a thing as a "utility function". And it gets worse: the idea of a "utility function" has enormous implications for how the entire control mechanism (the motivations and goals system) is designed.
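For concreteness, here is a minimal caricature, in Python, of the kind of utility-function-based goal stack design being discussed. Every name in it (Goal, GoalStackAgent, the utility parameter) is invented for illustration; it is not anyone's actual system:

```python
# A minimal caricature of the "utility function plus goal stack" control
# scheme: a single scalar utility drives every action choice, and goals
# are pushed onto and popped from a stack.  All names are illustrative.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Goal:
    name: str
    # A goal counts as satisfied when this predicate holds of the world state.
    satisfied: Callable[[dict], bool]


class GoalStackAgent:
    """Pops goals off a stack; for each goal, greedily picks the action
    that maximizes one scalar utility function."""

    def __init__(self, utility: Callable[[dict], float], goals: List[Goal]):
        self.utility = utility      # the one number everything reduces to
        self.stack = list(goals)    # last-in, first-out goal stack

    def step(self, state: dict, actions: List[Callable[[dict], dict]]) -> dict:
        if not self.stack:
            return state
        goal = self.stack[-1]
        # Choose whichever action yields the highest-utility successor state.
        state = max((act(state) for act in actions), key=self.utility)
        if goal.satisfied(state):
            self.stack.pop()        # goal achieved; move on to the next one
        return state
```

The point of the caricature is that every decision funnels through one scalar number, and that assumption shapes the entire architecture.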

A good deal of the debate about my paper centers on a clash of paradigms: on one side, a group of people who cannot even imagine the existence of any control mechanism except a utility-function-based goal stack; on the other, me and a fairly large community of real AI builders who consider a utility-function-based goal stack so unworkable that it will never be used in any real AI.

Other AI builders I have talked to (including all of those who turned up for the AAAI symposium where this paper was delivered a year ago) are unequivocal: they say a utility-function-and-goal-stack approach is something they would not dream of using in a real AI system. To them, that idea is just a piece of hypothetical silliness put into AI papers by academics who do not build actual AI systems.

And for my part, I am an AI builder with 25 years' experience, who was already rejecting that approach in the mid-1980s, and right now I am working on mechanisms that have only vague echoes of that design in them.

Meanwhile, there are very few people in the world who also work on real AGI system design (they are a tiny subset of the "AI builders" I referred to earlier), and of the four others that I know (Ben Goertzel, Peter Voss, Monica Anderson and Phil Goetz) I can say for sure that the first three all completely accept the logic in this paper. (Phil's work I know less about: he stays off the social radar most of the time, but he's a member of LW so someone could ask his opinion).

Comment author: ReevesAnd 14 May 2015 08:17:17PM 3 points

Could you describe some of the other motivation systems for AI that are under discussion? I imagine they might be complicated, but is it possible to explain them to someone not part of the AI building community?

Comment author: Richard_Loosemore 14 May 2015 08:59:31PM 3 points

AFAIK most people build planning engines that pursue multiple goals, plus what you might call "ad hoc" machinery to check on that engine. In other words, you might have something that comes up with a plan, and then a whole bunch of separate machinery that analyses the plan. A rough sketch of that pattern follows below.
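Here is what that pattern might look like, hedged heavily: the planner and every check function below (propose_plan, check_resource_use, check_forbidden_actions) are hypothetical stand-ins, not any real system's code:

```python
# A sketch of the "planner plus ad hoc checking machinery" pattern: one
# engine proposes a plan toward several goals, and a bag of separately
# written checks vets the plan afterwards.  Everything here is invented.

from typing import Callable, List

Plan = List[str]  # in this toy, a plan is just an ordered list of action names


def propose_plan(goals: List[str]) -> Plan:
    # Stand-in for a real planning engine.
    return [f"achieve:{g}" for g in goals]


def check_resource_use(plan: Plan) -> bool:
    return len(plan) < 100  # e.g. reject absurdly long plans


def check_forbidden_actions(plan: Plan) -> bool:
    return all("forbidden" not in step for step in plan)


def plan_with_checks(goals: List[str],
                     checks: List[Callable[[Plan], bool]]) -> Plan:
    plan = propose_plan(goals)
    # The "ad hoc" part: each check is bolted on separately, after the fact.
    for check in checks:
        if not check(plan):
            raise ValueError(f"plan rejected by {check.__name__}")
    return plan
```

Notice the design choice this embodies: the safety analysis lives outside the planner, as an afterthought, rather than being woven into how the plan is generated.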

My own approach is very different. Coming up with a plan is not a linear process; it involves large numbers of constraints acting in parallel. If you know how a neural net goes from a large array of inputs (e.g. a visual field) to smaller numbers of hidden units that encode progressively more abstract descriptions of the input, until finally some high-level node is activated, then picture that process happening in reverse: a few nodes are highly activated, and they cause more and more low-level nodes to come up. That gives a rough idea of how it works.

In practice, all the above means is that the maximum possible amount of contextual information acts on the evolving plan. And that is critical.
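A toy sketch, in Python, of the settling picture described above. Every number, weight, and update rule here is invented purely for illustration; this is not the actual mechanism, only a cartoon of top-down activation meeting lateral constraints in parallel:

```python
# Recognition run in reverse: clamp a few high-level nodes on, let
# activation flow downward through the weights, and relax lateral
# constraints among the low-level units in parallel until they settle.

import numpy as np

rng = np.random.default_rng(0)

n_high, n_low = 4, 32
W = rng.normal(scale=0.5, size=(n_high, n_low))  # high -> low connections
C = rng.normal(scale=0.1, size=(n_low, n_low))   # lateral constraints
np.fill_diagonal(C, 0.0)                         # no self-connections

high = np.zeros(n_high)
high[0] = 1.0            # clamp one abstract "plan" node on

low = np.zeros(n_low)
for _ in range(50):
    # Each low-level unit is pushed simultaneously by the top-down signal
    # and by every lateral constraint -- many constraints acting at once.
    drive = high @ W + low @ C
    low = 0.9 * low + 0.1 * np.tanh(drive)       # gradual settling

print(np.round(low, 2))  # the settled low-level pattern: the plan's details
```

The only point of the cartoon is that the low-level pattern is shaped simultaneously by the top-down drive and by all the lateral constraints, rather than by a linear, step-by-step derivation.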