pnrjulius comments on Reply to Holden on 'Tool AI' - Less Wrong

94 Post author: Eliezer_Yudkowsky 12 June 2012 06:00PM




Comment author: pnrjulius 19 June 2012 04:14:34AM -1 points

Factory robots and high-frequency traders are definitely agent AI. They are designed to be agents, and frankly they make no sense any other way.

The factory robot does not ask you whether it should move three millimeters to the left; it does not suggest that perhaps moving three millimeters to the left would be wise; it moves three millimeters to the left, because that is what its code tells it to do at this phase in the welding process.

The high-frequency trader even has a utility function: It's called profit, and it seeks out methods of trading options and derivatives to maximize that utility function.
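The trader-as-utility-maximizer idea can be sketched in a few lines. This is a deliberately toy illustration, not real trading code; the action names and expected-profit numbers are invented for the example:

```python
# Hypothetical sketch: a trader whose utility function is profit.
# It simply selects whichever available action maximizes expected profit.

def choose_trade(expected_profit):
    """Return the action with the highest expected profit (the utility)."""
    return max(expected_profit, key=expected_profit.get)

# Invented numbers purely for illustration.
expected_profit = {
    "buy_option_A": 120.0,
    "sell_derivative_B": 95.0,
    "hold": 0.0,
}

best = choose_trade(expected_profit)
print(best)  # the profit-maximizing action
```

The point of the sketch is that the system's choices are exhausted by the keys of that dictionary, which connects to the "stupidity" point below: the maximizer never invents actions outside its given option set.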

In both cases, these are agents, because they act directly on the world itself, without a human intermediary approving their decisions.

The only reason I'd even hesitate to call them agent AIs is that they are so stupid; the factory robot has hardly any degrees of freedom at all, and the high-frequency trader only has choices between different types of financial securities (it never asks whether it should become an entrepreneur, for instance). But this is a question of the AI part; they're definitely agents rather than tools.
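The agent/tool distinction being argued here can be made concrete with a small sketch (the class names and the 3 mm move are invented for illustration, echoing the welding example above):

```python
# Hypothetical sketch of the agent/tool distinction:
# an agent acts on the world directly; a tool only proposes, and a human decides.

class WeldingArm:
    """Agent: executes its action with no human intermediary."""
    def __init__(self):
        self.position_mm = 0

    def move(self, delta_mm):
        self.position_mm += delta_mm  # the world changes immediately

class ToolPlanner:
    """Tool: returns a suggestion; nothing happens until a human approves."""
    def plan(self, delta_mm):
        return f"Suggest moving {delta_mm} mm"

arm = WeldingArm()
arm.move(3)                          # agent: the arm has already moved
suggestion = ToolPlanner().plan(3)   # tool: only a string; no motion yet
```

The sketch shows why the number of degrees of freedom is orthogonal to agency: `WeldingArm` has exactly one action available, yet it is still an agent, because its output is a change to the world rather than a proposal.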

I do like your quick thought, though:

Quick thought: If it's hard to get AGIs to generate plans that people like, then it would seem that AGIs fall into this exception class, since in that case humans can do a better job of telling whether they like a given plan.

Yes, it makes a good deal of sense that we would want some human approval involved in the process of restructuring human society.

Comment author: Nick_Beckstead 19 June 2012 12:03:53PM 0 points

They're clearly agents given Holden's definitions. Why are they clearly agents given my proposed definition? (Normally I don't see a point in arguing about definitions, but I think my proposed definition lines up with something of interest: things that are especially likely to become dangerous if they're more powerful.)