
Richard_Hollerith2 comments on Terminal Values and Instrumental Values - Less Wrong

Post author: Eliezer_Yudkowsky 15 November 2007 07:56AM



Comment author: Richard_Hollerith2 15 November 2007 09:17:49PM 0 points

it's only "terminal" in the sense that that's where you choose to stop calculating.

No, the way Eliezer is using "terminal value", only the positions that are wins, losses or draws are terminal values for the chess-playing agent.

So wouldn't it be true that a "terminal value" just means a point at which we've chosen to stop calculating, rather than saying something about the situation itself?

Neither. A terminal value says something about the preferences of the intelligent agent.

And Eliezer asked us to imagine for a moment a hypothetical agent that never "stops calculating" until the rules of the game say the game is over. That is what the following text was for.

This is a mathematically simple sketch of a decision system. It is not an efficient way to compute decisions in the real world.
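The "mathematically simple sketch" referred to here is an expected-utility decision system: for each action, weight each outcome's utility by its probability given that action, and pick the action with the highest sum. A minimal Python sketch of that idea (the action names, probabilities, and utilities below are toy numbers chosen for illustration, not from the post):

```python
def expected_utility(outcome_dist, utility):
    """EU(a) = sum over outcomes o of P(o | a) * U(o)."""
    return sum(p * utility[o] for o, p in outcome_dist.items())

def choose(actions, model, utility):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(model[a], utility))

# Toy example: terminal outcomes like a chess game's win/draw/loss,
# with made-up outcome probabilities for two candidate strategies.
utility = {"win": 1.0, "draw": 0.0, "loss": -1.0}
model = {
    "aggressive": {"win": 0.5, "draw": 0.1, "loss": 0.4},  # EU = 0.1
    "solid":      {"win": 0.3, "draw": 0.6, "loss": 0.1},  # EU = 0.2
}
print(choose(model.keys(), model, utility))  # prints "solid"
```

Note that the utilities attach only to outcomes (the terminal values); the actions are ranked purely instrumentally, by where they are expected to lead.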

Suppose, for example, that you need a sequence of acts to carry out a plan? The formalism can easily represent this by letting each Action stand for a whole sequence. But this creates an exponentially large space, like the space of all sentences you can type in 100 letters. As a simple example, if one of the possible acts on the first turn is "Shoot my own foot off", a human planner will decide this is a bad idea generally - eliminate all sequences beginning with this action. But we've flattened this structure out of our representation. We don't have sequences of acts, just flat "actions".

So, yes, there are a few minor complications. Obviously so, or we'd just run out and build a real AI this way. In that sense, it's much the same as Bayesian probability theory itself.

But this is one of those times when it's a surprisingly good idea to consider the absurdly simple version before adding in any high-falutin' complications.