Vladimir_Nesov comments on Secrets of the eliminati - Less Wrong
I wonder: if you had an agent that obviously did have goals (say, a player in a game whose goal is to win, and who plays the optimal strategy), could you deduce those goals from behavior alone?
Let's say you're studying the game of Connect Four, but you have no idea what constitutes "winning" or "losing." You watch enough games that you can map out a game tree. In state X of the world, a player chooses option A over other possible options, and so on. From that game tree, can you deduce that the goal of the game was to get four pieces in a row?
I don't know the answer to this question. But it seems important. If it's possible to identify, given a set of behaviors, what goal they're aimed at, then we can test behaviors (human, animal, algorithmic) for hidden goals. If it's not possible, that's just as important, because it means that even in a simple game, where we know by construction that the players are "rational" goal-maximizing agents, we can't detect their goals from their behavior.
That would mean that behaviors which "seem" goal-less, from programs that have no line of code representing a goal, may in fact be maximizing the likelihood of some event; we just can't deduce what that "goal" is. In other words, it's not as simple as saying "that program doesn't have a line of code representing a goal": its behavior may encode a goal indirectly. Detecting such goals seems like a problem we would really want to solve.
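A toy version of this question can be made concrete in code. The sketch below is my own construction, not anything from the original discussion (Connect Four's game tree is too large for a comment, so it uses a Nim-like pile game instead, and every name in it is invented for illustration): two candidate win conditions share the same rules, an optimal player secretly plays under one of them, and we score each candidate by how often the observed moves are optimal under it.

```python
from functools import lru_cache

MOVES = (1, 2, 3)  # each turn, remove 1-3 stones from the pile

@lru_cache(maxsize=None)
def value(pile, last_stone_wins):
    """Negamax value for the player to move: +1 = win, -1 = loss."""
    if pile == 0:
        # The previous player just took the last stone.
        return -1 if last_stone_wins else +1
    return max(-value(pile - m, last_stone_wins) for m in MOVES if m <= pile)

def optimal_moves(pile, last_stone_wins):
    """Every move that achieves the negamax value under a given goal."""
    best = value(pile, last_stone_wins)
    return {m for m in MOVES
            if m <= pile and -value(pile - m, last_stone_wins) == best}

def play_game(start, last_stone_wins):
    """One optimally played game, recorded as (pile, chosen move) pairs."""
    pile, history = start, []
    while pile > 0:
        move = min(optimal_moves(pile, last_stone_wins))  # fixed tie-break
        history.append((pile, move))
        pile -= move
    return history

def goal_consistency(history):
    """Fraction of observed moves that are optimal under each candidate goal."""
    labels = {True: "last stone wins", False: "last stone loses"}
    return {labels[goal]: sum(move in optimal_moves(pile, goal)
                              for pile, move in history) / len(history)
            for goal in (True, False)}

if __name__ == "__main__":
    # The hidden truth: taking the last stone LOSES (misere play).
    observed = [step for start in range(5, 15)
                for step in play_game(start, last_stone_wins=False)]
    print(goal_consistency(observed))
```

One wrinkle: in positions that are lost under a candidate goal, every move counts as "optimal" there, so only the winning positions discriminate between candidates. Even so, in this toy case the true goal scores 1.0 while the rival scores strictly less, so the goal is recoverable from behavior alone.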
Compare with only ever seeing one move made in such a game, but being able to inspect in detail the reasons that played a role in deciding on that move, looking for explanations for it. It seems that even one move might suffice, which goes to show that it's unnecessary for behavior itself to somehow encode an agent's goals, since we can also take into account the reasons why the behavior is what it is.
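Here is a companion sketch of that contrast, using the same invented Nim-like game (restated so it runs on its own): instead of a long behavioral record, the inspector sees the agent's internal move evaluations at a single state, i.e. the "reasons" behind one decision, and checks which candidate goal reproduces them.

```python
from functools import lru_cache

MOVES = (1, 2, 3)  # each turn, remove 1-3 stones from the pile

@lru_cache(maxsize=None)
def value(pile, last_stone_wins):
    """Negamax value for the player to move: +1 = win, -1 = loss."""
    if pile == 0:
        return -1 if last_stone_wins else +1
    return max(-value(pile - m, last_stone_wins) for m in MOVES if m <= pile)

def internal_evaluations(pile, last_stone_wins):
    """The score the agent assigns each move: its reasons for choosing."""
    return {m: -value(pile - m, last_stone_wins) for m in MOVES if m <= pile}

if __name__ == "__main__":
    pile = 6
    # The agent's hidden goal is misere play (last stone loses); we get to
    # see its full evaluation table for this one state, not just its move.
    observed = internal_evaluations(pile, last_stone_wins=False)
    for goal, label in ((True, "last stone wins"), (False, "last stone loses")):
        predicted = internal_evaluations(pile, goal)
        print(f"{label}: {'matches' if predicted == observed else 'ruled out'}")
```

A single observed choice can be optimal under several goals at once, so behavior at one state may underdetermine the goal; the full evaluation table at that same state is far more discriminating, which is one way to read the point about reasons versus behavior.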