jessicat comments on more on predicting agents - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Game theory predicts that in some cases, an agent with a fixed utility function will randomize its actions (for example, the Nash equilibrium strategy for rock-paper-scissors is to randomize uniformly among all three moves). If true randomness is unavailable, an agent may use its computational power to compute expensive pseudorandom numbers that other agents will have difficulty computing themselves. There is no need for the agent to change its utility function to do this. Changing its utility function would likely cause the agent to optimize for different things than the original utility function favors; therefore, if the agent is acting according to the original utility function, it is unlikely to consider changing that function a good action.
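To illustrate the rock-paper-scissors point, here is a minimal sketch (the payoff matrix and helper function are mine, not from the comment) checking that the uniform mix is a Nash equilibrium: every pure reply against it earns exactly zero, so no deviation helps.

```python
# Row player's payoffs in rock-paper-scissors (+1 win, -1 loss, 0 tie);
# strategy order: rock, paper, scissors.
PAYOFF = [
    [0, -1, 1],    # rock
    [1, 0, -1],    # paper
    [-1, 1, 0],    # scissors
]

uniform = [1 / 3, 1 / 3, 1 / 3]

def expected_payoff(row_mix, col_mix):
    """Row player's expected payoff when both players play mixed strategies."""
    return sum(row_mix[i] * col_mix[j] * PAYOFF[i][j]
               for i in range(3) for j in range(3))

# Against the uniform mix, each pure strategy earns 0 in expectation,
# so no unilateral deviation improves on mixing uniformly.
for pure in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    print(expected_payoff(pure, uniform))  # prints 0.0 three times
```

Note that nothing in this check depends on the agent's utility function changing: the randomization is itself the optimal play under the fixed payoffs.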
Given that changing your utility function is generally a bad move for a utility maximizer, it does not seem like this will happen. Instead, it seems more likely that the agent's modeling ability will improve, and this will change its observed behavior, possibly making it less predictable. You can often change an agent's behavior quite a lot by changing its beliefs alone.
There is certainly the important issue of deciding what the old utility function means if the agent's model of the world changes, given that the function was defined relative to the old model. This is explored in this paper, but it does not lead to the agent taking on a fundamentally different utility function, only a faithful representation of the original one.