conchis comments on Epistemic vs. Instrumental Rationality: Approximations - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
While KL divergence is a very natural measure of the "goodness" of an approximation to a probability distribution, one which happens not to mention the utility function, there is still a strong sense in which only an instrumental rationalist can speak of a "better approximation", because only an instrumental rationalist can say the word "better" at all.
KL divergence is an attempt at a default metric for goodness of approximation, one that avoids talking about the utility function, or assumes as little about it as possible; but in the absence of a utility function, you just can't say the word "better", period.
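(For reference, the standard definition, not part of the comment itself: the KL divergence of an approximation Q from a true distribution P over discrete outcomes is

```latex
D_{\mathrm{KL}}(P \,\|\, Q) = \sum_i p_i \log \frac{p_i}{q_i}
```

which indeed mentions only the two distributions and never a utility function.)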
This is basically right, but I guess I think of it in slightly different terms. The KL divergence embodies a particular, implicit utility function, which just happens to be wrong much of the time. So it can make sense to speak of "better_KL"; it's just not necessarily very useful.
Note also that alternative divergence measures, embodying different implicit utility functions, could give different answers. For example, Jensen-Shannon divergence would agree with instrumental rationality here, no? (Though you could obviously construct examples where it too would diverge from our actual utility functions.)
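To make the point concrete, here is a minimal sketch showing that KL and Jensen-Shannon divergence can disagree about which of two approximations to a "true" distribution is better. The distributions p, q1, and q2 are made up purely for illustration:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D_KL(P || Q) in nats, for discrete
    distributions over the same outcomes (terms with p_i = 0 contribute 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: the mean KL divergence of P and Q from
    their midpoint mixture M = (P + Q)/2. Symmetric and bounded by ln 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical "true" distribution and two candidate approximations.
p  = [0.4, 0.4, 0.2]
q1 = [0.5, 0.498, 0.002]   # near-perfect on two outcomes, near-zero on the third
q2 = [0.2, 0.2, 0.6]       # moderately wrong on every outcome

print(f"KL: q1 -> {kl(p, q1):.3f}, q2 -> {kl(p, q2):.3f}")  # ~0.744 vs ~0.335: KL prefers q2
print(f"JS: q1 -> {js(p, q1):.3f}, q2 -> {js(p, q2):.3f}")  # ~0.070 vs ~0.086: JS prefers q1
```

The disagreement arises because KL blows up when the approximation puts near-zero probability on an outcome the true distribution considers likely, while JS, being bounded, is more forgiving of that particular failure; which behaviour counts as "better" depends, again, on the utility function.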