conchis comments on Epistemic vs. Instrumental Rationality: Approximations - Less Wrong

Post author: Peter_de_Blanc 28 April 2009 03:12AM




Comment author: Eliezer_Yudkowsky 28 April 2009 07:25:18AM 9 points

While KL divergence is a very natural measure of the "goodness of approximation" of a probability distribution, which happens not to talk about the utility function, there is still a strong sense in which only an instrumental rationalist can speak of a "better approximation", because only an instrumental rationalist can say the word "better".

KL divergence is an attempt to use a default sort of metric of goodness of approximation, without talking about the utility function, or while knowing as little as possible about the utility function; but in fact, in the absence of a utility function, you actually just can't say the word "better", period.
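The point can be made concrete with a small sketch (the distributions and the betting game below are illustrative numbers I chose, not from the original post): one approximation can be closer to the truth by KL divergence while a second, KL-worse approximation still earns more expected utility for an agent whose decision only depends on which outcome it ranks first.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# True distribution over three outcomes, and two candidate approximations.
p  = [0.40, 0.35, 0.25]
q1 = [0.30, 0.40, 0.30]   # closer to p by KL, but misranks the top outcome
q2 = [0.50, 0.20, 0.30]   # farther from p by KL, but gets the top outcome right

# KL calls q1 the "better" approximation...
assert kl(p, q1) < kl(p, q2)

# ...but for an agent who bets on its single most probable outcome
# (utility 1 if right, 0 if wrong), q2 yields more expected utility,
# because its argmax matches the true argmax of p.
def expected_win(p, q):
    best = max(range(len(q)), key=lambda i: q[i])
    return p[best]

assert expected_win(p, q2) > expected_win(p, q1)
print(kl(p, q1), kl(p, q2), expected_win(p, q1), expected_win(p, q2))
```

So "better approximation" by KL and "better to act on" can come apart; which one matters depends on the utility function, which is the instrumental rationalist's point.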

Comment author: conchis 28 April 2009 09:14:57AM 1 point

This is basically right, but I guess I think of it in slightly different terms. The KL divergence embodies a particular, implicit utility function, which just happens to be wrong much of the time. So it can make sense to speak of "better_KL"; it's just not necessarily a very useful notion.

Note also that alternative divergence measures, embodying different implicit utility functions, could give different answers. For example, Jensen-Shannon divergence would agree with instrumental rationality here, no? (Though you could obviously construct examples where it too would diverge from our actual utility functions.)
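For reference, a minimal sketch of the Jensen-Shannon divergence (its standard definition as the average KL of each distribution to their midpoint; the example distributions are my own): unlike KL it is symmetric and bounded, which is one way its implicit notion of "better" differs.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: average KL of p and q to their midpoint m."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.40, 0.35, 0.25]
q = [0.30, 0.40, 0.30]

# Unlike KL, JS is symmetric in its arguments...
assert abs(js(p, q) - js(q, p)) < 1e-12

# ...and bounded above by ln 2 (in nats), so no approximation is
# "infinitely bad" the way a zero-probability event makes KL blow up.
assert 0 <= js(p, q) <= math.log(2)
print(js(p, q))
```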