Peter_de_Blanc comments on Epistemic vs. Instrumental Rationality: Approximations - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
While KL divergence is a very natural measure of the "goodness of approximation" of a probability distribution, one that makes no reference to the utility function, there is still a strong sense in which only an instrumental rationalist can speak of a "better approximation", because only an instrumental rationalist can say the word "better".
KL divergence is an attempt at a default metric of goodness of approximation, one that avoids talking about the utility function, or assumes as little as possible about it; but in the absence of a utility function, you simply can't say the word "better", period.
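As a concrete illustration of the metric under discussion, here is a minimal sketch of KL divergence for discrete distributions (the function name and example distributions are my own; the comment itself gives no code):

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) for discrete distributions given as probability lists.

    Measures how poorly Q approximates P, in nats. Note it is asymmetric
    and makes no reference to any utility function -- which is exactly
    the point at issue in the comment above.
    """
    # Terms with p_i = 0 contribute 0 by convention (lim x->0 of x*log x = 0).
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A fair coin approximated by a heavily biased one: a poor approximation,
# as reflected by a divergence well above zero.
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))

# A distribution compared with itself: divergence is exactly zero.
print(kl_divergence([0.25, 0.75], [0.25, 0.75]))
```

The asymmetry (D_KL(P||Q) ≠ D_KL(Q||P) in general) is one reason it is a "default sort of metric" rather than a true distance.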
If epistemic rationalists can't speak of a "better approximation," then how can an epistemic rationalist exist in a universe with finite computational resources?
Pure epistemic rationalists with no utility function? Well, they can't, really. That's part of the problem with the Oracle AI scenario.
They can speak of a "closer approximation" instead. (But that still needs a metric.)