Peter_de_Blanc comments on Epistemic vs. Instrumental Rationality: Approximations - Less Wrong

Post author: Peter_de_Blanc 28 April 2009 03:12AM


Comment author: Eliezer_Yudkowsky 28 April 2009 07:25:18AM 9 points

While KL divergence is a very natural measure of the "goodness of approximation" of a probability distribution, which happens not to talk about the utility function, there is still a strong sense in which only an instrumental rationalist can speak of a "better approximation", because only an instrumental rationalist can say the word "better".

KL divergence is an attempt to use a default sort of metric of goodness of approximation, without talking about the utility function, or while knowing as little as possible about the utility function; but in fact, in the absence of a utility function, you actually just can't say the word "better", period.
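As a minimal illustration of this point (with made-up numbers, not anything from the original discussion), the sketch below constructs two approximations that sit at exactly the same KL divergence from the true distribution, yet lead an expected-utility maximizer to different choices, and hence to different expected utility under the truth. The specific distributions and the toy payoff table are hypothetical.

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_x P(x) * log(P(x) / Q(x)), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical true distribution over three outcomes.
p_true = [1/3, 1/3, 1/3]

# Two approximations that are permutations of each other, so they are
# at exactly the same KL divergence from the truth.
q1 = [0.50, 0.25, 0.25]
q2 = [0.25, 0.50, 0.25]
assert abs(kl_divergence(p_true, q1) - kl_divergence(p_true, q2)) < 1e-9

# A toy decision problem: utility[action] lists the payoff for each outcome.
# Action "a" pays 10 only on outcome 0; action "b" pays 4 regardless.
utility = {"a": [10, 0, 0], "b": [4, 4, 4]}

def best_action(belief):
    """Pick the action with the highest expected utility under `belief`."""
    return max(utility, key=lambda a: sum(b * u for b, u in zip(belief, utility[a])))

def true_expected_utility(action):
    """Evaluate the chosen action under the true distribution."""
    return sum(p * u for p, u in zip(p_true, utility[action]))

for name, q in [("q1", q1), ("q2", q2)]:
    act = best_action(q)
    print(f"{name}: KL = {kl_divergence(p_true, q):.4f}, "
          f"chooses {act!r}, true expected utility = {true_expected_utility(act):.2f}")
# q1 leads the agent to pick "a" (true expected utility ~3.33);
# q2 leads it to pick "b" (true expected utility 4.00),
# even though both approximations are equally "good" by KL divergence.
```

On this toy problem, which of the two equally-KL-close approximations is "better" depends entirely on the agent's utility function, which is the point being made above.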

Comment author: Peter_de_Blanc 28 April 2009 12:46:47PM 4 points

If epistemic rationalists can't speak of a "better approximation," then how can an epistemic rationalist exist in a universe with finite computational resources?

Comment author: Eliezer_Yudkowsky 21 June 2009 04:20:15PM 1 point

Pure epistemic rationalists with no utility function? Well, they can't, really. That's part of the problem with the Oracle AI scenario.

Comment author: [deleted] 24 January 2014 08:35:10PM 0 points

They can speak of a “closer approximation” instead. (But that still needs a metric.)
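A minimal sketch of why the metric still matters, again with made-up distributions: two standard metrics, KL divergence and total variation distance, can disagree about which of two approximations is "closer" to the same truth.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D_KL(P || Q), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def total_variation(p, q):
    """Total variation distance: half the L1 distance between P and Q."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

p  = [0.50, 0.30, 0.20]   # hypothetical "true" distribution
q1 = [0.50, 0.49, 0.01]   # nearly starves outcome 2 of probability
q2 = [0.25, 0.30, 0.45]

print(total_variation(p, q1), total_variation(p, q2))  # ~0.19 vs 0.25: q1 looks closer
print(kl(p, q1), kl(p, q2))                            # ~0.45 vs 0.18: q2 looks closer
```

The two metrics rank the approximations in opposite orders, so "closer" is only defined once a particular metric has been chosen.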