
Barkley_Rosser comments on "Inductive Bias" - Less Wrong

21 points · Post author: Eliezer_Yudkowsky · 08 April 2007 07:52PM

Comments (24)

Comment author: Barkley_Rosser 10 April 2007 06:34:00AM 0 points


So, an "optimal prior" is either a subjectively guessed probability or, more optimally, a probability distribution that coincides with an objective probability or probability distribution. That is, it would equal the posterior distribution one would arrive at after the asymptotic working out of Bayes' Theorem, assuming the conditions for Bayes' Theorem hold.
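That asymptotic claim can be sketched with a conjugate Beta-Bernoulli model (my own illustrative example, not from the comment; the particular numbers are arbitrary): when the truth lies in the prior's support and observations are i.i.d., the posterior mean concentrates on the objective probability.

```python
import random

random.seed(0)

true_p = 0.7             # the "objective" coin bias
alpha, beta_ = 1.0, 1.0  # Beta(1, 1), i.e. a uniform prior
n = 100_000

heads = sum(1 for _ in range(n) if random.random() < true_p)

# Conjugate update: the posterior is Beta(alpha + heads, beta_ + n - heads),
# whose mean tends to true_p as n grows (Bayesian consistency).
posterior_mean = (alpha + heads) / (alpha + beta_ + n)
print(posterior_mean)
```

Here the conditions for the theorem hold, so the subjective prior washes out and the posterior mean lands near 0.7.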

But what if those conditions do not hold? Will the "optimal prior" equal the "objective truth", or the distribution one arrives at after infinitely iterating the posterior-adjustment learning process, even assuming we do not have the sort of inertial, slow learning that seems to exist in much of reality?

To give an example of such non-convergence, consider the sort of case posed by Diaconis and Freedman: with an infinite-dimensional space and a disconnected basis, one can end up in a cycle rather than at the mean.
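A finite toy analogue of that failure (my own sketch, not the Diaconis-Freedman construction itself, which requires an infinite-dimensional parameter space): put the prior on only two coin biases, 0.2 and 0.8, while the true bias is 0.5, outside the prior's support. The log posterior odds then follow a zero-drift random walk, so posterior mass keeps swinging between the two wrong hypotheses instead of converging.

```python
import math
import random

random.seed(1)

true_p = 0.5    # the truth lies outside the prior's two-point support {0.2, 0.8}
log_odds = 0.0  # log [ P(p=0.2 | data) / P(p=0.8 | data) ], prior odds 1:1

for _ in range(100_000):
    heads = random.random() < true_p
    # per-observation log-likelihood ratio of p=0.2 versus p=0.8
    log_odds += math.log(0.2 / 0.8) if heads else math.log(0.8 / 0.2)

# Under the true p = 0.5 the expected per-step drift is exactly zero, so
# log_odds is a recurrent random walk: it crosses zero infinitely often,
# and the posterior oscillates rather than settling on either hypothesis.
drift = 0.5 * math.log(0.2 / 0.8) + 0.5 * math.log(0.8 / 0.2)
```

The zero drift is what blocks the usual consistency argument: neither hypothesis accumulates evidence on average, so no amount of data forces convergence.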