Eliezer_Yudkowsky comments on Priors as Mathematical Objects - Less Wrong

Post author: Eliezer_Yudkowsky 12 April 2007 03:24AM

Comments (19)

Comment author: Eliezer_Yudkowsky 12 April 2007 07:04:59AM 4 points

Hal, with a poor prior, "Bayesian updating" can lead to learning in the wrong direction or to no learning at all. Bayesian updating guarantees a certain kind of consistency, but not correctness. (If you have five city maps that agree with each other, they might still disagree with the city.) You might think of Bayesian updating as a kind of lower level of organization - like a computer chip that runs programs, or the laws of physics that run the computer chip - underneath the activity of learning. If you start with a maxentropy prior that assigns equal probability to every sequence of observations, and carry out strict Bayesian updating, you'll still never learn anything; your marginal probabilities will never change as a result of the Bayesian updates. Conversely, if you somehow had a good prior but no Bayesian engine to update it, you would stay frozen in time and no learning would take place. To learn you need a good prior and an updating engine. Taking a picture requires a camera, light - and also time.
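The maxentropy claim above can be checked directly. The sketch below (the helper name `posterior_next_bit` is my own, not from the comment) puts a uniform prior over all length-n binary sequences, performs a strict Bayesian update on an observed prefix, and reports the predictive probability of the next bit. However many observations you condition on, the marginal never moves from 1/2 — updating runs, but no learning occurs.

```python
from itertools import product
from fractions import Fraction

def posterior_next_bit(n, observed):
    """P(next bit = 1 | observed prefix), under a maxentropy prior
    that assigns equal probability to every length-n binary sequence."""
    seqs = list(product([0, 1], repeat=n))
    prior = Fraction(1, len(seqs))  # uniform prior over all sequences
    # Bayesian update: zero out sequences inconsistent with the data,
    # renormalize over the rest (which remain equally probable).
    consistent = [s for s in seqs if s[: len(observed)] == tuple(observed)]
    total = prior * len(consistent)
    ones = prior * sum(1 for s in consistent if s[len(observed)] == 1)
    return ones / total

# The predictive probability is unchanged by any run of observations:
print(posterior_next_bit(10, []))            # 1/2
print(posterior_next_bit(10, [1, 1, 1, 1]))  # still 1/2
```

Exact rational arithmetic (`Fraction`) is used so the invariance is not obscured by floating-point noise; the same updating machinery, pointed at a prior that favors compressible sequences, would start assigning more than 1/2 after a run of ones.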

This probably deserves its own post.