# Eliezer_Yudkowsky comments on "Inductive Bias" - Less Wrong

08 April 2007 07:52PM




Comment author: Eliezer_Yudkowsky, 08 April 2007 09:21:30PM, 1 point

Priors don't update. That's why they're called "priors".

Marginal posterior probabilities update; this is learning. Inductive priors over sequences don't update; they are what does the updating, they define your capability to learn. Even if you are a self-modifying AI and can rewrite your own source code, from a Bayesian perspective this is simply folded into an inductive prior over sequences of observations. I previously tried to write a post on this topic, but it got way too long and is now in my backlog of essays to finish someday.

This is exactly what I was trying to get at by distinguishing between the statement "The marginal probability of drawing a red ball on the third round is 50%", which is true in all three scenarios above, and the prior distributions over sequences of observations, which are different.
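The distinction can be made concrete with a short sketch. The three urn scenarios below are illustrative stand-ins of my own, not necessarily the ones from the original post: each assigns a different prior over length-3 sequences of draws, yet all three give the same 50% marginal probability of red on the third round.

```python
from itertools import product

# 'R' = red, 'W' = white. Each function is a prior over sequences of draws.

def iid_fair(seq):
    # Scenario A: each draw is an independent fair coin flip.
    return 0.5 ** len(seq)

def without_replacement(seq):
    # Scenario B: an urn holding 2 red and 2 white balls, drawn without
    # replacement.
    red, white, p = 2, 2, 1.0
    for ball in seq:
        total = red + white
        if ball == 'R':
            p *= red / total
            red -= 1
        else:
            p *= white / total
            white -= 1
    return p

def all_or_nothing(seq):
    # Scenario C: the urn is either all-red or all-white, each with prior 1/2.
    if all(b == 'R' for b in seq) or all(b == 'W' for b in seq):
        return 0.5
    return 0.0

seqs = [''.join(s) for s in product('RW', repeat=3)]
for name, prior in [('iid', iid_fair),
                    ('no-replacement', without_replacement),
                    ('all-or-nothing', all_or_nothing)]:
    # Marginal probability that the third draw is red: 0.5 in every scenario,
    # even though the sequence priors themselves are very different.
    marginal = sum(prior(s) for s in seqs if s[2] == 'R')
    print(name, round(marginal, 3))
```

The identical marginals hide the fact that, for example, the sequence RRR has prior 1/8 under scenario A, 0 under scenario B, and 1/2 under scenario C.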

The inductive prior defines your responses to sequences of observations. This does not change over time; it is outside time. Learning how to learn is simply folded into the joint probability distribution.
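A minimal sketch of this point, using a hypothetical urn that is either all-red or all-white with equal prior probability: "updating" after an observation is just conditioning the fixed prior over sequences, which itself never changes.

```python
# Prior over length-2 draw sequences for an urn that is all-red ('RR')
# or all-white ('WW'), each with probability 1/2. This function is fixed;
# observations never modify it.

def seq_prior(seq):
    if seq in ('RR', 'WW'):
        return 0.5
    return 0.0

# Posterior predictive P(second draw red | first draw red), computed purely
# by conditioning the unchanging sequence prior:
p_first_red = seq_prior('RR') + seq_prior('RW')   # = 0.5
p_both_red = seq_prior('RR')                      # = 0.5
print(p_both_red / p_first_red)                   # → 1.0
```

The predictive probability jumps from 0.5 to 1.0 after one red draw, yet nothing about `seq_prior` changed; the learning was already encoded in the joint distribution.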

Comment author: 29 June 2010 05:23:42PM, 3 points

> Priors don't update. That's why they're called "priors".

• John shows up on time for meetings 30% of the time.
• John has been reprimanded.
• I think there is 95% chance he will be on time for meetings from now on.

You could just say that 95% is my prior for P(OnTime|Reprimanded), but I am not sure people think this way; "the prior has been updated" seems more appropriate when the conditioning event has already happened.
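Both readings can be reconciled in a sketch where the joint distribution is fixed up front. The 30% and 95% figures are from the comment; the probability of a reprimand is an arbitrary assumption of mine for illustration.

```python
p_reprimanded = 0.5                       # assumed value, for illustration only
p_ontime_given_reprimand = 0.95           # figure from the comment
p_ontime_given_no_reprimand = 0.30        # John's base rate, from the comment

# The joint distribution over (Reprimanded, OnTime) is fixed once and never
# changes; it already encodes how beliefs respond to each possible history.
joint = {
    (True, True): p_reprimanded * p_ontime_given_reprimand,
    (True, False): p_reprimanded * (1 - p_ontime_given_reprimand),
    (False, True): (1 - p_reprimanded) * p_ontime_given_no_reprimand,
    (False, False): (1 - p_reprimanded) * (1 - p_ontime_given_no_reprimand),
}

# "Updating" after learning John was reprimanded is just conditioning:
p_r = joint[(True, True)] + joint[(True, False)]
posterior = joint[(True, True)] / p_r
print(round(posterior, 2))   # → 0.95
```

Whether one calls the 95% a conditional prior or an updated belief, it is the same number read off the same fixed joint distribution.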

Comment author: 29 March 2011 03:02:41PM, 0 points

Just call it your "current belief".