How should rational agents think about their less-than-rational creation process?
Suppose you are an AI designed by a drunk programmer. Your prior contains an "optimism" parameter that broadly skews how you see the world -- set it to -100 and you'd expect world-ending danger around every corner; set it to +100 and you'd expect heaven around every corner. Although your powerful learning algorithm lets you predict the world accurately, the optimism/pessimism bias never fully goes away: it skews your views about anything you don't know.
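To make the mechanism concrete, here is a minimal Python sketch of how a fixed bias parameter can coexist with accurate learning (all names here are hypothetical, invented for illustration). The idea is a standard shrinkage estimate: data dominates wherever evidence accumulates, but anything unobserved falls back on the skewed prior.

```python
class BiasedAgent:
    """Toy agent whose prior over unknown outcomes is set by a fixed
    'optimism' parameter. Hypothetical sketch, not any real system."""

    def __init__(self, optimism: float):
        # Map optimism in [-100, +100] to a prior expected "goodness" in [0, 1].
        self.prior_mean = (optimism + 100) / 200
        self.observations: dict[str, list[float]] = {}

    def observe(self, event: str, goodness: float) -> None:
        self.observations.setdefault(event, []).append(goodness)

    def expected_goodness(self, event: str, prior_weight: float = 10.0) -> float:
        # Shrinkage estimate: with lots of data the estimate converges on
        # the observed average; with no data it is pure prior -- the bias
        # never washes out for anything the agent hasn't seen.
        data = self.observations.get(event, [])
        return (prior_weight * self.prior_mean + sum(data)) / (prior_weight + len(data))


pessimist = BiasedAgent(optimism=-100)
optimist = BiasedAgent(optimism=+100)

for agent in (pessimist, optimist):
    for _ in range(1000):
        agent.observe("sunrise", 0.7)  # plenty of evidence on this event

print(pessimist.expected_goodness("sunrise"))        # ~0.69: data dominates
print(optimist.expected_goodness("sunrise"))         # ~0.70: data dominates
print(pessimist.expected_goodness("first contact"))  # 0.0: pure prior
print(optimist.expected_goodness("first contact"))   # 1.0: pure prior
```

Both agents converge on nearly the same estimate for the well-observed event, yet their beliefs about the never-observed one sit at opposite extremes of the prior -- which is exactly the sense in which the bias never fully goes away.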
Unfortunately for you, your programmer set the parameter randomly, rather than attempting to figure out which setting was most accurate or useful. You know for a fact that they just mashed the numpad....