endoself comments on Clarification of AI Reflection Problem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (8)
Hmm. If it believed itself to be well-calibrated on questions where it is certain, we have Löb's paradox — but are there any obvious problems with an agent that thinks it is well-calibrated on all questions where it is not certain?
Nice choice of phrase.