endoself comments on Clarification of AI Reflection Problem - Less Wrong

Post author: paulfchristiano, 10 December 2011 10:30PM


Comment author: endoself, 11 December 2011 01:50:29AM, 1 point

For example, it is easy to see that an agent should not believe that its own beliefs are well-calibrated on all questions

Hmm. If it believed itself to be well-calibrated on questions where it is certain, we get Löb's paradox; but are there any obvious problems with an agent that thinks it is well-calibrated on all questions where it is not certain?
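The Löb issue being alluded to can be made precise. A rough sketch, assuming the agent's certainty is modeled as provability in a theory extending Peano Arithmetic (this formalization is my gloss, not something stated in the comment):

```latex
% L\"ob's theorem: for a theory $T$ extending PA, with provability
% predicate $\Box$, and any sentence $P$:
%   if $T \vdash \Box P \to P$, then $T \vdash P$.
%
% If the agent endorses its own calibration on certain questions,
% it accepts the reflection schema
\[
  \Box P \to P \quad \text{for all } P,
\]
% and L\"ob's theorem then forces $T \vdash P$ for every sentence $P$,
% including false ones -- the agent's beliefs become trivial.
```

So full self-trust on questions where the agent is certain collapses into believing everything; the open question in the comment is whether restricting self-trust to uncertain questions avoids any analogous problem.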

Long Telomeres

Nice choice of phrase.