This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
Say you are trying to figure out what the mass of an electron is. As you develop your experimental techniques, there will be better or worse approximate answers along the way. It makes sense to characterize the approximations to the mass you seek to measure as more or less accurate, and to characterize someone else's wild guesses about this value as correct or not correct at all.
On the other hand, it doesn't make sense to similarly characterize the actual mass of an electron. The actual mass of an electron can't be correct or incorrect, can't be more or less well-calibrated -- talking this way would indicate a conceptual confusion.
When I talked about prior or preference in the above comments, I meant the actual facts, not particular approximations to those facts: the concepts that we might want to approximate, not the approximations themselves. Characterizing these facts as correct or incorrect doesn't make sense for similar reasons.
Furthermore, since they are fixed elements of an ideal decision-making algorithm, it doesn't make sense to ascribe preference to them (more or less useful, more or less preferable). This is a bit more subtle than with the example of the mass of an electron, since in that case we had a factual estimation process, whereas with decision-making we also have a moral estimation process. With factual estimation, the fact that we are approximating isn't itself an approximation, and so can't be more or less accurate. With moral estimation, we are approximating the true value of a decision (event), and the actual value of a decision (event) can't be too high or too low.
I follow you up until you conclude that priors cannot be correct or incorrect. An agent with more accurate priors will converge toward the actual answer more quickly - I'll grant that's not a binary distinction, but it's a useful one.
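The convergence claim can be made concrete with a toy simulation (my own illustration, not part of the original exchange; the Beta-prior setup, parameter values, and coin-flip data are all assumptions chosen for the sake of the example). Two Bayesian agents with equally strong but differently centered priors observe the same flips of a biased coin; the agent whose prior is centered nearer the true bias stays closer to the answer at every sample size.

```python
# Toy sketch (illustrative assumptions throughout): two agents estimate a
# coin's heads-probability p using Beta priors of equal strength (10
# pseudo-observations) but different means. The "accurate" prior is
# centered on the true bias; the "inaccurate" one is not.

TRUE_P = 0.7

# A fixed, representative data stream: 7 heads per 10 flips, repeated.
flips = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1] * 5  # 50 flips, 35 heads

def posterior_mean(alpha, beta, data):
    """Posterior mean of p under a Beta(alpha, beta) prior after observing data."""
    heads = sum(data)
    return (alpha + heads) / (alpha + beta + len(data))

accurate = posterior_mean(7, 3, flips)    # prior mean 0.7, matches TRUE_P
inaccurate = posterior_mean(2, 8, flips)  # prior mean 0.2

print(abs(accurate - TRUE_P), abs(inaccurate - TRUE_P))
```

Both posteriors approach the true bias as the data accumulate, which matches the point being granted: "more accurate prior" is a matter of degree rather than a binary correct/incorrect verdict, but the degree shows up directly in how fast the estimate closes in on the truth.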