what's your prior for my statement being true of a randomly chosen person
Sufficiently close to zero.
what's your prior for a randomly chosen statement I make about my preferences being true
Depends on the meaning of "true". In the sense of "you believe that at the moment", my prior is fairly high -- that is, I don't think you're playing games here. In the sense of "you will choose that when you actually have to choose", my prior is noticeably lower -- I'm not willing to assume your picture of yourself is correct.
(nods) cool, that's what I figured initially, but it seemed worth confirming.
There's a recent science fiction story whose name I can't recall, in which the narrator is traveling somewhere by plane, and the security check includes a brain scan for deviance. The narrator is a pedophile. Everyone who sees the results of the scan is horrified--not that he's a pedophile, but that his particular brain abnormality is easily fixable, which means he has chosen to remain one. He's closely monitored, so he'll never be able to act on those desires, but he keeps them anyway, because they're part of who he is.
What would you do in his place?
In the language of good old-fashioned AI, his pedophilia is a goal or a terminal value. "Fixing" him means changing or erasing that value. People here sometimes say that a rational agent should never change its terminal values. (If one goal is unobtainable, the agent will simply not pursue that goal.) Why, then, can we imagine the man being tempted to do so? Would it be a failure of rationality?
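To make the parenthetical concrete, here's a minimal sketch (not from the story or the original discussion; all names and numbers are illustrative assumptions) of the GOFAI framing: terminal values are fixed weights over outcomes, and the agent simply picks the action with the highest expected value.

```python
# Terminal values: how much the agent cares about each outcome (illustrative weights).
terminal_values = {
    "act_on_desire": 10.0,        # the forbidden goal
    "stay_out_of_prison": 50.0,
    "keep_identity_intact": 20.0,
}

# Each action maps to the probability of producing each outcome.
actions = {
    "comply_and_keep_values": {
        "act_on_desire": 0.0,          # monitoring makes this unattainable
        "stay_out_of_prison": 1.0,
        "keep_identity_intact": 1.0,
    },
    "accept_the_fix": {
        "act_on_desire": 0.0,
        "stay_out_of_prison": 1.0,
        "keep_identity_intact": 0.0,   # the value itself would be erased
    },
}

def expected_value(outcome_probs, values):
    return sum(p * values[o] for o, p in outcome_probs.items())

best = max(actions, key=lambda a: expected_value(actions[a], terminal_values))
print(best)  # -> "comply_and_keep_values"

# Because "act_on_desire" is unattainable under every available action, its
# weight never changes which action wins; the agent just stops pursuing it.
# "Fixing" the man is different in kind: it edits the terminal_values dict
# itself, something this maximizer has no term for wanting or not wanting.
```

That last point is where the puzzle lives: the maximizer above can't even represent a preference about its own values, yet we can imagine the man weighing one.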
If the answer is that one terminal value can rationally set a goal to change another terminal value, then either