There's a recent science fiction story that I can't recall the name of, in which the narrator is traveling somewhere by plane, and the security check includes a brain scan for deviance. The narrator is a pedophile. Everyone who sees the results of the scan is horrified--not that he's a pedophile, but that his particular brain abnormality is easily fixed, which means he has chosen to remain a pedophile. He's closely monitored, so he'll never be able to act on those desires, but he keeps them anyway, because that's part of who he is.
What would you do in his place?
In the language of good old-fashioned AI, his pedophilia is a goal or a terminal value. "Fixing" him means changing or erasing that value. People here sometimes say that a rational agent should never change its terminal values. (If one goal is unobtainable, the agent will simply not pursue that goal.) Why, then, can we imagine the man being tempted to do so? Would it be a failure of rationality?
If the answer is that one terminal value can rationally set a goal to change another terminal value, then either
- any terminal value of a rational agent can change, or
- we need another word for the really terminal values that can't be changed rationally, and a way of identifying them, and a proof that they exist.
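To make the tension concrete, here's a toy expected-utility sketch (a minimal sketch of my own; the value names and numbers are illustrative assumptions, not anything from the story). The agent scores "keep the value" against "erase the value" using only its current terminal values; the point is that the verdict depends entirely on how the other values weigh the outcome.

```python
# Toy model of the "should I erase one of my own terminal values?" question.
# All value names and numbers below are illustrative assumptions, not from
# the post or the story.

# The agent's current terminal values, with weights reflecting how much
# its current self cares about each.
VALUES = {
    "forbidden_desire": 1.0,        # the value others want erased
    "freedom_from_monitoring": 1.0, # not living under surveillance
    "self_continuity": 1.0,         # remaining "who I am"
}

# How well each value is expected to be satisfied under each option,
# judged by the agent's *current* values (0 = not at all, 1 = fully).
OUTCOMES = {
    "keep_value": {
        "forbidden_desire": 0.0,        # monitored, so never acted on
        "freedom_from_monitoring": 0.0, # surveillance continues
        "self_continuity": 1.0,
    },
    "erase_value": {
        "forbidden_desire": 0.0,        # the desire no longer exists to satisfy
        "freedom_from_monitoring": 1.0, # monitoring presumably ends
        "self_continuity": 0.0,
    },
}

def expected_utility(option: str) -> float:
    """Score an option using the agent's current terminal values."""
    return sum(VALUES[v] * OUTCOMES[option][v] for v in VALUES)

for option in OUTCOMES:
    print(f"{option}: {expected_utility(option):.1f}")

# With these (made-up) weights the options tie at 1.0; nudging any weight
# tips the choice. Erasing a terminal value can come out "rational" or not
# depending entirely on how the agent's other values score the outcome.
```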
Weirded out at the oversharing, obviously.
Assuming the context was one where sharing this somehow fit ... somewhat squicked, but I would probably be squicked by some of their fantasies. That's fantasies for you.
Oh, and some of the less rational ones might worry that this was an indicator that I was a dangerous psychopath. Probably the same ones who equate "pedophile" with "pedophile who fantasises about kidnap, rape, torture and murder". I dunno.
Why is this irrational? Having a fantasy of doing X means you're more likely to do X.