Here is a new paper of mine (12 pages) on suspicious agreement between beliefs and values. The idea is that if your empirical beliefs systematically support your values, that is evidence that you arrived at those beliefs through a biased belief-forming process. This is especially so, I argue, if those beliefs concern propositions which aren't probabilistically correlated with each other.
I have previously written several LW posts on these kinds of arguments (here and here; see also my and ClearerThinking's political bias test), but the analysis here is more thorough. See also Thrasymachus' recent post on the same theme.
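To make the probabilistic core of the argument concrete, here is a minimal sketch of my own (an illustration, not anything taken from the paper): treat each independent empirical proposition as having some prior chance of coming out on the side that happens to support your values, and compare how likely across-the-board agreement is under an unbiased versus a value-driven belief-forming process. The particular numbers (p = 0.5, q = 0.9) are assumptions chosen only for illustration.

```python
# Illustration (my own, not from the paper): n logically independent empirical
# propositions, each with prior probability p of landing on the side that
# supports your values.
#
# Unbiased hypothesis: beliefs track the evidence, so the chance that all n
# line up with your values is p**n.
# Biased hypothesis: a value-driven process pushes each belief toward the
# convenient side with probability q > p, giving q**n.
#
# The likelihood ratio q**n / p**n is the Bayes factor that observing full
# agreement provides in favour of the "biased" hypothesis.

def bayes_factor_for_bias(n, p=0.5, q=0.9):
    """Likelihood ratio P(all n agree | biased) / P(all n agree | unbiased)."""
    return (q / p) ** n

for n in (3, 5, 10):
    print(n, round(bayes_factor_for_bias(n), 1))
# At n = 10 the factor is already ~357: agreement on many uncorrelated
# propositions is strong evidence of a biased belief-forming process.
```

The same calculation shows why correlation matters: if the propositions were strongly correlated, agreement among them would be unsurprising and the evidence for bias correspondingly weaker.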
Having beliefs and values converge to the truth is the desired outcome. The trick is knowing whether the convergence is to the truth, or just the shortest-line projection between the two.
Whether in science or law, truth-producing activities tend to be adversarial. Done honestly and with commitment, against capable adversaries, that's a pretty good system. If you care enough to spend the effort, and have capable and similarly committed adversaries available, I think that's a much better recipe for coming to the truth than stewing in the juices of your own beliefs and the beliefs of your tribe.
Having your beliefs converge on the truth is the desired outcome.
Values don't have a truthiness property. If your beliefs and your values converge, something else is going on.