JoshuaZ comments on Pancritical Rationalism Can Apply to Preferences and Behavior - Less Wrong
I’d first like to congratulate you on a much more reasonable presentation of Popperian ideas than the recent trolling.
What about beliefs being justified by non-beliefs? If you're a traditional foundationalist, you think everything is ultimately grounded in sense-experience, which we cannot reasonably doubt.
Also, what about externalism? This is one of the major elements of modern epistemology, as a response to such skeptical arguments.
I don't mean to imply that either of these is correct, but if you're going to use a disjunctive syllogism to argue for anti-justificationism, you ought to be sure you've partitioned the space of reasonable theories.
This example seems anomalous. If there exists some H such that, once P(H) > 0.9, you lose the ability to choose P(H), you might want to postpone believing it for prudential reasons. But those reasons don't really bear on what the epistemically rational level of belief is (assuming that remaining epistemically rational is not itself part of formal epistemic rationality).
Furthermore, if you adopted a policy of never raising P(H) above 0.9, it'd be just as if you were stuck with P(H) ≤ 0.9!
It seems that there is a big difference between the two cases. We can criticize beliefs because we have a standard by which to measure them: reality. In the same way, we can criticize maps if they're not very accurate representations of the territory. But it's not at all clear that we have anything analogous for preferences. True, you could criticize my short-term preference for not going to lectures as ineffective towards my long-term goal of getting my degree, but there doesn't seem to be any canonical metric by which to criticize deep, foundational preferences.
One of the most important aspects of our epistemology with regard to factual beliefs is that the set of beliefs a computationally unlimited agent should hold is uniquely determined by the evidence it has witnessed. However, this doesn't seem to be the case with preferences: if I have a single long-term goal, there's no proof that it should be {live a long time} rather than {die soon}. Without a constraining external metric, there are many consistent sets of preferences, and the only criticism you can ultimately bring to bear is one of inconsistency.
I don't think this is true. Aumann's agreement theorem shows that this holds in the limiting case, assuming an infinite string of evidence, but it isn't the case for any finite amount of evidence. Indeed, simply choose different versions of the Solomonoff prior: different formulations of Turing machines change the Kolmogorov complexity by at most a constant, but that still changes the Solomonoff prior. It just means that the two sets of priors need to look similar overall.
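To make the "constant" explicit (a sketch using the standard invariance theorem, with $2^{-K}$ standing in for the full Solomonoff mixture; none of the symbols below come from the thread): for universal machines $U$ and $V$ there is a constant $c_{UV}$ such that

$$K_U(x) \le K_V(x) + c_{UV} \quad \text{for all } x, \qquad \text{hence} \qquad 2^{-K_U(x)} \ge 2^{-c_{UV}} \cdot 2^{-K_V(x)}.$$

The two priors dominate each other within fixed multiplicative constants, yet on any particular hypothesis they can still assign different probabilities, so conditioning them on the same finite evidence yields different posteriors.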
Would a similar statement couched in terms of limits be true?
As an agent's computational ability increases, its beliefs should converge with those of similar agents regardless of their priors.
The limit you proposed doesn't help. One's beliefs after applying Bayes' rule are determined by the prior and by the evidence. We're talking about a situation where the evidence is the same and finite, and the priors differ. Having more compute power doesn't enter into it.
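A minimal sketch of that point (a toy coin-flip model with made-up numbers, not anything from the thread): two agents apply Bayes' rule to the same finite evidence and reach different posteriors simply because their priors differ. Only as the evidence grows do the posteriors approach each other, and extra computing power never enters the calculation.

```python
import math
import random

def posterior(prior_h, flips, p_heads_if_h=0.7, p_heads_if_not_h=0.5):
    """P(H | flips) by Bayes' rule, where H says the coin lands heads with
    probability p_heads_if_h and not-H says p_heads_if_not_h.
    Log-likelihoods are used only for numerical stability."""
    log_like_h = sum(math.log(p_heads_if_h if f else 1 - p_heads_if_h) for f in flips)
    log_like_not_h = sum(math.log(p_heads_if_not_h if f else 1 - p_heads_if_not_h) for f in flips)
    log_joint_h = math.log(prior_h) + log_like_h
    log_joint_not_h = math.log(1 - prior_h) + log_like_not_h
    m = max(log_joint_h, log_joint_not_h)
    return math.exp(log_joint_h - m) / (math.exp(log_joint_h - m) + math.exp(log_joint_not_h - m))

random.seed(0)
flips_short = [1 if random.random() < 0.7 else 0 for _ in range(10)]                 # 10 flips
flips_long = flips_short + [1 if random.random() < 0.7 else 0 for _ in range(990)]   # 1000 flips

for label, flips in [("10 flips", flips_short), ("1000 flips", flips_long)]:
    post_a = posterior(0.5, flips)    # agent A's prior: P(H) = 0.5
    post_b = posterior(0.01, flips)   # agent B's prior: P(H) = 0.01
    print(f"{label}: agent A -> {post_a:.3f}, agent B -> {post_b:.3f}")
```

After ten flips the two posteriors are visibly different; after a thousand flips they nearly coincide, which is the sense in which agreement is only a limiting result.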