JoshuaZ comments on Pancritical Rationalism Can Apply to Preferences and Behavior - Less Wrong

Post author: TimFreeman 25 May 2011 12:06PM


Comment author: Larks 25 May 2011 01:53:51PM 5 points

I’d first like to congratulate you on a much more reasonable presentation of Popperian ideas than the recent trolling.

Justificationism. Your belief is justified because it is a consequence of other beliefs. This path is self-defeating: eventually you'll either go in circles trying to justify the other beliefs, or you'll hit beliefs you can't justify. Justificationism itself cannot be justified.

What about beliefs being justified by non-beliefs? If you're a traditional foundationalist, you think everything is ultimately grounded in sense-experience, which we cannot reasonably doubt.

Also, what about externalism? This is one of the major elements of modern epistemology, developed in large part as a response to just such skeptical arguments.

I don't mean to imply that either of these is correct, but if one is going to use disjunctive syllogism to argue for anti-justificationism, one ought to be sure one has actually partitioned the space of reasonable theories.

Perhaps it is so structured that it is invulnerable to being changed after it is adopted, regardless of the evidence observed.

This example seems anomalous. If there exists some H such that, once P(H) > 0.9, you lose the ability to revise P(H), you might want to postpone believing in it for prudential reasons. But such reasons don't really bear on what the epistemically rational level of belief is (assuming that remaining epistemically rational is not itself part of formal epistemic rationality).

Furthermore, if you adopted a policy of never raising P(H) above 0.9, it'd be just as if you were stuck with P(H) ≤ 0.9!
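To make that concrete, here is a minimal Python sketch (the likelihood numbers and the 0.9 cap are invented for illustration): an agent that refuses to let P(H) rise above a cap ends up, observation after observation, behaving exactly as if it were stuck at the cap.

```python
def bayes_update(p_h, lik_h, lik_not_h):
    """Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|~H)."""
    joint = lik_h * p_h
    return joint / (joint + lik_not_h * (1.0 - p_h))

def capped_update(p_h, lik_h, lik_not_h, cap=0.9):
    """The same Bayesian update, except belief in H is never allowed above the cap."""
    return min(cap, bayes_update(p_h, lik_h, lik_not_h))

p_free = p_capped = 0.5
for _ in range(10):          # ten observations, each favouring H at 4:1 odds
    p_free = bayes_update(p_free, 0.8, 0.2)
    p_capped = capped_update(p_capped, 0.8, 0.2)

print(round(p_free, 6))      # ~0.999999: the unconstrained belief keeps climbing
print(round(p_capped, 6))    # 0.9: behaviourally identical to being stuck at the cap
```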

Once you accept the idea that beliefs can be criticized, it's a small step from there to adopting a similar approach to preferences and behavior. Here are some plausible criticisms of a preference:

It seems that there is a big difference between the two cases. We can criticize beliefs because we have a standard by which to measure them, namely reality, in the same way that we can criticize maps that are not accurate representations of the territory. But it's not at all clear that we have anything analogous for preferences. True, you could criticize my short-term preference for going to lectures as ineffective towards my long-term goal of getting my degree, but there doesn't seem to be any canonical metric by which to criticize deep, foundational preferences.

One of the most important aspects of our epistemology with regard to factual beliefs is that the set of beliefs a computationally unlimited agent should hold is uniquely determined by the evidence it has witnessed. However, this doesn't seem to be the case with preferences: if I have a single long-term goal, there's no proof it should be {live a long time} rather than {die soon}. Without a constraining external metric, there are many consistent sets of preferences, and the only criticism you can ultimately bring to bear is one of inconsistency.
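To illustrate that last point, here is a minimal Python sketch (the items and preferences are invented): if inconsistency is the only criticism available, the most a critic can do is hunt for a cycle, a set of pairwise preferences that admits no consistent ordering.

```python
def find_preference_cycle(prefers):
    """Given strict pairwise preferences as (better, worse) pairs, return a
    cycle such as A > B > C > A if one exists, otherwise None."""
    graph = {}
    for better, worse in prefers:
        graph.setdefault(better, set()).add(worse)

    def dfs(node, path, seen):
        for nxt in graph.get(node, ()):
            if nxt in path:                      # found a preference loop
                return path[path.index(nxt):] + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                cycle = dfs(nxt, path + [nxt], seen)
                if cycle:
                    return cycle
        return None

    for start in graph:
        cycle = dfs(start, [start], {start})
        if cycle:
            return cycle
    return None

# Nothing external says any single ranking here is "wrong", but jointly
# they are intransitive, and that is the one criticism that sticks.
print(find_preference_cycle([("tea", "coffee"), ("coffee", "water"), ("water", "tea")]))
# -> ['tea', 'coffee', 'water', 'tea']
```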

Comment author: JoshuaZ 25 May 2011 04:01:32PM 1 point

One of the most important aspects of our epistemology with regard to factual beliefs is that the set of beliefs a computationally unlimited agent should hold is uniquely determined by the evidence it has witnessed.

I don't think this is true. Aumann's agreement theorem shows that it holds in the limiting case, assuming an infinite string of evidence, but it doesn't hold for any finite amount of evidence. Indeed, simply choose different versions of the Solomonoff prior: different formulations of Turing machines change the Kolmogorov complexity by at most an additive constant, but that still changes the Solomonoff prior. It just means that the two priors have to look similar overall.
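A minimal Python sketch of this point, with invented numbers: two priors that stay within a bounded factor of one another (as Solomonoff priors built on different universal machines must) still yield different posteriors after the same finite evidence.

```python
def posterior(prior, likelihood):
    """Bayes over a finite hypothesis set: normalise prior[h] * likelihood[h]."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(joint.values())
    return {h: round(p / total, 3) for h, p in joint.items()}

likelihood = {"H1": 0.9, "H2": 0.1}    # the same finite evidence for both agents

prior_a = {"H1": 0.5, "H2": 0.5}
prior_b = {"H1": 0.2, "H2": 0.8}       # every mass within a factor of 2.5 of prior_a

print(posterior(prior_a, likelihood))  # {'H1': 0.9, 'H2': 0.1}
print(posterior(prior_b, likelihood))  # {'H1': 0.692, 'H2': 0.308}
```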

Comment author: lessdazed 25 May 2011 10:23:37PM 0 points

Would a similar statement couched in terms of limits be true?

As an agent's computational ability increases, its beliefs should converge with those of similar agents regardless of their priors.

Comment author: TimFreeman 25 May 2011 10:51:27PM 1 point

Would a similar statement couched in terms of limits be true?

As an agent's computational ability increases, its beliefs should converge with those of similar agents regardless of their priors.

The limit you proposed doesn't help. One's beliefs after applying Bayes' rule are determined by the prior and by the evidence. We're talking about a situation where the evidence is the same and finite, and the priors differ. Having more computing power doesn't enter into it.
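In equation form (this is just standard Bayes' rule, nothing specific to this thread):

$$ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)} $$

Every quantity on the right-hand side is fixed by the prior and the evidence; the agent's computational ability appears nowhere, so given the same finite evidence, agents with different priors end up with different posteriors no matter how much computing power they have.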