I’d first like to congratulate you on a much more reasonable presentation of Popperian ideas than the recent trolling.
Justificationalism. Your belief is justified because it is a consequence of other beliefs. This path is self-defeating: eventually you'll go in circles trying to justify the other beliefs, or you'll find beliefs you can't justify. Justificationalism itself cannot be justified.
What about beliefs being justified by non-beliefs? If you're a traditional foundationalist, you think everything is ultimately grounded in sense-experience, which we cannot reasonably doubt.
Also, what about externalism? This is one of the major elements of modern epistemology, as a response to such skeptical arguments.
I don't mean to imply that either of these is correct, but if one is going to use disjunctive syllogism to argue for anti-justificationalism, one ought to be sure one has partitioned the space of reasonable theories.
Perhaps it is so structured that it is invulnerable to being changed after it is adopted, regardless of the evidence observed.
This example seems anomalous. If there exists some H such that, once P(H) > 0.9, you lose the ability to revise P(H), you might want to postpone believing in it for prudential reasons. But such reasons don't really bear on what the epistemically rational level of belief is (assuming that remaining epistemically rational is not itself part of formal epistemic rationality).
Furthermore, if you adopted a policy of never raising P(H) above 0.9, it'd be just as though you were stuck with P(H) < 0.9!
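The equivalence claimed above can be made concrete with a small sketch. The hypothetical agent below applies ordinary odds-form Bayesian updating, except that its policy clips credence at 0.9; the 2:1 likelihood ratio per observation and the ten-observation run are arbitrary assumptions for illustration.

```python
def bayes_update(p, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio."""
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

def capped_update(p, likelihood_ratio, cap=0.9):
    """Same update, but the agent's policy refuses to let credence exceed the cap."""
    return min(bayes_update(p, likelihood_ratio), cap)

p_free, p_capped = 0.5, 0.5
for _ in range(10):  # ten pieces of evidence favoring H at 2:1
    p_free = bayes_update(p_free, 2.0)
    p_capped = capped_update(p_capped, 2.0)

print(p_free)    # approaches 1.0 as evidence accumulates
print(p_capped)  # pinned at the cap of 0.9, no matter how much evidence arrives
```

However strong the evidence gets, the capped agent's credence never moves past 0.9, which is exactly the "stuck with P(H) < 0.9" situation the policy was meant to avoid.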
Once you accept the idea that beliefs can be criticized, it's a small step from there to adopting a similar approach to preferences and behavior. Here are some plausible criticisms of a preference:
It seems that there is a big difference between the two cases. We can criticize beliefs because we have a standard by which to measure them – reality – in the same way that we can criticize maps that are not accurate representations of the territory. But it's not at all clear that we have anything analogous for preferences. True, you could criticize my short-term preference for going to lectures as ineffective toward my long-term goal of getting my degree, but there doesn't seem to be any canonical metric by which to criticize deep, foundational preferences.
One of the most important aspects of our epistemology with regard to factual beliefs is that the set of beliefs a computationally unlimited agent should hold is uniquely determined by the evidence it has witnessed. This doesn't seem to be the case with preferences: if I have a single long-term goal, there's no proof it should be {live a long time} rather than {die soon}. Without a constraining external metric, there are many consistent sets of preferences, and the only criticism you can ultimately bring to bear is one of inconsistency.
What about beliefs being justified by non-beliefs? If you're a traditional foundationalist, you think everything is ultimately grounded in sense-experience, which we cannot reasonably doubt.
If a traditional foundationalist believes that beliefs are justified by sense-experience, he's a justificationalist. The argument in the OP works. How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?
...Also, what about externalism? This is one of the major elements of modern epistemology, as a response to such skeptical arguments.
ETA: As stated below, criticizing beliefs is trivial in principle, either they were arrived at with an approximation to Bayes' rule starting with a reasonable prior and then updated with actual observations, or they weren't. Subsequent conversation made it clear that criticizing behavior is also trivial in principle, since someone is either taking the action that they believe will best suit their preferences, or not. Finally, criticizing preferences became trivial too -- the relevant question is "Does/will agent X behave as though they have preferences Y", and that's a belief, so go back to Bayes' rule and a reasonable prior. So the entire issue that this post was meant to solve has evaporated, in my opinion. Here's the original article, in case anyone is still interested:
Pancritical rationalism is a fundamental value in Extropianism that has only been mentioned in passing on LessWrong. I think it deserves more attention here. It's an approach to epistemology – the question "How do we know what we know?" – that avoids the contradictions inherent in some of the alternative approaches.
The fundamental source document for it is William Bartley's The Retreat to Commitment. He describes three approaches to epistemology, along with the dissatisfying aspects of the two he rejects:
Read on for a discussion about emotional consequences and extending this to include preferences and behaviors as well as beliefs.
"Criticism" here basically means philosophical discussion. Keep in mind that "criticism" as a hostile verbal interaction is a typical cause of failed relationships. If you do nothing but criticize a person, they will eventually find it emotionally impossible to spend much time with you. If you want to keep your relationships and practice pancritical rationalism, be sure that the criticism that's part of pancritical rationalism is understood to be offered in a helpful way, not a hostile way, and that you're doing it with a consenting adult. In particular, it has to be clear to all participants that every available option will, in practice, have at least one valid criticism, so the goal is to choose something with criticisms you can accept, not to find something perfect.
We'll start by listing some typical criticisms of beliefs, and then move on to criticizing preferences and behaviors.
Criticizing beliefs is a special case in several ways. First, you can't judge the criticisms as true or false, since you haven't decided what to believe yet. Second, the process of criticizing beliefs is almost trivial in principle: apply Bayes' rule, starting with some reasonable prior. Neither of these special cases applies to criticizing preferences or behaviors, so pancritical rationalism provides an especially useful framework for discussing them.
Criticizing beliefs is not trivial in practice, since there are nonrational criticisms of belief, there is more than one reasonable prior, Bayes' rule can be computationally intractable, and in practice people have preexisting non-Bayesian belief strategies that they follow.
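One of those practical difficulties – that there is more than one reasonable prior – can be sketched in a few lines. The hypothesis names, priors, and likelihoods below are invented for illustration; the point is only that the same evidence, run through the same rule, yields different posteriors under different priors.

```python
def posterior(priors, likelihoods):
    """Bayes' rule over a finite hypothesis set: normalize prior * likelihood."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: joint[h] / total for h in joint}

# P(evidence | hypothesis), assumed for the sake of the example
likelihoods = {"H": 0.8, "not-H": 0.2}

# Two different "reasonable" priors over the same hypotheses
uniform = posterior({"H": 0.5, "not-H": 0.5}, likelihoods)
skeptical = posterior({"H": 0.1, "not-H": 0.9}, likelihoods)

print(uniform["H"])    # 0.8
print(skeptical["H"])  # about 0.31
```

Both agents updated correctly on the same evidence, yet they end up with substantially different credence in H, which is one reason criticizing beliefs remains nontrivial in practice.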
With that said, a number of possible criticisms of a belief come to mind:
The last two of these illustrate that the weight one gives to a criticism is subjectively determined. Those last two criticisms are true for many beliefs discussed here, and the last one is true for essentially every belief if you pick the right religious book.
Once you accept the idea that beliefs can be criticized, it's a small step from there to adopting a similar approach to preferences and behavior. Here are some plausible criticisms of a preference:
We can also criticize behavior in at least the following ways:
In all cases, if you're doing or preferring or believing something that has a valid criticism, the response does not necessarily have to be "don't do/prefer/believe that". The response might be "In light of the alternatives I know about and the criticisms of all available alternatives, I accept that".
Of course, another response might be "I don't have time to consider any of that right now", but in that case you are at a level of urgency where this article won't be directly useful to you. You'll have to get yourself straightened out when things are less urgent and make use of that preparation when things are urgent.
Assuming this post doesn't quickly get negative karma, a reasonable next step would be to put a list of criticisms of beliefs, preferences, and behaviors on a not-yet-created LessWrong pancritical rationalism Wiki page. Posting them in comments might also be worthwhile. If someone else could take the initiative to update the Wiki, it would be great. Otherwise I would like to get to it eventually, but that probably won't happen soon.
Question for the readers: Is criticizing a decision theory a useful separate category from the three listed above (beliefs, preferences, and behaviors)? If so, what criticisms are relevant?