If you have an argument for why pancritical rationalism applies better to preferences and behaviors than to beliefs, I'm all ears.
We start with a preference or a belief or a behavior (or something else), so we never have a choice between doing pancritical rationalism with a preference and doing pancritical rationalism with a belief. Comparing the two is therefore not relevant. What is relevant is whether pancritical rationalism with preferences is worthwhile.
Pancritical rationalism is nontrivial for preferences because we presently have multiple possible criticisms, and none of them conclusively proves that something is wrong with the preference. So the choices I can see are:
We could choose not to talk about preferences at all. Preferences are important, so that's not good.
We could talk about preferences without understanding the nature of the conversation. The objective morality bullshit that has been argued a few times seems to be a special case of this. I wouldn't want to participate in that again.
We can do pancritical rationalism with preferences.
I would really like a better alternative, but I do not see one.
For beliefs and behaviors, I agree at this point that PCR doesn't give much leverage. We can trivialize PCR for beliefs down to Bayes' rule and choosing a prior. We can trivialize PCR for behaviors down to the rule of choosing the behavior that you believe will best satisfy your preferences. If you don't want to assume rationality and unbounded computational resources, there might be more worthwhile criticisms of belief and behavior, but it's a small win at best and probably not worth talking about given that people don't seem to be getting the main point.
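The trivialized rule for behaviors, choosing the behavior you believe will best satisfy your preferences, can be sketched as a small expected-utility calculation. All the actions, outcomes, and numbers below are made up purely for illustration:

```python
# Hypothetical sketch: pick the action with the highest expected utility,
# i.e. the behavior you believe will best satisfy your preferences.

# Beliefs: probability of each outcome given each action (made-up numbers).
beliefs = {
    "carry_umbrella": {"stay_dry": 0.95, "get_wet": 0.05},
    "leave_umbrella": {"stay_dry": 0.60, "get_wet": 0.40},
}

# Preferences: utility assigned to each outcome (also made up).
preferences = {"stay_dry": 1.0, "get_wet": -2.0}

def expected_utility(action):
    # Weight each outcome's utility by how likely you believe it is.
    return sum(p * preferences[outcome]
               for outcome, p in beliefs[action].items())

best_action = max(beliefs, key=expected_utility)
print(best_action)  # carry_umbrella
```

The point of the sketch is that once the beliefs and the preferences are fixed, choosing the behavior is mechanical; any real leverage from criticism has to come from criticizing the beliefs or the preferences themselves.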
ETA: As stated below, criticizing beliefs is trivial in principle: either they were arrived at by an approximation to Bayes' rule, starting with a reasonable prior and updating on actual observations, or they weren't. Subsequent conversation made it clear that criticizing behavior is also trivial in principle, since someone is either taking the action that they believe will best suit their preferences, or not. Finally, criticizing preferences became trivial too -- the relevant question is "Does/will agent X behave as though they have preferences Y?", and that's a belief, so go back to Bayes' rule and a reasonable prior. So the entire issue that this post was meant to solve has evaporated, in my opinion. Here's the original article, in case anyone is still interested:
Pancritical rationalism is a fundamental value in Extropianism that has only been mentioned in passing on LessWrong. I think it deserves more attention here. It's an approach to epistemology, that is, the question of "How do we know what we know?", that avoids the contradictions inherent in some of the alternative approaches.
The fundamental source document for it is William Bartley's The Retreat to Commitment. He describes three approaches to epistemology, along with the dissatisfying aspects of the two he rejects:
Read on for a discussion about emotional consequences and extending this to include preferences and behaviors as well as beliefs.
"Criticism" here basically means philosophical discussion. Keep in mind that "criticism" as a hostile verbal interaction is a typical cause of failed relationships. If you do nothing but criticize a person, the other person will eventually find it emotionally impossible to spend much time with you. If you want to keep your relationships and do pancritical rationalism, be sure that the criticism that's part of pancritical rationalism is understood to be offered in a helpful way, not a hostile way, and that you're doing it with a consenting adult. In particular, it has to be clear to all participants that every available option will, in practice, have at least one valid criticism, so the goal is to choose something with criticisms you can accept, not to find something perfect.
We'll start by listing some typical criticisms of beliefs, and then move on to criticizing preferences and behaviors.
Criticizing beliefs is a special case in several ways. First, you can't judge the criticisms as true or false, since you haven't decided what to believe yet. Second, the process of criticizing beliefs is almost trivial in principle: apply Bayes' rule, starting with some reasonable prior. Neither of these special cases applies to criticizing preferences or behaviors, so pancritical rationalism provides an especially useful framework for discussing them.
Criticizing beliefs is not trivial in practice, since there are nonrational criticisms of belief, there is more than one reasonable prior, Bayes' rule can be computationally intractable, and in practice people have preexisting non-Bayesian belief strategies that they follow.
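To make the "trivial in principle" recipe concrete, here is a minimal sketch of Bayesian updating: start from a prior over hypotheses, then update on each observation with Bayes' rule. The hypotheses, likelihoods, and observations are all invented for the example:

```python
# Hypothetical sketch: start with a reasonable prior over hypotheses,
# then apply Bayes' rule once per observation.

priors = {"coin_fair": 0.5, "coin_biased_heads": 0.5}  # a reasonable prior

likelihood = {  # P(heads | hypothesis), made-up numbers
    "coin_fair": 0.5,
    "coin_biased_heads": 0.9,
}

def update(beliefs, observation):
    """One application of Bayes' rule for a single coin flip."""
    posterior = {}
    for h, prior in beliefs.items():
        p_obs = likelihood[h] if observation == "heads" else 1 - likelihood[h]
        posterior[h] = prior * p_obs  # unnormalized posterior
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}  # normalize

beliefs = priors
for obs in ["heads", "heads", "tails"]:
    beliefs = update(beliefs, obs)
print(beliefs)
```

Even this toy case hints at the practical difficulties mentioned above: someone else might start from a different prior over the two hypotheses, and with many hypotheses or observations the bookkeeping quickly becomes intractable.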
With that said, a number of possible criticisms of a belief come to mind:
The last two of these illustrate that the weight one gives to a criticism is subjectively determined. Those last two criticisms are true for many beliefs discussed here, and the last one is true for essentially every belief if you pick the right religious book.
Once you accept the idea that beliefs can be criticized, it's a small step from there to adopting a similar approach to preferences and behavior. Here are some plausible criticisms of a preference:
We can also criticize behavior in at least the following ways:
In all cases, if you're doing or preferring or believing something that has a valid criticism, the response need not be "don't do/prefer/believe that". The response might be "In light of the alternatives I know about and the criticisms of all available alternatives, I accept that".
Of course, another response might be "I don't have time to consider any of that right now", but in that case you are at a level of urgency where this article won't be directly useful to you. You'll have to get yourself straightened out when things are less urgent and make use of that preparation when things are urgent.
Assuming this post doesn't quickly get negative karma, a reasonable next step would be to put a list of criticisms of beliefs, preferences, and behaviors on a not-yet-created LessWrong pancritical rationalism Wiki page. Posting them in comments might also be worthwhile. If someone else could take the initiative to update the Wiki, it would be great. Otherwise I would like to get to it eventually, but that probably won't happen soon.
Question for the readers: Is criticizing a decision theory a useful separate category from the three listed above (beliefs, preferences, and behaviors)? If so, what criticisms are relevant?