ETA: As stated below, criticizing beliefs is trivial in principle: either they were arrived at by approximating Bayes' rule, starting with a reasonable prior and updating on actual observations, or they weren't. Subsequent conversation made it clear that criticizing behavior is also trivial in principle, since someone is either taking the action they believe will best serve their preferences, or not. Finally, criticizing preferences became trivial too: the relevant question is "Does/will agent X behave as though they have preferences Y?", and that's a belief, so go back to Bayes' rule and a reasonable prior. So the entire issue this post was meant to solve has evaporated, in my opinion. Here's the original article, in case anyone is still interested:

Pancritical rationalism is a fundamental value in Extropianism that has only been mentioned in passing on LessWrong. I think it deserves more attention here. It's an approach to epistemology (that is, the question of "How do we know what we know?") that avoids the contradictions inherent in some of the alternative approaches.

The fundamental source document for it is William Bartley's Retreat to Commitment. He describes three approaches to epistemology, along with the dissatisfying aspects of the first two:

  • Nihilism. Nothing matters, so it doesn't matter what you believe. This path is self-consistent, but it gives no guidance.
  • Justificationalism. Your belief is justified because it is a consequence of other beliefs. This path is self-contradictory. Eventually you'll go in circles trying to justify the other beliefs, or you'll find beliefs you can't justify. Justificationalism itself cannot be justified.
  • Pancritical rationalism. You have taken the available criticisms of the belief into account and still feel comfortable with the belief. This path gives guidance about what to believe, although it does not uniquely determine one's beliefs. Pancritical rationalism can itself be criticized, so it is self-consistent in that sense.

Read on for a discussion about emotional consequences and extending this to include preferences and behaviors as well as beliefs.

"Criticism" here basically means philosophical discussion. Keep in mind that "criticism" as a hostile verbal interaction is a typical cause of failed relationships. If you do nothing but criticize a person, the other person will eventually find it emotionally impossible to spend much time with you. If you want to keep your relationships and do pancritical rationalism, be sure that the criticism that's part of pancriticial rationalism is understood to be offered in a helpful way, not a hostile way, and that you're doing it with a consenting adult. In particular, it has to be clear to all participants that there every available option will, in practice, have at least one valid criticism, so the goal is to choose something with criticisms you can accept, not to find something perfect.

We'll start by listing some typical criticisms of beliefs, and then move on to criticizing preferences and behaviors.

Criticizing beliefs is a special case in several ways. First, you can't judge the criticisms as true or false, since you haven't decided what to believe yet. Second, the process of criticizing beliefs is almost trivial in principle: apply Bayes' rule, starting with some reasonable prior. Neither of these special cases applies to criticizing preferences or behaviors, so pancritical rationalism provides an especially useful framework for discussing them.

Criticizing beliefs is not trivial in practice, since there are nonrational criticisms of belief, there is more than one reasonable prior, Bayes' rule can be computationally intractable, and in practice people have preexisting non-Bayesian belief strategies that they follow.
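
To make "trivial in principle" concrete, here is a minimal sketch of criticizing a belief by Bayesian updating. The hypotheses, prior, and likelihood numbers are invented purely for illustration; this is not a canonical procedure, just the bare arithmetic.

```python
# Minimal sketch: "criticizing" a belief by updating it with Bayes' rule.
# The hypotheses, prior, and likelihoods below are invented for illustration.

def bayes_update(prior, likelihoods):
    """Return the posterior P(H | E) for each hypothesis H, given P(E | H)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Belief under criticism: "all swans are white".
prior = {"all swans white": 0.7, "some swans black": 0.3}
# Evidence: something that looks very much like a black swan is observed.
likelihoods = {"all swans white": 0.01, "some swans black": 0.5}  # P(E | H)

print(bayes_update(prior, likelihoods))
# The posterior on "all swans white" collapses; the criticism has been weighed.
```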

With that said, a number of possible criticisms of a belief come to mind:

  • Perhaps it contains self-contradictions.
  • Perhaps it cannot be arrived at by starting with a reasonably unbiased prior and doing updates according to Bayes' rule. (As a special case, perhaps it is contradicted by available evidence.)
  • Perhaps it is so structured that it is invulnerable to being changed after it is adopted, regardless of the evidence observed.
  • Perhaps it does not make predictions about the world.
  • Perhaps it is really a preference or a behavior. ("I believe in free speech" or "I believe I'll have another drink.")
  • Perhaps it is unpopular.
  • Perhaps it is inconsistent with some ancient religious book or another.

The last two of these illustrate that the weight one gives to a criticism is subjectively determined. Those last two criticisms are true for many beliefs discussed here, and the last one is true for essentially every belief if you pick the right religious book.

Once you accept the idea that beliefs can be criticized, it's a small step from there to adopting a similar approach to preferences and behavior. Here are some plausible criticisms of a preference:

  • Perhaps it is not consistent with your beliefs about cause-and-effect. That is, the preference prefers X over Y and also prefers the expected consequences of Y over the expected consequences of X. (A sketch of this check follows the list.)
  • Perhaps it cannot be used to actually decide what to do. There are several subcases here:
    • Perhaps it has mathematical properties that break some decision theories, such as an unbounded utility. Concerns about actual known breakage or conjectured breakage are two different criticisms.
    • Perhaps it is defined in such a way that what you prefer depends on things you cannot know.
    • Perhaps it gives little guidance, that is, it considers many pairs of outcomes that you expect to actually encounter as equally preferable.
  • Perhaps the stated preference is ineffective or counterproductive as a social signal. There are several subcases here:
    • Perhaps it is psychologically implausible. That is, perhaps it is so unlikely that a human would hold such a preference that stating the preference to others will lead the others to reasonably conclude that you're a liar or confused, rather than leading them to conclude that you have the given preference.
    • Perhaps it does not help others to predict your behavior. For example, it may require complicated decisions based on debatable guesses about the remote consequences of one's actions.
    • Perhaps it is not something that anybody else would want to cooperate with.
    • Perhaps it is at cross-purposes with the specific people you want to signal to.
  • Perhaps the preference does not include a strong enough preference for staying alive, so one would expect it to select itself out given enough time and selection pressure. ("Selection" here might mean biological evolution or some sort of technological process, take your pick based on your beliefs.)
  • Perhaps the preference does not include preferring that you accumulate enough power to actually do anything important.
  • If you believe in objective morality, perhaps the preference is inconsistent with objective morality. Someone who does believe in objective morality should fill in the details here.
  • Perhaps a preference is likely to have problems because it is held by only a non-controlling minority of the person's mind. This can happen in several ways:
    • Perhaps a preference is likely to be self-deception because it is being claimed only because of a philosophical position, and not as a consequence of introspection or generalization from observed behavior.
    • Perhaps a preference is likely to be self-deception because it is being claimed only because of introspection, and we expect introspection to yield socially convenient lies.
    • Perhaps a claimed preference is likely to be poorly thought out because it arose nonverbally and has not been reflected upon.
  • Perhaps a preference is an overt deception, that is, the person claiming it knows they do not hold it. This criticism can be used by a person against themselves if they know they are lying and want clarity, or used by others against a person if the person is a poor liar.
  • Perhaps a preference has short-term terminal values that aren't also instrumental values.
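
As promised above, here is a hedged sketch of the first criticism in the list: a stated preference that contradicts your own beliefs about cause and effect. The option names and expected utilities are invented for illustration; the check itself is just a comparison.

```python
# Hypothetical sketch: you say you prefer X over Y, but your own causal model
# expects Y's consequences to be better. Options and numbers are invented.

stated_preference = ("X", "Y")  # claims to prefer X over Y

# Expected utility of each option's consequences, by the person's own beliefs.
expected_utility = {"X": 2.0, "Y": 5.0}

def consistency_criticism(preferred, dispreferred, eu):
    """Return a criticism if the stated preference contradicts the person's
    own expectations about consequences, otherwise None."""
    if eu[preferred] < eu[dispreferred]:
        return (f"You prefer {preferred} over {dispreferred}, yet by your own "
                f"beliefs you expect {dispreferred}'s consequences to be better.")
    return None

print(consistency_criticism(*stated_preference, expected_utility))
```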

We can also criticize behavior in at least the following ways:

  • Perhaps the behavior is not consistent with any reasonable guess about your preferences.
  • Perhaps the behavior is not consistent with your actual statements about your preferences.
  • Perhaps the behavior does not promote personal survival.
  • Perhaps the behavior is undesired by others, that is, others would prefer that you not do it.
  • Perhaps you did not take into account your own preferences about the outcome for others at the time you did the behavior.
  • Perhaps the behavior leads to active conflict with others, that is, in addition to it being against the preferences of others, it motivates them to act against you.
  • Perhaps the behavior will lead others to exploit you.
  • Perhaps you didn't take into account some of the important consequences of the behavior when you chose it.

In all cases, if you're doing or preferring or believing something that has a valid criticism, the response does not necessarily have to be "don't do/prefer/believe that". The response might be "In light of the alternatives I know about and the criticisms of all available alternatives, I accept that".

Of course, another response might be "I don't have time to consider any of that right now", but in that case you are at a level of urgency where this article won't be directly useful to you. You'll have to get yourself straightened out when things are less urgent and make use of that preparation when things are urgent.

Assuming this post doesn't quickly get negative karma, a reasonable next step would be to put a list of criticisms of beliefs, preferences, and behaviors on a not-yet-created LessWrong pancritical rationalism Wiki page. Posting them in comments might also be worthwhile. If someone else could take the initiative to update the Wiki, it would be great. Otherwise I would like to get to it eventually, but that probably won't happen soon.

Question for the readers: Is criticizing a decision theory a useful separate category from the three listed above (beliefs, preferences, and behaviors)? If so, what criticisms are relevant?

37 comments

If you're suggesting using this as a basis for knowledge etc. then it seems that Bayes already has it covered.

If you're suggesting this as a general method for epistemology in a non-foundational sense, then it seems like everyone already knows this. "Science holds its own ideas up to rigorous scrutiny and testing etc." It's a wishy-washy suggestion with no technical support. Contrast with Technical Explanation... .

I’d first like to congratulate you on a much more reasonable presentation of Popperian ideas than the recent trolling.

Justificationalism. Your belief is justified because it is a consequence of other beliefs. This path is self-contradictory. Eventually you'll go in circles trying to justify the other beliefs, or you'll find beliefs you can't justify. Justificationalism itself cannot be justified.

What about beliefs being justified by non-beliefs? If you're a traditional foundationalist, you think everything is ultimately grounded in sense-experience, about which we cannot reasonably doubt.

Also, what about externalism? This is one of the major elements of modern epistemology, as a response to such skeptical arguments.

I don't mean to imply that either of these is correct, but it seems that if one is going to attempt to use disjunctive syllogism to argue for anti-justificationism, you ought to be sure you've partitioned the space of reasonable theories.

Perhaps it is so structured that it is invulnerable to being changed after it is adopted, regardless of the evidence observed.

This example seems anomalous. If there exists some H such that, if P(H) > 0.9, you lose the ability to choose P(H), you might want to postpone believing in it for prudent reasons. But these don’t really bear on what the epistemically rational level of belief is (Assuming remaining epistemically rational is not part of formal epistemic rationality).

Furthermore, if you adopted a policy of never raising P(H) above 0.9, it’d be just like you were stuck with P(H) < 0.9 !

Once you accept the idea that beliefs can be criticized, it's a small step from there to adopting a similar approach to preferences and behavior. Here are some plausible criticisms of a preference:

It seems that there is a big difference between the two cases. We can criticize beliefs because we have a standard by which to measure them – reality, in the same way that we can criticize maps if they're not very accurate representations of the territory. But it's not at all clear that we have anything analogous with preferences. True, you could criticize my short term preference of going to lectures as ineffective towards my long-term goal of getting my degree, but there doesn't seem to be any canonical metric by which to criticize deep, foundational preferences.

One of the most important aspects of our epistemology with regard to factual beliefs is that the set of beliefs a computationally unlimited agent should hold is uniquely determined by the evidence it has witnessed. However, this doesn't seem to be the case with preferences: if I have a single long-term preference, there's no proof it should be {live a long time} rather than {die soon}. Without a constraining external metric, there are many consistent sets, and the only criticism you can ultimately bring to bear is one of inconsistency.

One of the most important aspects of our epistemology with regard to factual beliefs is that the set of beliefs a computationally unlimited agent should hold is uniquely determined by the evidence it has witnessed.

I don't think this is true. Aumann's agreement theorem shows that this is true in the limiting case assuming an infinite string of evidence. However, this isn't the case for any finite amount of evidence. Indeed, simply choose different versions of the Solomonoff prior: different formulations of Turing machines change the Kolmogorov complexity by at most a constant, but that still changes the Solomonoff prior. It just means that the two sets of priors have to look similar overall.

Would a similar statement couched in terms of limits be true?

As an agent's computational ability increases, its beliefs should converge with those of similar agents regardless of their priors.

Would a similar statement couched in terms of limits be true?

As an agent's computational ability increases, its beliefs should converge with those of similar agents regardless of their priors.

The limit you proposed doesn't help. One's beliefs after applying Bayes' rule are determined by the prior and by the evidence. We're talking about a situation where the evidence is the same and finite, and the priors differ. Having more compute power doesn't enter into it.
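
A toy illustration of that point, with invented numbers: two agents see exactly the same finite evidence, differ only in their priors, and end up with different posteriors. Extra computing power changes nothing here.

```python
# Invented numbers: same finite evidence, different priors, different posteriors.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule for a single hypothesis H."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

likelihoods = (0.8, 0.3)  # P(E | H), P(E | not H): the shared, finite evidence

for prior_h in (0.2, 0.6):  # two agents with different priors
    print(prior_h, round(posterior(prior_h, *likelihoods), 2))
# Prints 0.2 -> 0.4 and 0.6 -> 0.8: the disagreement survives the same update.
```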

What about beliefs being justified by non-beliefs? If you're a traditional foundationalist, you think everything is ultimately grounded in sense-experience, about which we cannot reasonably doubt.

If a traditional foundationalist believes that beliefs are justified by sense-experience, he's a justificationalist. The argument in the OP works. How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?

Also, what about externalism? This is one of the major elements of modern epistemology, as a response to such skeptical arguments.

I had to look it up. It is apparently the position that the mind is a result of both what is going on inside the subject and outside the subject. Some of them seem to be concerned about what beliefs mean, and others seem to carefully avoid using the word "belief". In the OP I was more interested in whether the beliefs accurately predict sensory experience. So far as I can tell, externalism says we don't have a mind that can be considered as a separate object, so we don't know things, so I expect it to have little to say about how we know what we know. Can you explain why you brought it up?

I don't mean to imply that either of these is correct, but it seems that if one is going to attempt to use disjunctive syllogism to argue for anti-justificationism, you ought to be sure you've partitioned the space of reasonable theories.

I don't see any way to be sure of that. Maybe some teenage boy sitting alone in his bedroom in Iowa figured out something new half an hour ago; I would have no way to know. Given the text above, do you think there are alternatives that are not covered?

Perhaps it is so structured that it is invulnerable to being changed after it is adopted, regardless of the evidence observed.

This example seems anomalous. If there exists some H such that, if P(H) > 0.9, you lose the ability to choose P(H), you might want to postpone believing in it for prudent reasons. But these don’t really bear on what the epistemically rational level of belief is (Assuming remaining epistemically rational is not part of formal epistemic rationality).

Furthermore, if you adopted a policy of never raising P(H) above 0.9, it’d be just like you were stuck with P(H) < 0.9 !

The point is that if a belief will prevent you from considering alternatives, that is a true and relevant statement about the belief that you should know when choosing whether to adopt it. The point is not that you shouldn't adopt it. Bayes' rule is probably one of those beliefs, for example.

Without a constraining external metric, there are many consistent sets [of preferences], and the only criticism you can ultimately bring to bear is one of inconsistency.

I presently believe there are many consistent sets of preferences, and maybe you do too. If that's true, we should find a way to live with it, and the OP is proposing such a way.

I don't know what the word "ultimately" means there. If I leave it out, your statement is obviously false -- I listed a bunch of criticisms of preferences in the OP. What did you mean?

It is apparently the position that the mind is a result of both what is going on inside the subject and outside the subject.

Wrong Externalism

Maybe some teenage boy sitting alone in his bedroom in Iowa figured out something new half an hour ago; I would have no way to know.

The two examples I gave are well known and well studied theories, held by large numbers of philosophers. Indeed, more philosophers accept Externalism than any other theory of justification. Any essay that argues for a position on the basis of the failure of some alternatives, without considering the most popular alternatives, is going to be unconvincing. If you were a biologist presenting a new theory of evolution, you would be forgiven for not comparing it to Intelligent Design; however, omitting to compare it to NeoDarwinism would be a totally different issue. All you've done is present two straw man theories, and make pancritical rationalism look good in comparison.

What did you mean? (by 'ultimately')

That all the criticisms you listed can be reduced to criticisms of inconsistency – generally by appending the phrase ‘and you prefer this not to happen’ to them.

How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?

I don't know what exactly "justify" is supposed to mean, but I'll interpret it as "show to be useful for helping me win." In that case, it's simply that certain types of sense-experience seem to have been a reliable guide for my actions in the past, for helping me win. That's all.

To think of it in terms of assumptions and conclusions is to stay in the world of true/false or justified/unjustified, where we can only go in circles because we are putting the cart before the horse. The verbal concepts of "true" and "justified" probably originated as a way to help people win, not as ends to be pursued for their own sake. But since they were almost always correlated with winning, they became ends pursued for their own sake - essential ones! In the end, if you dissolve "truth" it just ends up meaning something like "seemingly reliable guidepost for my actions."

If a traditional foundationalist believes that beliefs are justified by sense-experience, he's a justificationalist. The argument in the OP works. How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?

If he believes beliefs are only justified by experience, that could be a problem. Otherwise, he could use reductio, analysis, abduction, all sorts of things.

What about beliefs being justified by non-beliefs? If you're a traditional foundationalist, you think everything is ultimately grounded in sense-experience, about which we cannot reasonably doubt.

Yes, Bartley's justificationism munges together two different ideas:

1) beliefs can only be justified by other beliefs
2) beliefs can be positively supported and not just refuted/criticised.

The attack on "justificationism" is actually a problem for Popperiansim, since a classic refutation is a single observation such as a Black Swan. However, if my seeing one black swan doesn't justify my belief that there is at least one black swan, how can I refute "all swans are white"?

However, if my seeing one black swan doesn't justify my belief that there is at least one black swan, how can I refute "all swans are white"?

Refuting something is justifying that it is false. The point of the OP is that you can't justify anything, so it's claiming that you can't refute "all swans are white". A black swan is simply a criticism of the statement "all swans are white". You still have a choice -- you can see the black swan and reject "all swans are white", or you can quibble with the evidence in a large number of ways which I'm sure you know of too and keep on believing "all swans are white". People really do that; searching Google for "Rapture schedule" will pull up a prominent and current example.

Why not just phrase it in terms of utility? "Justification" can mean too many different things.

Seeing a black swan diminishes (and for certain applications, destroys) the usefulness of the belief that all swans are white. This seems a lot simpler.

Putting it in terms of beliefs paying rent in anticipated experiences, the belief "all swans are white" told me to anticipate that if I knew there was a black animal perched on my shoulder it could not be a swan. Now that belief isn't as reliable of a guidepost. If black swans are really rare I could probably get by with it for most applications and still use it to win at life most of the time, but in some cases it will steer me wrong - that is, cause me to lose.

So can't this all be better phrased in more established LW terms?

I think you've just reinvented pragmatism.

ETA: Ugh, that Wikipedia page is remarkably uninformative... anyone have a better link?

Refuting something is justifying that it is false. The point of the OP is that you can't justify anything, so it's claiming that you can't refute "all swans are white". A black swan is simply a criticism of the statement "all swans are white".

Fine. If criticism is just a loose sort of refutation, then I'll invent something that is just a loose kind of inductive support, let's say schmitticism, and then I'll claim that every time I see a white swan, that schmitticises the claim that all swans are white, and Popper can't say schmitticism doesn't work because there are no particular well-defined standards or mechanisms of schmitticism for his arguments to latch onto.

Assuming this post doesn't quickly get negative karma, a reasonable next step would be to put a list of criticisms of beliefs, preferences, and behaviors on a not-yet-created LessWrong pancritical rationalism Wiki page.

A reasonable next step... after what? Towards what end? You're using the language of travel, but never made explicit a destination, or a motivation for it. A "list of things that can go wrong" by itself sounds not very interesting - unless you mean something like the lists of cognitive biases?


A reasonable next step toward having a coherent explanation of pancritical rationalism would be to put it on the web site.

Having a known list of criticisms of preferences and behaviors would be, IMO, a reasonable next step to getting clarity on what we prefer and how we want to behave. As things stand now we seem to keep rediscovering the relevant criticisms each time something new comes up.

A reasonable next step toward having a coherent explanation of pancritical rationalism would be to put it on the web site.

Having a known list of criticisms of preferences and behaviors would be, IMO, a reasonable next step to getting clarity on what we prefer and how we want to behave. As things stand now we seem to keep rediscovering the relevant criticisms each time something new comes up.

Also, this seems to have similar infinite regress problems. Suppose you have to take into account a couple of criticisms of some theory. How do you evaluate them? Well, you look at criticisms of them.

And criticisms of the criticisms of the criticisms.

And so on.

Yes. That is unavoidable. If you're looking for a justification, you didn't find it. If you're looking for a reasonable basis for making a decision that must ultimately be subjective, the criticisms help.

Your criticisms of preferences seem to be in terms of preferences. Nobody would be able to apply them to themselves, because they would not make sense if your preferences are already different. For example, it doesn't make sense to say that a preference should be discounted because it doesn't value your life, since not valuing your life is the subjectively right thing to do if you prefer it. The exceptions I noticed in your list are "maybe it's not actually your preference" and "maybe it conflicts with another of your preferences."

On behavior, we already have ways of getting behavior from beliefs and preferences - a consistent pattern of behavior is equivalent to holding certain axioms and/or wanting certain results - for example, rational behavior is preference-maximizing. To ignore this powerful tool and fall back on subjective criticism seems like a bad choice.

Your criticisms of preferences seem to be in terms of preferences. Nobody would be able to apply them to themselves, because they would not make sense if your preferences are already different.

I agree, if we could start the process with the subject's true preferences, and the subject were rational. Instead it seems we have to start with the results from introspection, which might be wrong. I'm trying to understand what to do about that. I think people should take the possibility of incorrect introspection seriously.

On behavior, we already have ways of getting behavior from beliefs and preferences - a consistent pattern of behavior is equivalent to holding certain axioms and/or wanting certain results - for example, rational behavior is preference-maximizing. To ignore this powerful tool and fall back on subjective criticism seems like a bad choice.

I agree with you there. Several of the criticisms of behavior I listed were about behavior not matching stated or inferred preferences, and perhaps in principle that's all we need, just as the criticisms of belief can be simplified in principle down to Bayes' rule and a prior. In practice, people sometimes do a poor job of enacting their preferences, and IMO subjective criticism helps there.

I agree, if we could start the process with the subject's true preferences, and the subject were rational. Instead it seems we have to start with the results from introspection, which might be wrong. I'm trying to understand what to do about that. I think people should take the possibility of incorrect introspection seriously.

Then you're just dealing with beliefs about preferences, which are a kind of beliefs, so this reduces to PCR for beliefs.

Then you're just dealing with beliefs about preferences, which are a kind of beliefs, so this reduces to PCR for beliefs.

You're right there. And PCR for beliefs is trivial in principle, just use Bayes' rule and the Universal Prior based on the programming language of your choice. Nobody seems to be good enough at actually evaluating that prior to care much about which programming language you use to represent the hypotheses yet.

So if someone introspects and says they will make choices as though they have unbounded utility, and the math makes it seem impossible for them to really do that, then I can reply "I don't believe you" and move on, just as though they had professed believing in an invisible dragon in their garage.

That's a really simple solution to get rid of a large pile of garbage, contingent on the math working out right. Thanks. I'll pay more attention to the math.

ETA: I edited the OP to point to this comment. This was an excellent outcome from the conversation, by the way. LessWrong works.

(obligatory xkcd reference)

LessWrong: It works, bitches.

In all cases, if you're doing or preferring or believing something that has a valid criticism, the response does not necessarily have to be "don't do/prefer/believe that". The response might be "In light of the alternatives I know about and the criticisms of all available alternatives, I accept that".

Of course, another response might be "I don't have time to consider any of that right now", but in that case you are at a level of urgency where this article won't be directly useful to you. You'll have to get yourself straightened out when things are less urgent and make use of that preparation when things are urgent.

A third response: I already have a process for correcting my beliefs, i.e. applying the things I learnt on LessWrong, and am not particularly interested in learning a whole new school of thought that has its own vocabulary and may or may not be isomorphic to what I'm already doing.

There are probably worthwhile things to learn in Pancritical Rationalism, but I'd much prefer if it was presented by comparing to what we've already talked about on LessWrong (what is different, what is new, what is the same thing but phrased a bit differently), rather than leaving it to us to figure out which part of this map to what we're already doing and how.

A third response: I already have a process for correcting my beliefs, i.e. applying the things I learnt on LessWrong, and am not particularly interested in learning a whole new school of thought that has its own vocabulary and may or may not be isomorphic to what I'm already doing.

Either you're missing my point, or there's something good I didn't read on LessWrong.

What, if anything, did LessWrong teach you about how to examine your preferences and your behavior?

It seems like this is just a special case of Eliezer's answer to self-reference, which is just to do the best you can to believe true things, according to the beliefs you have, until you have nothing next to do, at which point you stop and hope that it is correct.

which is in some ways nothing and in some ways everything.

The title of the article is "Pancritical rationalism can apply to preferences and behavior" and then people make comments talking solely about beliefs, thus ignoring my main point. I'd like to interpret that as an indication that I should make specific improvements in the article, but right now I don't know what improvement should be made. I am open to advice.

Once you accept the idea that beliefs can be criticized, it's a small step from there to adopting a similar approach to preferences and behavior. Here are some plausible criticisms of a preference:

does not seem to support your claim as to what your main point is. So perhaps we were justified in interpreting it that way?

In general you seem to argue for pancritical rationalism with regards to beliefs, but for preferences and behaviors merely provide a list of criticisms. One can't argue with a list of possible criticisms - I didn't look at every single one but I'm sure they all are, in fact, possible criticisms. We believe that pancritical rationalism is not a useful concept - something that is most clearly argued in pancritical rationalism's home turf, belief.

If you have an argument why Pancritical rationalism applies better in preferences and behaviors than beliefs, I'm all ears. I have not seen such an argument though.

If you have an argument why Pancritical rationalism applies better in preferences and behaviors than beliefs, I'm all ears.

We start with a preference or a belief or a behavior (or something else), so we never have a choice between doing pancritical rationalism with a preference or doing pancritical rationalism with a belief. Comparing the two is therefore not relevant. What is relevant is whether pancritical rationalism with preferences is worthwhile.

Pancritical rationalism is nontrivial for preferences because we presently have multiple possible criticisms, and none of them conclusively prove that something is wrong with the preference. So the choices I can see are:

  • We could choose not to talk about preferences at all. Preferences are important, so that's not good.

  • We could talk about preferences without understanding the nature of the conversation. The objective morality bullshit that has been argued a few times seems to be a special case of this. I wouldn't want to participate in that again.

  • We can do pancritical rationalism with preferences.

I would really like a better alternative, but I do not see one.

For beliefs and behaviors, I agree at this point that PCR doesn't give much leverage. We can trivialize PCR for beliefs down to Bayes' rule and choosing a prior. We can trivialize PCR for behaviors down to the rule of choosing the behavior that you believe will best satisfy your preferences. If you don't want to assume rationality and unbounded computational resources, there might be more criticisms of belief and behavior that are worthwhile, but it's a small win at best and probably not worth talking about given that people don't seem to be getting the main point.
