I have just rediscovered on my hard drive an article by Max Albert that I never got around to reading and that might interest others on Less Wrong. You can find the article here. It argues against Bayesianism and for Critical Rationalism (of Karl Popper fame).
Abstract:
Economists claim that principles of rationality are normative principles. Nevertheless, they go on to explain why it is in a person's own interest to be rational. If this were true, being rational itself would be a means to an end, and rationality could be interpreted in a non-normative or naturalistic way. The alternative is not attractive: if the only argument in favor of principles of rationality were their intrinsic appeal, a commitment to rationality would be irrational, making the notion of rationality self-defeating. A comprehensive conception of rationality should recommend itself: it should be rational to be rational. Moreover, since rational action requires rational beliefs concerning means-ends relations, a naturalistic conception of rationality has to cover rational belief formation, including the belief that it is rational to be rational. The paper considers four conceptions of rationality and asks whether they can deliver the goods: Bayesianism, perfect rationality (just in case it differs from Bayesianism), ecological rationality (as a version of bounded rationality), and critical rationality, the conception of rationality characterizing critical rationalism.
Any thoughts?
Consequentialism is not in the index.
Decision rule is, a little bit.
I don't think this book contains a proof mentioning consequentialism. Do you disagree? If so, can you give a page or section?
It looks like what they are doing is defining a decision rule in a special way. So, by definition, it has to be a mathematical object to do with probability. After that, I'm sure it's rather easy to prove that you should use Bayes' theorem rather than some other math.
But none of that is about decision rules in the sense of methods human beings use for making decisions. It's just that if you define them in a particular way -- so that Bayes' is basically the only option -- then you can prove it.
See e.g. page 19, where they give a definition. A Popperian approach to making decisions simply wouldn't fit within the scope of their definition, so the conclusion of any proof like you claimed existed (which I haven't found in this book) would not apply to Popperian ideas.
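To make the point concrete, here is a minimal sketch of what a "decision rule" in the formal, statistical-decision-theory sense looks like. This is not from the book; the hypotheses, priors, likelihoods, and losses are all made up for illustration. The point is that the whole construction is a mapping from probabilities and losses to an action -- a Popperian procedure of conjecture and criticism is simply not an object of this type:

```python
# Minimal sketch of a formal (statistical) decision rule.
# All numbers here are invented purely for illustration.

priors = {"H1": 0.5, "H2": 0.5}        # prior probabilities of two hypotheses
likelihood = {"H1": 0.8, "H2": 0.3}    # P(data | hypothesis) for some observed data

# Bayes' theorem: posterior is proportional to prior * likelihood.
unnormalized = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

# A "decision rule" in this sense maps the data to the action that
# minimizes posterior expected loss -- by construction a probabilistic object.
loss = {("act_a", "H1"): 0.0, ("act_a", "H2"): 1.0,
        ("act_b", "H1"): 1.0, ("act_b", "H2"): 0.0}
actions = ["act_a", "act_b"]
expected_loss = {a: sum(posterior[h] * loss[(a, h)] for h in posterior)
                 for a in actions}
best = min(expected_loss, key=expected_loss.get)
print(posterior)   # posterior puts about 0.73 on H1, 0.27 on H2
print(best)        # 'act_a' minimizes posterior expected loss here
```

Any proof that Bayes-optimal rules dominate is a proof about mappings of this kind, under these definitions -- which is exactly why its scope doesn't extend to methods that aren't formulated as such mappings.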
Maybe there is a lesson here about believing stuff is proven when you haven't seen the proof, listening to hearsay about what books contain, and trying to apply proofs you aren't familiar with (they often have limits on scope).
In what way would the Popperian approach fail to fit the decision rule approach on page 19 of Bickel and Doksum?