A good nutshell description of the type of Bayesianism that many LWers think correct is objective Bayesianism with critical rationalism-like underpinnings. Where recursive justification hits bottom is particularly relevant. On my cursory skim, Albert only seems to be addressing "subjective" Bayesianism which allows for any choice of prior.
It seems to argue that the problem of the priors does Bayesianism in :-(
Popper seems outdated. Rejecting induction completely is not very realistic.
Would you agree that this is a bit condescending and you're basically assuming in advance that you know more than me?
I actually have read about it and disagree with it on purpose, not out of ignorance.
I apologise for this, but I really don't see how anyone could go through those studies without losing all faith in human intuition.
I don't suppose the proof is online anywhere (I can access major article databases), or that you could give it or an outline?
The text can be found online. My browser (Chrome) wouldn't open the files but you may have more luck.
BTW I wonder why the proof takes 2 chapters. Proofs are normally fairly short things. And, well, even if it was 100 pages of straight math I don't see why you'd break it into separate chapters.
Part of the reason for length is that probability theory has a number of axioms and he has to prove them all. The reason for the two chapter split is that the first chapter is about explaining what he wants to do, why he wants to do it, and laying out his desiderata. It also contains a few digressions in case the reader isn't familiar with one or more of the prerequisites for understanding it (propositional logic for example). All of the actual maths is in the second chapter.
No, I understood that. And that is authoritarian with regard to your own thoughts.
I agree with the explicit meaning of this statement, but you are sneaking in connotations. Let us look more closely at what 'authoritarian' means.
You probably mean it in the sense of centralised as opposed to decentralised control, and in that sense I will bite the bullet and say that thinking should be authoritarian.
However, the word has a number of negative connotations: corruption, lack of respect for human rights, and massive bureaucracy that stifles innovation, to name a few. None of those apply to my thinking process, so even though the term may be technically correct it is somewhat intellectually dishonest to use it; something more value-neutral, like 'centralised control', might be better.
Regarding Popper, you say you don't agree with the common criticisms of him. OK. Great. So, what are your criticisms? You didn't say.
I will confess that I am not familiar with the whole of Popper's viewpoint. I have never read anything written by him although after this conversation I am planning to.
Therefore I do not know whether I broadly agree or disagree with him. I did not come here to attack him; originally I was just responding to a criticism of yours that Bayesianism fails in a certain situation.
To some extent I think the approach with conjectures and criticisms may be correct, at least as a description of how thinking must get off the ground. Can you be a Popperian and conjecture Bayesianism?
The point that I do disagree with is the proposed asymmetry between confirmation and falsification. In my view neither the black swan nor the white swan proves anything with certainty, but both do provide some evidence. It happens in this case that one piece of evidence is very strong while the other is very weak; in fact they are pretty much at opposite extremes of the full spectrum of evidence encountered in the real world. This does not mean there is a difference in kind.
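To make the point concrete, here is a minimal sketch in Python (my own toy model, not from the article): pit "all swans are white" (H) against a made-up rival hypothesis H1 under which 10% of swans are black, and apply Bayes' theorem to each observation. The white swan nudges the posterior up only slightly, while the black swan drives it to zero: falsification appears as the limiting case of very strong evidence, not as a different kind of thing.

```python
def update(prior_h, p_obs_given_h, p_obs_given_h1):
    """Posterior P(H | observation) via Bayes' theorem,
    with H1 as the only alternative hypothesis."""
    joint_h = prior_h * p_obs_given_h
    joint_h1 = (1 - prior_h) * p_obs_given_h1
    return joint_h / (joint_h + joint_h1)

prior = 0.5  # arbitrary starting credence in H: "all swans are white"

# A white swan: certain under H, 90% likely under H1 -> weak evidence for H.
after_white = update(prior, 1.0, 0.9)   # ~0.526, barely moved

# A black swan: impossible under H, 10% likely under H1 -> H is refuted.
after_black = update(prior, 0.0, 0.1)   # exactly 0.0
```

Both observations go through the same formula; only the likelihood ratios differ, which is the sense in which the two cases lie on one spectrum.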
If there was an epistemology that didn't endorse circular arguments, would you prefer it over yours which does?
All else being equal, yes. Other factors, such as real-world results, might take precedence. I also doubt that any philosophy could manage without either circularity or assumptions, explicit or otherwise. As I see it, when you start thinking you need something to begin your inference: logic derives truths from other truths, it cannot manufacture them out of a vacuum. So any philosophy has two choices:
Either, pick a few axioms, call them self evident and derive everything from them. This seems to work fairly well in pure maths, but not anywhere else. I suspect the difference lies in whether the axioms really are self evident or not.
Or, start out with some procedures for thinking. All claims are judged by these, including proposals to change the procedures for thinking. Thus the procedures may self-modify and will hopefully improve. This seems better to me, as long as the starting point passes a certain threshold of accuracy any errors are likely to get removed (the phrase used here is the Lens that Sees its Flaws). It is ultimately circular, since whatever the current procedures are they are justified only by themselves, but I can live with that.
Ideal Bayesians are of the former type, but they can afford to be as they are mathematically perfect beings who never make mistakes. Human Bayesians take the latter approach, which means in principle they might stop being Bayesians if they could see that for some reason it was wrong.
So I guess my answer is that if a position didn't endorse circular arguments, I would be very worried that it is going down the unquestionable axioms route, even if it does not do so explicitly, so I would probably not prefer it.
Notice how it is only through the benefits of the second approach that I can even consider such a scenario.
I agree with the explicit meaning of this statement, but you are sneaking in connotations. Let us look more closely at what 'authoritarian' means.
I'm not trying to argue by connotation. It's hard to avoid connotations and I think the words I'm using are accurate.
You probably mean it in the sense of centralised as opposed to decentralised control, and in that sense I will bite the bullet and say that thinking should be authoritarian.
That's not what I had in mind, but I do think that centralized control is a mistake.
I take fallibilism seriously: any i...
I have just rediscovered an article by Max Albert on my hard drive which I never got around to reading that might interest others on Less Wrong. You can find the article here. It is an argument against Bayesianism and for Critical Rationalism (of Karl Popper fame).
Abstract:
Any thoughts?