A good nutshell description of the type of Bayesianism that many LWers think correct is objective Bayesianism with critical rationalism-like underpinnings. Where recursive justification hits bottom is particularly relevant. On my cursory skim, Albert only seems to be addressing "subjective" Bayesianism which allows for any choice of prior.
He seems to think the problem of the priors does Bayesianism in :-(
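For anyone unfamiliar with the "problem of the priors": the worry is that the same evidence yields different posteriors depending on which prior you start with. A toy sketch (my own illustration, not from Albert's paper) using the standard Beta-Binomial coin-flip model:

```python
# Standard conjugate update: with a Beta(a, b) prior on a coin's bias
# and k heads in n flips, the posterior is Beta(a + k, b + n - k).
# Two agents with different priors draw different conclusions from
# the same data -- that's the problem of the priors in miniature.

def posterior_mean(a, b, heads, flips):
    """Posterior mean of the coin's bias under a Beta(a, b) prior."""
    return (a + heads) / (a + b + flips)

data = (7, 10)  # 7 heads in 10 flips

uniform = posterior_mean(1, 1, *data)      # "uninformative" uniform prior
skeptical = posterior_mean(50, 50, *data)  # prior strongly favoring fairness

print(round(uniform, 3))    # 0.667
print(round(skeptical, 3))  # 0.518
```

Objective Bayesians claim there is a privileged way to pick the prior (e.g. symmetry or maximum-entropy arguments); subjective Bayesians allow any coherent choice, which is the version Albert seems to be attacking.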
Popper seems outdated. Rejecting induction completely is not very realistic.
They argue notionally. They are roughly autonomous, they have different substance/assertions/content, sometimes their content contradicts, and when you have two or more conflicting ideas you have to deal with that. You (sometimes) approach the conflict by what we might call an internal argument/debate. You think of arguments for all the sides (the substance/content of the conflicting ideas), you try to think of a way to resolve the debate by figuring out the best answer, you criticize what you think may be mistakes in any of the ideas, you reject ideas you decide are mistaken, you assign probabilities to stuff and do math, perhaps, etc...
When things go well, you reach a conclusion you deem to be an improvement. It resolves the issue. Each of the ideas which is improved on notionally acknowledges this new idea is better, rather than still conflicting. For example, if one idea was to get pizza, and one was to get sushi, and both had the supporting idea that you can't get both because it would cost too much, or take too long, or make you fat, then you could resolve the issue by figuring out how to do it quickly, cheaply and without getting fat (smaller portions). If you came up with a new idea that does all that, none of the previously conflicting ideas would have any criticism of it, no objection to it. The conflict is resolved.
Sometimes we don't come up with a solution that resolves all the issues cleanly. This can be due to not trying, or because it's hard, or whatever.
Then what?
Big topic, but what not to do is use force: arbitrarily decide which side wins (often based on some kind of authority or justification), and declare it the winner even though the substance of the other side is not addressed. Don't force some of your ideas, which have substantive unaddressed points, to defer to the ideas you put in charge (granted authority).
I certainly don't advocate deciding arbitrarily. That would fall into the fallacy of just making sh*t up, which is the exact opposite of everything Bayes stands for. However, I don't have to be arbitrary, most ...
I have just rediscovered an article by Max Albert on my hard drive which I never got around to reading that might interest others on Less Wrong. You can find the article here. It is an argument against Bayesianism and for Critical Rationalism (of Karl Popper fame).
Abstract:
Any thoughts?