A good nutshell description of the type of Bayesianism that many LWers think correct is objective Bayesianism with critical-rationalism-like underpinnings. "Where Recursive Justification Hits Bottom" is particularly relevant. On my cursory skim, Albert only seems to be addressing "subjective" Bayesianism, which allows for any choice of prior.
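To make the "any choice of prior" point concrete, here is a minimal sketch of conjugate Beta-Bernoulli updating (a toy illustration of my own, not anything from Albert's paper): two agents with different priors disagree on little data but converge as evidence accumulates.

```python
# Two agents hold different Beta priors over a coin's bias toward heads.
# By conjugacy, observing k heads in n flips turns Beta(a, b) into
# Beta(a + k, b + n - k), whose mean is (a + k) / (a + b + n).
def posterior_mean(a, b, heads, flips):
    return (a + heads) / (a + b + flips)

# Agent 1: uniform prior Beta(1, 1); Agent 2: opinionated prior Beta(10, 2).
# On 7 heads in 10 flips the priors still dominate:
print(posterior_mean(1, 1, 7, 10))    # ~0.667
print(posterior_mean(10, 2, 7, 10))   # ~0.773

# On 700 heads in 1000 flips the data washes the priors out:
print(posterior_mean(1, 1, 700, 1000))   # ~0.700
print(posterior_mean(10, 2, 700, 1000))  # ~0.702
```

This is the standard "washing out of priors" story; the subjective-vs-objective dispute is about what licenses the prior before the data arrive, which no amount of convergence settles.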
The article seems to think the problem of the priors does Bayesianism in :-(
Popper seems outdated. Rejecting induction completely is not very realistic.
Did you read my link? Where did the argument about approximately autonomous ideas go wrong?
I did. To see what is wrong with it let me give an analogy. Cars have both engines and tyres. It is possible to replace the tyres without replacing the engine. Thus you will find many cars with very different tyres but identical engines, and many different engines but identical tyres. This does not mean that tyres are autonomous and would work fine without engines.
Well this changes the topic. But OK. How do you decide what has support? What is support and how does it differ from consistency?
Well, mathematical proofs are support, and they are not at all the same as consistency. In general, however, if some random idea pops into my head, and I spot that in fact it only occurred to me as a result of conjunction bias, I am not going to say "well, it would be unfair of me to reject this just because it contradicts probability theory, so I must reject both it and probability theory until I can find a superior compromise position". Frankly, that would be stupid.
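The probability-theory constraint being appealed to here is the conjunction rule: P(A and B) can never exceed P(A). A quick numerical sketch (my own toy numbers, not from the thread):

```python
# Conjunction rule: for any events A and B, P(A and B) <= P(A),
# since P(A and B) = P(A) * P(B | A) and P(B | A) <= 1.
# An idea that feels plausible only because it violates this
# (the conjunction fallacy) contradicts probability theory.
def joint_prob(p_a, p_b_given_a):
    """P(A and B) via the chain rule."""
    return p_a * p_b_given_a

p_a = 0.3          # P(A): some hypothetical event
p_b_given_a = 0.5  # P(B | A): hypothetical conditional
p_conj = joint_prob(p_a, p_b_given_a)

assert p_conj <= p_a  # holds for any valid probability assignment
print(p_conj)         # 0.15
```

So noticing that an idea's apparent plausibility rests on a conjunction-rule violation is itself grounds to reject the idea, not the theory.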
@autonomous -- you know we said "approximately autonomous" right? And that, for various purposes, tires are approximately autonomous, which means things like they can be replaced individually without touching the engine or knowing what type of engine it is. And a tire could be taken off one car and put on another.
No one was saying it'd function in isolation. Just like a person being autonomous doesn't mean they would do well in isolation (e.g. in deep space). Just because people do need to be in appropriate environments to function doesn't make ...
I have just rediscovered an article by Max Albert on my hard drive which I never got around to reading that might interest others on Less Wrong. You can find the article here. It is an argument against Bayesianism and for Critical Rationalism (of Karl Popper fame).
Abstract:
Any thoughts?