Traditional Rationality is phrased as social rules, with violations interpretable as cheating: if you break the rules and no one else is doing so, you're the first to defect - making you a bad, bad person. To Bayesians, the brain is an engine of accuracy: if you violate the laws of rationality, the engine doesn't run, and this is equally true whether anyone else breaks the rules or not.
Consider the problem of Occam's Razor, as confronted by Traditional philosophers. If two hypotheses fit the same observations equally well, why believe the simpler one is more likely to be true?
You could argue that Occam's Razor has worked in the past, and is therefore likely to continue to work in the future. But this, itself, appeals to a prediction from Occam's Razor. "Occam's Razor works up to October 8th, 2007 and then stops working thereafter" is more complex, but it fits the observed evidence equally well.
You could argue that Occam's Razor is a reasonable distribution on prior probabilities. But what is a "reasonable" distribution? Why not label "reasonable" a very complicated prior distribution, which makes Occam's Razor work in all observed tests so far, but generates exceptions in future cases?
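To make the regress concrete, here is a minimal sketch (my own illustration with toy hypotheses, not anything from the original argument) of a simplicity prior that weights each hypothesis by two to the power of minus its description length. A "complicated prior" could just as easily be rigged to reverse the ordering, which is exactly the problem:

```python
# Illustrative only: a crude simplicity prior over two toy hypotheses,
# using character count as a stand-in for description length.
hypotheses = [
    "Occam's Razor always works",
    "Occam's Razor works until 2007-10-08, then stops working",
]

def simplicity_prior(hs):
    # Weight each hypothesis by 2^(-length), Solomonoff-style,
    # then normalize so the probabilities sum to 1.
    weights = [2.0 ** -len(h) for h in hs]
    total = sum(weights)
    return {h: w / total for h, w in zip(hs, weights)}

print(simplicity_prior(hypotheses))
# The shorter hypothesis gets almost all the probability mass - but
# calling this weighting "reasonable" already presupposes Occam's Razor.
```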
Indeed, it seems there is no way to *justify* Occam's Razor except by appealing to Occam's Razor, making this argument unlikely to *convince* any *judge* who does not already accept Occam's Razor. (What's special about the words I italicized?)
If you are a philosopher whose daily work is to write papers, criticize other people's papers, and respond to others' criticisms of your own papers, then you may look at Occam's Razor and shrug. Here is an end to justifying, arguing and convincing. You decide to call a truce on writing papers; if your fellow philosophers do not demand justification for your un-arguable beliefs, you will not demand justification for theirs. And as the symbol of your treaty, your white flag, you use the phrase "a priori truth".
But to a Bayesian, in this era of cognitive science and evolutionary biology and Artificial Intelligence, saying "a priori" doesn't explain why the brain-engine runs. If the brain has an amazing "a priori truth factory" that works to produce accurate beliefs, it makes you wonder why a thirsty hunter-gatherer can't use the "a priori truth factory" to locate drinkable water. It makes you wonder why eyes evolved in the first place, if there are ways to produce accurate beliefs without looking at things.
James R. Newman said: "The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2." The Internet Encyclopedia of Philosophy defines "a priori" propositions as those knowable independently of experience. Wikipedia quotes Hume: Relations of ideas are "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe." You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.
But in this era of neurology, one ought to be aware that thoughts are existent in the universe; they are identical to the operation of brains. Material brains, real in the universe, composed of quarks in a single unified mathematical physics whose laws draw no border between the inside and outside of your skull.
When you add 1 + 1 and get 2 by thinking, these thoughts are themselves embodied in flashes of neural patterns. In principle, we could observe, experimentally, the exact same material events as they occurred within someone else's brain. It would require some advances in computational neurobiology and brain-computer interfacing, but in principle, it could be done. You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2. How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.
If this seems counterintuitive, try to see minds/brains as engines - an engine that collides the neural pattern for 1 and the neural pattern for 1 and gets the neural pattern for 2. If this engine works at all, then it should have the same output if it observes (with eyes and retina) a similar brain-engine carrying out a similar collision, and copies into itself the resulting pattern. In other words, for every form of a priori knowledge obtained by "pure thought", you are learning exactly the same thing you would learn if you saw an outside brain-engine carrying out the same pure flashes of neural activation. The engines are equivalent, the bottom-line outputs are equivalent, the belief-entanglements are the same.
There is nothing you can know "a priori", which you could not know with equal validity by observing the chemical release of neurotransmitters within some outside brain. What do you think you are, dear reader?
This is why you can predict the result of adding 1 apple and 1 apple by imagining it first in your mind, or punch "3 x 4" into a calculator to predict the result of imagining 4 rows with 3 apples per row. You and the apple exist within a boundary-less unified physical process, and one part may echo another.
Are the sorts of neural flashes that philosophers label "a priori beliefs" arbitrary? Many AI algorithms function better with "regularization" that biases the solution space toward simpler solutions. But the regularized algorithms are themselves more complex; they contain an extra line of code (or 1000 extra lines) compared to unregularized algorithms. The human brain is biased toward simplicity, and we think more efficiently thereby. If you press the Ignore button at this point, you're left with a complex brain that exists for no reason and works for no reason. So don't try to tell me that "a priori" beliefs are arbitrary, because they sure aren't generated by rolling random numbers. (What does the adjective "arbitrary" mean, anyway?)
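As one concrete instance of the trade described above, here is a minimal sketch, assuming ridge (L2) regression as the regularizer; the penalty is literally one extra expression in the code, yet it biases the fit toward simpler solutions:

```python
import numpy as np

def fit_linear(X, y, lam=0.0):
    """Solve for weights w minimizing ||Xw - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    # The `lam * np.eye(d)` term is the "extra line of code": it makes
    # the algorithm itself more complex, while biasing its outputs
    # toward simpler (smaller-norm) solutions.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))
y = X[:, 0] + 0.1 * rng.normal(size=20)  # true signal uses one feature
print(fit_linear(X, y, lam=0.0)[:3])  # unregularized: noisier weights
print(fit_linear(X, y, lam=1.0)[:3])  # regularized: shrunk toward zero
```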
You can't excuse calling a proposition "a priori" by pointing out that other philosophers are having trouble justifying their propositions. If a philosopher fails to explain something, this fact cannot supply electricity to a refrigerator, nor act as a magical factory for accurate beliefs. There's no truce, no white flag, until you understand why the engine works.
If you clear your mind of justification, of argument, then it seems obvious why Occam's Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found. "But," you cry, "why is the universe itself orderly?" This I do not know, but it is what I see as the next mystery to be explained. This is not the same question as "How do I argue Occam's Razor to a hypothetical debater who has not already accepted it?"
Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam's Razor, just as you cannot argue anything to a rock. A mind needs a certain amount of dynamic structure to be an argument-acceptor. If a mind doesn't implement Modus Ponens, it can accept "A" and "A->B" all day long without ever producing "B". How do you justify Modus Ponens to a mind that hasn't accepted it? How do you argue a rock into becoming a mind?
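For illustration (my own sketch, not anything from the original), here is what "implementing Modus Ponens" amounts to as a mechanism rather than an argument; delete the derivation loop and the "mind" holds "A" and "A->B" forever without ever producing "B":

```python
def forward_chain(facts, rules):
    """Repeatedly apply Modus Ponens: from A and (A -> B), derive B.

    `facts` is a set of proposition names; `rules` is a list of
    (antecedent, consequent) pairs.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in rules:
            if a in derived and b not in derived:
                derived.add(b)  # the Modus Ponens step
                changed = True
    return derived

print(forward_chain({"A"}, [("A", "B")]))  # {'A', 'B'}
# Without the derivation step, "B" is never produced - the structure
# is a rock, not an argument-acceptor.
```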
Brains evolved from non-brainy matter by natural selection; they were not justified into existence by arguing with an ideal philosophy student of perfect emptiness. This does not make our judgments meaningless. A brain-engine can work correctly, producing accurate beliefs, even if it was merely built - by human hands or cumulative stochastic selection pressures - rather than argued into existence. But to be satisfied by this answer, one must see rationality in terms of engines, rather than arguments.
Richard, I would like to know what you mean by "conceptually possible" and why you think conceptual possibility has anything to do with actual possibility. I think you mean something like "I can/can't imagine X without any obvious inconsistencies". So, e.g., you can imagine, or think you can imagine, a world physically identical to ours in which people have no experiences; but you can't imagine, or think you can't imagine, a world physically identical to ours in which jumbo jets don't fly.
But whether something is "conceptually possible" in this sort of sense obviously has as much to do with the limits of our understanding as with what's actually possible, no?
1. Consider some notorious open problem in pure mathematics; the Riemann hypothesis, say. I can, in some sense, "imagine" a world in which RH is true and a world in which RH is false; I can tell you about some of the consequences in each case; but, despite that, one of those worlds is logically impossible; we just don't know which. (I'm ignoring, because I'm too lazy to think it through now, the possibility that RH might be undecidable.) So something can be "conceptually possible" despite being logically impossible and hence (if you believe in possible worlds) false in all possible worlds.
2. I cannot, so far as I can tell, imagine what it would be like if the world had two "timelike" dimensions and two "spacelike" ones rather than 1 and 3. (Perhaps if I sat down and concentrated for a while I could; in which case, make it twenty of each, or something.) I can calculate some of the consequences, I suppose, but I can form no coherent mental picture. None the less, it seems clear that such a world is possible in principle. So something can be (for a given person, at least) "conceptually impossible" despite being possible in other senses.
Examples like these make it seem obvious to me that "conceptual possibility" tells us much more about the limits of our imagination and reasoning than it does about the nature of reality.
You can't imagine a world physically like ours in which jumbo jets don't fly; that would be because flying is simple enough that we have a pretty good understanding of how it works, and what mechanisms underlie it. Of course we don't have any similarly good understanding of how minds work. It seems to me that that's the only difference here. Lack of understanding is not evidence of magic.
(Suppose I claim that I can so imagine a world physically identical to ours in which Boeing-arranged atoms at 10k feet aren't flying airplanes; they're, er, zairplanes; they are doing something physically indistinguishable from flying, but of course it isn't really flying. Those who fail to see the difference just lack sufficient subtlety of thought. Ridiculous, no?)
Anyway, let's suppose it's "conceptually possible" that the world should be exactly as it is, physically, but with no consciousness anywhere to be found. So what? All that means is that someone can form some sort of mental picture of what such a world might be like. I don't see how to eliminate the possibility that filling in the details might ultimately lead to a contradiction (as with either RH or not-RH). Or that digging further into the notion of "phenomenal consciousness" being used might reveal that it has no real content and serves only to obfuscate. (I strongly suspect that this is in fact the case. Of course that doesn't mean that those who appeal to such notions have any intention to obfuscate.)
For what it's worth, I'm pretty sure that a zombie world is not conceptually possible to me: I can only "imagine" such a world by deliberately not thinking too hard about the details.