Why do I believe that the Sun will rise tomorrow?
Because I've seen the Sun rise on thousands of previous days.
Ah... but why do I believe the future will be like the past?
Even if I go past the mere surface observation of the Sun rising, to the apparently universal and exceptionless laws of gravitation and nuclear physics, then I am still left with the question: "Why do I believe this will also be true tomorrow?"
I could appeal to Occam's Razor, the principle of using the simplest theory that fits the facts... but why believe in Occam's Razor? Because it's been successful on past problems? But who says that this means Occam's Razor will work tomorrow?
And lo, the one said:
"Science also depends on unjustified assumptions. Thus science is ultimately based on faith, so don't you criticize me for believing in [silly-belief-#238721]."
As I've previously observed:
It's a most peculiar psychology—this business of "Science is based on faith too, so there!" Typically this is said by people who claim that faith is a good thing. Then why do they say "Science is based on faith too!" in that angry-triumphal tone, rather than as a compliment?
Arguing that you should be immune to criticism is rarely a good sign.
But this doesn't answer the legitimate philosophical dilemma: If every belief must be justified, and those justifications in turn must be justified, then how is the infinite recursion terminated?
And if you're allowed to end in something assumed-without-justification, then why aren't you allowed to assume anything without justification?
A similar critique is sometimes leveled against Bayesianism—that it requires assuming some prior—by people who apparently think that the problem of induction is a particular problem of Bayesianism, which you can avoid by using classical statistics. I will speak of this later, perhaps.
But first, let it be clearly admitted that the rules of Bayesian updating do not, of themselves, solve the problem of induction.
Suppose you're drawing red and white balls from an urn. You observe that, of the first 9 balls, 3 are red and 6 are white. What is the probability that the next ball drawn will be red?
That depends on your prior beliefs about the urn. If you think the urn-maker generated a uniform random number between 0 and 1, and used that number as the fixed probability of each ball being red, then the answer is 4/11 (by Laplace's Law of Succession). If you think the urn originally contained 10 red balls and 10 white balls, then the answer is 7/11.
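To make the arithmetic concrete, here is a minimal sketch of both calculations. The numbers are just the ones from the example above; the code is illustrative, not anything load-bearing for the argument.

```python
from fractions import Fraction

# Observed so far: 3 red and 6 white balls out of 9 draws.
red_seen, white_seen = 3, 6
n_seen = red_seen + white_seen

# Prior 1: the urn-maker picked a uniform random p in [0, 1] and used it as
# the fixed chance of each ball being red.  Laplace's Rule of Succession
# then gives P(next red) = (red_seen + 1) / (n_seen + 2).
laplace = Fraction(red_seen + 1, n_seen + 2)
print("Uniform prior on p:", laplace)   # 4/11

# Prior 2: the urn started with exactly 10 red and 10 white balls, drawn
# without replacement, so 7 red and 4 white remain after these 9 draws.
red_left, white_left = 10 - red_seen, 10 - white_seen
finite_urn = Fraction(red_left, red_left + white_left)
print("Fixed 10+10 urn:", finite_urn)   # 7/11
```

Same evidence, different priors, different predictions; the update rule itself does not tell you which urn to believe in.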
Which goes to say that, with the right prior—or rather the wrong prior—the chance of the Sun rising tomorrow would seem to go down with each succeeding day... if you were absolutely certain, a priori, that there was a great barrel out there from which, on each day, there was drawn a little slip of paper that determined whether the Sun rose or not; and that the barrel contained only a limited number of slips saying "Yes", and the slips were drawn without replacement.
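For illustration, a small sketch of that anti-inductive barrel prior; the barrel size and the number of "Yes" slips are made-up numbers, assumed only to show the direction of the effect.

```python
from fractions import Fraction

# Suppose you are certain, a priori, that the barrel holds 100 slips, of
# which 70 say "Yes, the Sun rises", drawn one per day without replacement.
# (100 and 70 are hypothetical numbers, chosen only for illustration.)
yes_slips, total_slips = 70, 100

# Every sunrise you observe uses up one "Yes" slip, so under this prior the
# probability you assign to the next sunrise falls with each passing day.
for days_of_sunrise in range(5):
    p_next = Fraction(yes_slips - days_of_sunrise,
                      total_slips - days_of_sunrise)
    print(days_of_sunrise, p_next, round(float(p_next), 3))
# 0 7/10  0.7
# 1 23/33 0.697
# 2 34/49 0.694
# 3 67/97 0.691
# 4 11/16 0.688
```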
There are possible minds in mind design space who have anti-Occamian and anti-Laplacian priors; they believe that simpler theories are less likely to be correct, and that the more often something happens, the less likely it is to happen again.
And when you ask these strange beings why they keep using priors that never seem to work in real life... they reply, "Because it's never worked for us before!"
Now, one lesson you might derive from this is "Don't be born with a stupid prior." This is an amazingly helpful principle on many real-world problems, but I doubt it will satisfy philosophers.
Here's how I treat this problem myself: I try to approach questions like "Should I trust my brain?" or "Should I trust Occam's Razor?" as though they were nothing special— or at least, nothing special as deep questions go.
Should I trust Occam's Razor? Well, how well does (any particular version of) Occam's Razor seem to work in practice? What kind of probability-theoretic justifications can I find for it? When I look at the universe, does it seem like the kind of universe in which Occam's Razor would work well?
Should I trust my brain? Obviously not; it doesn't always work. But nonetheless, the human brain seems much more powerful than the most sophisticated computer programs I could consider trusting otherwise. How well does my brain work in practice, on which sorts of problems?
When I examine the causal history of my brain—its origins in natural selection—I find, on the one hand, all sorts of specific reasons for doubt; my brain was optimized to run on the ancestral savanna, not to do math. But on the other hand, it's also clear why, loosely speaking, it's possible that the brain really could work. Natural selection would have quickly eliminated brains saddled with anything so completely unsuited to reasoning, so anti-helpful, as anti-Occamian or anti-Laplacian priors.
So what I did in practice, does not amount to declaring a sudden halt to questioning and justification. I'm not halting the chain of examination at the point that I encounter Occam's Razor, or my brain, or some other unquestionable. The chain of examination continues—but it continues, unavoidably, using my current brain and my current grasp on reasoning techniques. What else could I possibly use?
Indeed, no matter what I did with this dilemma, it would be me doing it. Even if I trusted something else, like some computer program, it would be my own decision to trust it.
The technique of rejecting beliefs that have absolutely no justification is, in general, an extremely important one. I sometimes say that the fundamental question of rationality is "Why do you believe what you believe?" I don't even want to say something that sounds like it might allow a single exception to the rule that everything needs justification.
Which is, itself, a dangerous sort of motivation; you can't always avoid everything that might be risky, and when someone annoys you by saying something silly, you can't reverse that stupidity to arrive at intelligence.
But I would nonetheless emphasize the difference between saying:
"Here is this assumption I cannot justify, which must be simply taken, and not further examined."
Versus saying:
"Here the inquiry continues to examine this assumption, with the full force of my present intelligence—as opposed to the full force of something else, like a random number generator or a magic 8-ball—even though my present intelligence happens to be founded on this assumption."
Still... wouldn't it be nice if we could examine the problem of how much to trust our brains without using our current intelligence? Wouldn't it be nice if we could examine the problem of how to think, without using our current grasp of rationality?
When you phrase it that way, it starts looking like the answer might be "No".
E. T. Jaynes used to say that you must always use all the information available to you—he was a Bayesian probability theorist, and had to clean up the paradoxes other people generated when they used different information at different points in their calculations. The principle of "Always put forth your true best effort" has at least as much appeal as "Never do anything that might look circular." After all, the alternative to putting forth your best effort is presumably doing less than your best.
But still... wouldn't it be nice if there were some way to justify using Occam's Razor, or justify predicting that the future will resemble the past, without assuming that those methods of reasoning which have worked on previous occasions are better than those which have continually failed?
Wouldn't it be nice if there were some chain of justifications that neither ended in an unexaminable assumption, nor was forced to examine itself under its own rules, but, instead, could be explained starting from absolute scratch to an ideal philosophy student of perfect emptiness?
Well, I'd certainly be interested, but I don't expect to see it done any time soon. I've argued in several places against the idea that you can have a perfectly empty ghost-in-the-machine; there is no argument that you can explain to a rock.
Even if someone cracks the First Cause problem and comes up with the actual reason the universe is simple, which does not itself presume a simple universe... then I would still expect that the explanation could only be understood by a mindful listener, and not by, say, a rock. A listener that didn't start out already implementing modus ponens might be out of luck.
So, at the end of the day, what happens when someone keeps asking me "Why do you believe what you believe?"
At present, I start going around in a loop at the point where I explain, "I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct."
But then... haven't I just licensed circular logic?
Actually, I've just licensed reflecting on your mind's degree of trustworthiness, using your current mind as opposed to something else.
Reflection of this sort is, indeed, the reason we reject most circular logic in the first place. We want to have a coherent causal story about how our mind comes to know something, a story that explains how the process we used to arrive at our beliefs is itself trustworthy. This is the essential demand behind the rationalist's fundamental question, "Why do you believe what you believe?"
Now suppose you write on a sheet of paper: "(1) Everything on this sheet of paper is true, (2) The mass of a helium atom is 20 grams." If that trick actually worked in real life, you would be able to know the true mass of a helium atom just by believing some circular logic which asserted it. Which would enable you to arrive at a true map of the universe sitting in your living room with the blinds drawn. Which would violate the second law of thermodynamics by generating information from nowhere. Which would not be a plausible story about how your mind could end up believing something true.
Even if you started out believing the sheet of paper, it would not seem that you had any reason for why the paper corresponded to reality. It would just be a miraculous coincidence that (a) the mass of a helium atom was 20 grams, and (b) the paper happened to say so.
Believing self-validating statement sets does not, in general, seem like it should work to map external reality—when we reflect on it as a causal story about minds—using, of course, our current minds to do so.
But what about evolving to give more credence to simpler beliefs, and to believe that algorithms which have worked in the past are more likely to work in the future? Even when we reflect on this as a causal story of the origin of minds, it still seems like this could plausibly work to map reality.
And what about trusting reflective coherence in general? Wouldn't most possible minds, randomly generated and allowed to settle into a state of reflective coherence, be incorrect? Ah, but we evolved by natural selection; we were not generated randomly.
If trusting this argument seems worrisome to you, then forget about the problem of philosophical justifications, and ask yourself whether it's really truly true.
(You will, of course, use your own mind to do so.)
Is this the same as the one who says, "I believe that the Bible is the word of God, because the Bible says so"?
Couldn't they argue that their blind faith must also have been placed in them by God, and is therefore trustworthy?
In point of fact, when religious people finally come to reject the Bible, they do not do so by magically jumping to a non-religious state of pure emptiness, and then evaluating their religious beliefs in that non-religious state of mind, and then jumping back to a new state with their religious beliefs removed.
People go from being religious, to being non-religious, because even in a religious state of mind, doubt seeps in. They notice their prayers (and worse, the prayers of seemingly much worthier people) are not being answered. They notice that God, who speaks to them in their heart in order to provide seemingly consoling answers about the universe, is not able to tell them the hundredth digit of pi (which would be a lot more reassuring, if God's purpose were reassurance). They examine the story of God's creation of the world and damnation of unbelievers, and it doesn't seem to make sense even under their own religious premises.
Being religious doesn't make you less than human. Your brain still has the abilities of a human brain. The dangerous part is that being religious might stop you from applying those native abilities to your religion—stop you from reflecting fully on yourself. People don't heal their errors by resetting themselves to an ideal philosopher of pure emptiness and reconsidering all their sensory experiences from scratch. They heal themselves by becoming more willing to question their current beliefs, using more of the power of their current mind.
This is why it's important to distinguish between reflecting on your mind using your mind (it's not like you can use anything else) and having an unquestionable assumption that you can't reflect on.
"I believe that the Bible is the word of God, because the Bible says so." Well, if the Bible were an astoundingly reliable source of information about all other matters, if it had not said that grasshoppers had four legs or that the universe was created in six days, but had instead contained the Periodic Table of Elements centuries before chemistry—if the Bible had served us only well and told us only truth—then we might, in fact, be inclined to take seriously the additional statement in the Bible, that the Bible had been generated by God. We might not trust it entirely, because it could also be aliens or the Dark Lords of the Matrix, but it would at least be worth taking seriously.
Likewise, if everything else that priests had told us turned out to be true, we might take more seriously their statement that faith had been placed in us by God and was a systematically trustworthy source—especially if people could divine the hundredth digit of pi by faith as well.
So the important part of appreciating the circularity of "I believe that the Bible is the word of God, because the Bible says so," is not so much that you are going to reject the idea of reflecting on your mind using your current mind. But, rather, that you realize that anything which calls into question the Bible's trustworthiness, also calls into question the Bible's assurance of its trustworthiness.
This applies to rationality too: if the future should cease to resemble the past—even on its lowest and simplest and most stable observed levels of organization—well, mostly, I'd be dead, because my brain's processes require a lawful universe where chemistry goes on working. But if somehow I survived, then I would have to start questioning the principle that the future should be predicted to be like the past.
But for now... what's the alternative to saying, "I'm going to believe that the future will be like the past on the most stable level of organization I can identify, because that's previously worked better for me than any other algorithm I've tried"?
Is it saying, "I'm going to believe that the future will not be like the past, because that algorithm has always failed before"?
At this point I feel obliged to drag up the point that rationalists are not out to win arguments with ideal philosophers of perfect emptiness; we are simply out to win. For which purpose we want to get as close to the truth as we can possibly manage. So at the end of the day, I embrace the principle: "Question your brain, question your intuitions, question your principles of rationality, using the full current force of your mind, and doing the best you can do at every point."
If one of your current principles does come up wanting—according to your own mind's examination, since you can't step outside yourself—then change it! And then go back and look at things again, using your new improved principles.
The point is not to be reflectively consistent. The point is to win. But if you look at yourself and play to win, you are making yourself more reflectively consistent—that's what it means to "play to win" while "looking at yourself".
Everything, without exception, needs justification. Sometimes—unavoidably, as far as I can tell—those justifications will go around in reflective loops. I do think that reflective loops have a meta-character which should enable one to distinguish them, by common sense, from circular logics. But anyone seriously considering a circular logic in the first place, is probably out to lunch in matters of rationality; and will simply insist that their circular logic is a "reflective loop" even if it consists of a single scrap of paper saying "Trust me". Well, you can't always optimize your rationality techniques according to the sole consideration of preventing those bent on self-destruction from abusing them.
The important thing is to hold nothing back in your criticisms of how to criticize; nor should you regard the unavoidability of loopy justifications as a warrant of immunity from questioning.
Always apply full force, whether it loops or not—do the best you can possibly do, whether it loops or not—and play, ultimately, to win.
To me, this is the point: "what's my alternative?"
A principle I got from a Stephen Donaldson novel applies to "the future will be like the past". The guy needed to find a bit of sabotage in a computer system. He had no expertise in software - or hardware, for that matter. But he needed to find the problem, or he would be dead.
The character got the principle he needed from bridge. In bridge, sometimes you're screwed unless your partner has the card you need him to have. So the play is to assume your partner has the card, and play accordingly, because if he doesn't, you're screwed anyway.
Assume you can win. Assume that everything necessary for you to win is true. If it isn't, you're screwed anyway.
If the future isn't like the past, how am I to know what ideas to rely on to take effective action? If I can't say "it worked before, so it will likely work tomorrow", it seems to me that I am screwed.
Should I believe that the future will be like the past, except on Tuesdays? Wednesdays? Except when it conflicts with statements in an arbitrarily selected book? From an arbitrarily selected person? From the first person I saw after I woke up 2343 days ago?
But if the future won't be like the past, I don't see any grounds for picking a solution, or the means for picking one. And even if one of these solutions works now, there wouldn't be any reason to think it would work later. In short, I'd be screwed. So I may as well believe the future will be like the past.
Assume winning is possible. I don't see how it is possible without the future being like the past, so I'm going to assume it will be.