You'll find the whole thing pretty interesting, though it concerns decision theory more than the rationality of belief; the two are deeply connected, and the connection is an interesting topic for speculation in itself. Here's a brief summary of the book. I'm pretty partial to it.
Thinking about Acting: Logical Foundations for Rational Decision Making (Oxford University Press, 2006).
The objective of this book is to produce a theory of rational decision making for realistically resource-bounded agents. My interest is not in "What should I do if I were an ideal agent?", but rather, "What should I do given that I am who I am, with all my actual cognitive limitations?"
The book has three parts. Part One addresses the question of where the values come from that agents use in rational decision making. The most common view among philosophers is that they are based on preferences, but I argue that this is computationally impossible. I propose an alternative theory somewhat reminiscent of Bentham, and explore how human beings actually arrive at values and how they use them in decision making.
Part Two investigates the knowledge of probability that is required for decision-theoretic reasoning. I argue that subjective probability makes no sense as applied to realistic agents. I sketch a theory of objective probability to put in its place. Then I use that to define a variety of causal probability and argue that this is the kind of probability presupposed by rational decision making. So what is to be defended is a variety of causal decision theory.
Part Three explores how these values and probabilities are to be used in decision making. In chapter eight, it is argued first that actions cannot be evaluated in terms of their expected values as ordinarily defined, because that does not take account of the fact that a cognizer may be unable to perform an action, and may even be unable to try to perform it. An alternative notion of "expected utility" is defined to be used in place of expected values. In chapter nine, it is argued that individual actions cannot be the proper objects of decision-theoretic evaluation. We must instead choose plans, and select actions indirectly on the grounds that they are prescribed by the plans we adopt. However, our objective cannot be to find plans with maximal expected utilities. Plans cannot be meaningfully compared in that way. An alternative, called "locally global planning", is proposed. According to locally global planning, individual plans are to be assessed in terms of their contribution to the cognizer's "master plan". Again, the objective cannot be to find master plans with maximal expected utilities, because there may be none, and even if there are, finding them is not a computationally feasible task for real agents. Instead, the objective must be to find good master plans, and improve them as better ones come along. It is argued that there are computationally feasible ways of doing this, based on defeasible reasoning about values and probabilities.
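For reference, and just to make the target of chapter eight's complaint explicit (this is the textbook definition, not Pollock's own notation), the "ordinarily defined" expected value of an action A over its possible outcomes O_i is:

```latex
\mathrm{EV}(A) = \sum_i P(O_i \mid A)\, U(O_i)
```

Pollock's point, as I read him, is that this sum takes no account of whether the agent can actually perform A, or even try to; his replacement notion of "expected utility" is meant to build that in.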
There's also an assumption that ideal rationality is coherent (and even rational) for bounded agents like ourselves. Epistemologist and probability theorist John Pollock has launched a serious challenge to this model of decision making in his 2006 book Thinking about Acting.
Eliezer said: "If Overcoming Bias has any religious readers left, I say to you: it may be that you will someday lose your faith: and on that day, you will not lose all sense of moral direction."
He's addressing all religious people here, right? I responded to this comment as a theistic philosopher.
Further, specifically to Eliezer: I consider myself a religious fundamentalist (many Christian philosophers do), so I took you to be addressing me on that score as well. I guess I don't know what you mean by the term. Plantinga suggests that most people who use it mean something like "Sum'bitch." I take it you mean something more.
I think that most theists may be divine command theorists, but I'm not sure most theists have thought about the question. Among theists who have thought about it, it's hard to say.
I do think, however, that most serious Christians do not think that the primary reason to obey God is to secure reward or avoid punishment. I do think that's a caricature. Yes, televangelists and a number of rural preachers lean on that theme, but in my own experience and the experience of many others I know, we're primarily exhorted to obey God because He loves us, because He wants us to, etc. Christians, at least, know that God is love, and while some talk up hell, that is rarely their primary emphasis.
Overcoming Bias DOES have a religious reader left. Me. I'm a philosopher with strong interests in political philosophy and philosophy of religion. I had several problems with the post:
You say that if we lose our belief in God, we won't lose our moral compass altogether. But that isn't the only issue for the theist; the issue is also whether the moral compass will keep pointing in the right direction. If I become an atheist, I might still believe that murder is wrong, but I won't believe that a respect for the sacred is particularly important, and I'll probably start to reject, say, traditional teachings about sexual morality. Theists might well be worried about that.
Further, I think it's silly to imply (as you appear to) that most theists are divine command theorists. Most theistic philosophers today are not and neither were most theistic philosophers historically. Many of us (theistic philosophers) think that natural reason can tell us what moral rules we should follow.
I should also say that most theologians and philosophers who are religious (I guess all theologians are religious, though I think I have some exceptions in mind) don't think that the primary reason to do as God says is external reward or punishment. That's just a silly caricature. Most theistic philosophers think that communion with God is our summum bonum. It's the whole point of our existence: He is our final end, our eudaimonia. They think we're naturally motivated to seek God, and that those who are not have been corrupted by sin and rebellion. Pascal once said that everyone's heart has a God-shaped hole. Most of us believe something like that.
Note that much of what you say in your posts is pretty offensive to religious believers. We're not a bunch of morons, you know. Please see these philosophers and read their work as counterevidence.
Eliezer,
You say: "if you can invent an equally persuasive explanation for any outcome, you have zero knowledge."
You'll want to read Quine on this. Quine thought that for nearly any sufficiently large data set there are infinitely many theories that accurately explain it. Now, granted, some theories are better than others, but many theories are hard to compare with one another. Here are some examples:
Suppose you have three theoretical values: simplicity, coherence, and accommodation of the data. Different parts of a given scientific community may have distinct value rankings; they may consider some values more important than others. As a result, they end up gravitating towards different classes of theories.
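To make that concrete, here's a toy sketch (the numbers, weights, and theory names are entirely made up for illustration; nothing here comes from Quine or from Eliezer's post) of two communities scoring the same candidate theories on the same three virtues, but weighting those virtues differently and so arriving at different winners:

```python
# Toy illustration: same theories, same virtues, different value rankings.
# Each theory gets a (simplicity, coherence, data_fit) score on a 0-1 scale;
# the numbers are invented purely for the example.
theories = {
    "T1": {"simplicity": 0.9, "coherence": 0.6, "data_fit": 0.7},
    "T2": {"simplicity": 0.5, "coherence": 0.9, "data_fit": 0.8},
}

# Two communities with different rankings of the theoretical virtues.
weights_A = {"simplicity": 0.6, "coherence": 0.2, "data_fit": 0.2}
weights_B = {"simplicity": 0.1, "coherence": 0.45, "data_fit": 0.45}

def score(theory, weights):
    """Weighted sum of the theoretical virtues."""
    return sum(weights[v] * theory[v] for v in weights)

for name, w in [("Community A", weights_A), ("Community B", weights_B)]:
    best = max(theories, key=lambda t: score(theories[t], w))
    print(name, "prefers", best)
# Community A prefers T1; Community B prefers T2.
```

Neither weighting looks irrational on its face, yet they pull the two communities toward different classes of theories.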
Further, different scientists may start by trying to explain different parts of the data, which makes their theorizing path-dependent. This, too, may change what they end up believing in ways that are not rationally objectionable.
Even without the above two problems, theoretical ambiguities present themselves all the time in scientific and everyday belief.
Given these considerations, I think that your statement above must be wrong. Certainly you can have a justified belief in one of two equally good explanations over the other. And if that belief is true, you have knowledge (provided you meet the completely inscrutable fourth condition; nobody knows what it is).
You seem to think that if two equally good explanations present themselves to us, the proper response is to suspend judgment, or at least take one judgment on board tentatively, making no knowledge claim. I'm not sure that's right. It seems we could be justified in accepting either one in that case. And thus if we believe the proposition (we do), it's justified (it is), it's true (it might be), and it's X (don't ask me; ask Gettier), then we know it. Justification, to my mind, doesn't always select one and only one best option. You seem to think it does, so that if there isn't one best explanation you don't have justification. Is that what you think?
Unless of course you've already plowed through these matters. If so, then link me and I'll shut up. A cursory check yielded little.
Eliezer,
You say: "If you genuinely subject your conclusion to a criticism that can potentially de-conclude it - if the criticism genuinely has that power - then that does modify "the real algorithm behind" your conclusion."
Why do you think it's an epistemic duty to subject your views to criticisms that can potentially de-conclude them? Or do you even think this? If you do, do you think the duty is restricted, or is it universal?
If you say that it's not a duty, then fine. But you seem to think it is. If you think that it's universal, you're going to undermine your normative beliefs, I think, including your beliefs about the normativity of probability theory. If you think it's restricted, then I think you're going to have a bit of a time figuring out a dividing line between the beliefs included and the beliefs excluded that isn't ad hoc. But you may be able to do so.
But go ahead, give it a shot. I'll be interested in seeing you slog through some epistemology, rather than merely pontificating about the glories of the Church of Universal Evidentialism. ;)
Eliezer, Thomas Scanlon discusses this issue in the 'Aggregation' section of Chapter 5 of his What We Owe to Each Other. Philosophers have been on it for a while.
I once spoke with David Schmidtz, a philosopher at the University of Arizona, about Schwartz's work. All Schwartz shows is that having more choices makes people anxious and confused. But Dave told me that he got Schwartz to admit that being anxious and confused isn't the same thing as suffering a net decrease in utility. It's not even close.