You'll find the whole thing pretty interesting, though it concerns decision theory more than the rationality of belief; the two are deeply connected, of course (the connection is an interesting topic for speculation in itself). Here's a brief summary of the book. I'm pretty partial to it.
Thinking about Acting: Logical Foundations for Rational Decision Making (Oxford University Press, 2006).
The objective of this book is to produce a theory of rational decision making for realistically resource-bounded agents. My interest is not in "What should I do if I ...
There's also an assumption that ideal rationality is coherent (and even rational) for bounded agents like ourselves. Probability theorist and epistemologist John Pollock has launched a serious challenge to this model of decision making in his recent 2006 book Thinking About Acting.
Eliezer said: "If Overcoming Bias has any religious readers left, I say to you: it may be that you will someday lose your faith: and on that day, you will not lose all sense of moral direction."
He's addressing all religious people here, right? I responded to this comment as a theistic philosopher.
Further, specifically to Eliezer, I consider myself a religious fundamentalist (many Christian philosophers do), so I took him to be addressing me on that score as well. I guess I don't know what you mean by it. Plantinga suggests that most people who us...
Overcoming Bias DOES have a religious reader left. Me. I'm a philosopher with strong interests in political philosophy and philosophy of religion. I had several problems with the post:
You say that if we lose our belief in God, we won't lose our moral compass altogether. But that isn't the only issue for the theist; the issue is also whether the moral compass will point in the right direction all the time. If I become an atheist, I might still believe that murder is wrong, but I won't believe that a respect for the sacred is particularly important, and I'll pro...
Eliezer,
You say: "if you can invent an equally persuasive explanation for any outcome, you have zero knowledge."
You'll want to read Quine on this. Quine thought that for nearly any sufficiently large data set there are an infinite number of theories that could accurately explain it. Now, granted, some theories are better than others, but many theories are hard to compare with one another. Here are some examples:
Suppose you have three theoretical values: simplicity, coherence, and accommodation of the data. Different parts of a given scientific comm...
Unless of course you've already piled through these matters. If so, then link me and I'll shut up. A cursory check yielded little.
Eliezer,
You say: "If you genuinely subject your conclusion to a criticism that can potentially de-conclude it - if the criticism genuinely has that power - then that does modify "the real algorithm behind" your conclusion."
Why do you think it's an epistemic duty to subject your views to criticisms that can potentially de-conclude them? Or do you think this? If you think it, do you think the duty is restricted? Or is it universal?
If you say that it's not a duty, then fine. But you seem to think it is. If you think that it's unive...
Eliezer, Thomas Scanlon discusses this issue in the 'Aggregation' Section of Chapter 5 of his What We Owe To Each Other. Philosophers have been on it for awhile.
G,
Welp, I've only been reading this blog since 2007. Silly me. I just read the post and all the comments. I have to say that Philip Bricker has the upper hand.
Bricker suggested the option that you advocate, by the way. But he dismisses it. Here's why, I think: if you suspend judgment in response to reasonable disagreement, you're going to have to suspend judgment about basically all philosophical theses. And by doing so, you're going to run yourself into quite a few problems.
Note: By 'old-fashioned', I meant that the view advocated in the post relies on epistemological ideas that most epistemologists reject. I sure hope that has something to do with whether it's true. Although, maybe it doesn't.
I once spoke with David Schmidtz, a philosopher at the University of Arizona, about Schwartz's work. All Schwartz shows is that more choices make people anxious and confused. But Dave told me that he got Schwartz to admit that being anxious and confused isn't the same thing as having a net utility decrease. It's not even close.