Atheists asked to justify their position often find they are being asked to replace religion. “If there’s no God, what’s your system of morality?” “How did the Universe begin?” “How do you explain the existence of eyes?” “How do you find meaning in life?” And the poor atheist, after one question too many, is forced to say “I don’t know.” After all, he’s not a philosopher, cosmologist, psychologist, and evolutionary biologist rolled into one. And even they don’t have all the answers.
But the atheist, if he retains his composure, can say, “I don’t know, but so what? There’s still something that doesn’t make sense about what you learned in Sunday school. There’s still something wrong with your religion. The fact that I don’t know everything won’t make the problem go away.”
What I want to emphasize here, even though it may be elementary, is that it can be valuable and accurate to say something’s wrong even when you don’t have a full solution or a replacement.
Consider political radicals. Marxists, libertarians, anarchists, greens, John Birchers. Radicals are diverse in their political theories, but they have one critical commonality: they think something’s wrong with the status quo. And that means, in practice, that different kinds of radicals sometimes sound similar, because they’re the ones who criticize the current practices of the current government and society. And it’s in criticizing that radicals make the strongest arguments, I think. They’re sketchy and vague in designing their utopias, but they have moral and evidentiary force when they say that something’s wrong with the criminal justice system, something’s wrong with the economy, something’s wrong with the legislative process.
Moderates, who are invested in the status quo, tend to simply not notice problems, and to dismiss radicals for not having well-thought-out solutions. But it’s better to know that a problem exists than to not know – regardless of whether you have a solution at the moment.
Most people, confronted with a problem they can’t solve, say “We just have to live with it,” and very rapidly slide into “It’s not really a problem.” Aging is often painful and debilitating and ends in death. Almost everyone has decided it’s not really a problem – simply because it has no known solution. But we also used to think that senile dementia and toothlessness were “just part of getting old.” I would venture that the tendency, over time, to find life’s cruelties less tolerable and to want to cure more of them, is the most positive feature of civilization. To do that, we need people who strenuously object to what everyone else approaches with resignation.
Theodore Roosevelt wrote, “It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better.”
But it is the critic who counts. Just because I can’t resolve P vs. NP doesn’t mean I can’t say the latest attempt at a proof is flawed. Just because I don’t have a comprehensive system of ethics doesn’t mean there’s not something wrong with the Bible’s. Just because I don’t have a plan for a perfect government doesn’t mean there isn’t something wrong with the present one. Just because I can’t make people live longer and healthier lives doesn’t mean that aging isn’t a problem. Just because nobody knows how to end poverty doesn’t mean poverty is okay. We are further from finding solutions if we dismiss the very existence of the problems.
This is why I’m basically sympathetic to speculations about existential risk, and also to various kinds of research associated with aging and mortality. It’s calling attention to unsolved problems. There’s a human bias against acknowledging the existence of problems for which we don’t have solutions; we need incentives in the other direction, encouraging people to identify hard problems. In mathematics, we value a good conjecture or open problem, even if the proof doesn’t come along for decades. This would be a good norm to adopt more broadly – value the critic, value the one who observes a flaw, notices a hard problem, or protests an outrage, even if he doesn’t come with a solution. Fight the urge to accept a bad solution just because it ties up the loose ends.
I was thinking of "worldview" as a system of axioms against which claims are tested. For example, a religious worldview might axiomatically state that God exists and created the universe, and so any claim that violates that axiom can be discarded out of hand.
I'm realizing now that that's not a useful definition; I was using it as shorthand for "beliefs that other people hold that aren't updatable, unlike of course my beliefs which are totally rational because mumble mumble and the third step is profit".
Beliefs which cannot be updated aren't useful, but not all beliefs which might reasonably form a "worldview" are un-Bayesian. Maybe a better way to talk about worldviews is to think about beliefs which are highly depended upon; beliefs which, if they were updated, would also cause huge re-updates of lots of beliefs farther down the dependency graph. That would include both religious beliefs and the general belief in rationality, and include both un-updatable axiomatic beliefs as well as beliefs that are rationally resistant to update because a large collection of evidence already supports them.
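The dependency-graph picture can be made concrete with a toy sketch (my own illustration; the belief names and edges below are invented, not from this discussion): model beliefs as nodes, draw an edge from a belief to each belief that depends on it, and measure how "worldview-like" a belief is by how much sits downstream of it.

```python
from collections import defaultdict, deque

def downstream(edges, start):
    """All beliefs that depend, directly or transitively, on `start`."""
    deps = defaultdict(list)
    for a, b in edges:  # edge (a, b): belief b depends on belief a
        deps[a].append(b)
    seen, queue = set(), deque([start])
    while queue:
        for child in deps[queue.popleft()]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# Invented example: two chains, one religious, one rationalist.
edges = [
    ("God exists", "scripture is authoritative"),
    ("scripture is authoritative", "this moral rule is binding"),
    ("evidence should update beliefs", "trust highly cited research"),
    ("trust highly cited research", "Miller-Urey supports abiogenesis"),
]

# Root beliefs are "highly depended upon": updating one forces
# re-updates of everything downstream of it.
print(sorted(downstream(edges, "God exists")))
# -> ['scripture is authoritative', 'this moral rule is binding']
```

On this picture, both "God exists" and "evidence should update beliefs" count as worldview-level: not because of their content, but because of how much hangs off them.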
So, I withdraw what I said earlier. Meshing with a worldview can in fact be rational support for a hypothesis, provided the worldview itself consists of rationally supported beliefs.
Okay, with that in mind:
My claim that Miller-Urey is support for the hypothesis of life naturally occurring on Earth was based on the following beliefs:
1. The scientific research of others is good evidence even if I don't understand the research itself, particularly when it is highly cited.
2. The Miller-Urey experiment demonstrated that amino acids could plausibly form in early Earth conditions.
3. Given sufficient opportunities, these amino acids could form a self-replicating pseudo-organism, from which evolution could be bootstrapped.
Based on what you've explained, I have significantly reduced my confidence in #3. My initial confidence in #3 was too high; it was based on hearing lots of talk about Miller-Urey amino acids being the building blocks of life, when I had not actually heard of specific formation paths that experts in the field confidently accept as plausible.
Okay, so my conclusion has been adjusted (thanks!), but to bring it back to the earlier point: what about worldviews? Of the above, I think only #1 could be said to have to do with worldviews, and I still think it's reasonable. As with your stereo amplifier example, even though I may not know enough about a subject to understand the literature myself, I can still estimate fairly well whether people who do claim to know enough about it are being scientific or pseudo-scientific, based on testability and lack of obviously fallacious reasoning.
Misapplication of that principle led me to my mistake with #3, but I think the principle itself stands.
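The adjustment can be made concrete with a toy calculation (the probabilities below are invented for illustration; none come from this discussion): if the conclusion roughly requires both #2 and #3 to hold, its credence falls along with belief #3.

```python
# Invented, purely illustrative numbers: treat the conclusion "life arose
# naturally on Earth by this route" as requiring both step #2 and step #3.
p2 = 0.95        # #2: amino acids can form in early-Earth conditions
p3_before = 0.8  # #3, before: uncritical confidence in the bootstrap step
p3_after = 0.3   # #3, after: no specific expert-accepted pathway known to me

before = p2 * p3_before  # ~ 0.76
after = p2 * p3_after    # ~ 0.285
print(round(before, 3), round(after, 3))
```

The point isn't the particular numbers; it's that weakening one link in the chain propagates directly into the conclusion, which is exactly the dependency-graph behavior described above.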
Yes.
Beliefs have a hierarchy, and some are more top-level than others. One of the most top-level beliefs is:
If you...