But you don't reject the hypothesis that "Barack Obama is a wooly mammoth" because it's absurd - nobody has seriously presented it. If someone had a reason to seriously present it, I wouldn't dismiss it out of hand - if only because I was interested enough to hear it in the first place, and so would want to see whether the speaker was making a clever joke, or perhaps needed immediate medical care. As EY might say, merely noticing a hypothesis is unlikely enough in the first place that, if the speaker is one of the people you listen to, you should probably pay some attention to it. cf. Einstein's Arrogance
Imagining that someone "had a reason to seriously present" the Obama-Mammoth hypothesis is to make the hypothesis non-absurd. If there is real evidence in favor of the hypothesis, then it is obviously worth considering. But that is just to fight the example; it doesn't tell us much about the actual line between absurd claims and claims that are worth considering.
In the world we actually inhabit, an individual who believed that they had good reasons to think that the president was an extinct quadruped would obviously be suffering from a thought disorder. It might be interesting to listen to such a person talk (or to hear a joke that begins with the O-M Hypo), but that doesn't mean that the claim is worth considering seriously.
I agree, and think that you explained it well, but I would personally go back to calling Christianity absurd after looking into it and finding no evidence.
If you look, and find no evidence, what separates Christianity from "Barack Obama is a wooly mammoth"?
Christianity is false, but it is harder to falsify than it is to show that Barack Obama is not a non-sapient extinct mammal. I can prove the second claim false to a five-year-old of average intelligence by showing a picture of Obama next to an artist's rendition of a mammoth. It would take some time to explain to the same five-year-old why Christianity does not make sense as a description of the world.
This difference—that while both claims are false, one claim is much more obviously false than the other—explains why Christianity has many adherents but the Obama-Mammoth hypothesis does not. And we can usually infer from the fact that many people believe a proposition that it is not transparently false, making it more reasonable to investigate a bit before rejecting it.
A good reason to take this suggestion to heart: The terms "rationality" and "rational" have a strong positive value for most participants here - stronger, I think, than the value we attach to words like "truth-seeking" or "winning." This distorts discussion and argument; we push too hard to assert that things we like or advocate are "rational," in part because it feels good to associate our ideas with the pretty word.
If you particularize the conversation - e.g., you are likely to get more money by one-boxing on Newcomb's problem, or you are likely to hold more accurate beliefs if you update your probability estimates based solely on the disagreement of informed others - then it is less likely that you will grow overattached to particular procedures of analysis that you have previously given an attractive label.
Anything sufficiently beyond the bounds of what you've accepted as normal is 'absurd'. Rejecting a point, an argument, or a conclusion on the grounds that it's absurd is unreasonable. It is, in essence, refusing to consider the possibility of something on the grounds that you don't already affirm it.
Not necessarily. The vast majority of propositions are false. Most of them are obviously false; we don't need to spend much mental energy to reject the hypothesis that "Barack Obama is a wooly mammoth," or "the moon is made of butternut squash." "Absurd" is a useful label for statements that we can reject with minimal mental effort. And it makes sense that we refuse to consider most such statements; our mental time and energy is very limited, and if we want to live productive lives, we have to focus on things that have some decent probability of being true.
The problem is not denominating certain things as absurd, it is rejecting claims that we have reason to take more seriously. Both evolution and Christianity are believed by large enough communities that we should not reject either claim as "absurd." Rather, when many people believe something, we should attend to the best arguments in favor of those beliefs before we decide whether we disagree.
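The popularity heuristic above can be put in toy Bayesian terms. The numbers below are purely illustrative assumptions, not measurements - the point is only the direction of the update:

```python
# Toy Bayes update (illustrative numbers only): does widespread belief
# make a claim less likely to be *transparently* false?
def posterior_transparently_false(prior, p_popular_if_transparent, p_popular_if_not):
    """P(transparently false | many believers) via Bayes' rule."""
    num = p_popular_if_transparent * prior
    denom = num + p_popular_if_not * (1 - prior)
    return num / denom

prior = 0.5               # assumed prior that a false claim is obviously false
p_pop_transparent = 0.01  # transparently false claims rarely attract many believers
p_pop_not = 0.30          # subtler false claims sometimes do

post = posterior_transparently_false(prior, p_pop_transparent, p_pop_not)
print(round(post, 3))  # → 0.032
```

Under these assumed numbers, learning that a claim has many adherents drops the probability that it is transparently false from 0.5 to about 0.03 - which is why "many people believe it" is a decent cue to read the best arguments before rejecting it.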
Our sense of justice is part of our morality, therefore we should not change it.
I have no premise "if something is part of our morality we shouldn't change it".
"We should seek justice" is tautological. If justice and optimal deterrence are contradictory, then we should not seek optimal deterrence.
No it isn't. See Thomblake's reply. I for one feel no particular attachment to justice over optimal deterrence. In fact, in many situations I actively give the latter precedence. You can keep your 'shoulds' while I go ahead and win my Risk games.
The fact that you do not value something does not serve very well as an argument for why others should stop valuing it. For those of us who do experience a conflict between a desire to deter and a desire to punish fairly, you have not explained why we should prioritize the first goal over the second when trying to reduce this conflict.
We have at least two goals when we punish: to prevent the commission of antisocial acts (by deterrence or incapacitation) and to express our anger at the breach of social norms. On what basis should we decide that the first type of goal takes priority over the second type, when the two conflict? You seem to assume that we are somehow mistaken when we punish more or less than deterrence requires; perhaps the better conclusion is that our desire to punish is more driven by retributive goals than it is by utilitarian ones, as Sunstein et al. suggest.
In other words, if two of our terminal values are conflicting, it is hard to see a principled basis for choosing which one to modify in order to reduce the conflict.
Weasel words, as you call them, are a necessary part of any rational discussion. The scientific equivalent would be, "evidence indicates" or "statistics show".
On this we agree. If we have 60% confidence that a statement is correct, we would be misleading others if we asserted that it was true in a way that signalled a much higher confidence. Our own beliefs are evidence for others, and we should be careful not to communicate false evidence.
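The "false evidence" worry can be made concrete with a toy calculation. The 0.95 figure for how much confidence a listener reads into an unhedged assertion is an assumption for illustration only:

```python
# Toy sketch (assumed numbers): how much excess evidence, in bits, a flat
# "X is true" transmits when the speaker's real confidence is only 60%.
import math

def log_odds(p):
    """Log-odds of a probability p."""
    return math.log(p / (1 - p))

actual = 0.60     # speaker's real confidence
signalled = 0.95  # assumed confidence a listener reads into an unhedged assertion

# Evidence the listener wrongly gains, relative to what was warranted
excess_bits = (log_odds(signalled) - log_odds(actual)) / math.log(2)
print(round(excess_bits, 2))  # → 3.66
```

On these assumptions, dropping the hedge hands the listener several bits of evidence the speaker never had - which is exactly the sense in which weasel words are honest.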
Stripped down to essentials, Eliezer is asking you to assert that God exists with more confidence than it sounds like you have. You are not willing to say it without weasel words because to do so would be to express more certainty than you actually have. Is that right?
If you're reading this, Kurige, you should very quickly say the above out loud, so you can notice that it seems at least slightly harder to swallow - notice the subjective difference - before you go to the trouble of rerationalizing.
There seems to be some confusion here concerning authority. I have the authority to say "I like the color green." It would not make sense for me to say "I believe I like the color green" because I have first-hand knowledge concerning my own likes and dislikes and I'm sufficiently confident in my own mental capacities to determine whether or not I'm deceiving myself concerning so simple a matter as my favorite color.
I do not have the authority to say, "Jane likes the color green." I may know Jane quite well, and the probability of my statement being accurate may be quite high, but my saying it is so does not make it so.
I chose to believe in the existence of God - deliberately and consciously. This decision, however, has absolutely zero effect on the actual existence of God.
Critical realism shows us that the world and our perception of the world are two different things. Ideally any rational thinker should have a close correlation between their perception of the world and reality, but outside of first-hand knowledge they are never equivalent.
You are correct - it is harder for me to say "God exists" than it is for me to say "I believe God exists," for the same reason it is harder for a scientist to say "the Higgs boson exists" than to say "according to our model, the Higgs boson should exist."
The scientist has evidence that such a particle exists, and may strongly believe in its existence, but he does not have the authority to say definitively that it exists. It may exist, or not exist, independent of any such belief.
You do not cause yourself to like the color green merely by saying that you do. You are describing yourself, but the act of description does not make the description correct. You could speak falsely, but doing so would not change your preferences as to color.
There are some sentence-types that correspond to your concept of "authority." If I accept your offer to create a contract by saying, "we have a contract," I have in fact made it so by speaking. Likewise, "I now pronounce you man and wife." See J.L. Austin's "How to Do Things With Words" for more examples of this. The philosophy-of-language term for talking like this is that you are making a "performative utterance," because by speaking you are in fact performing an act, rather than merely describing the world.
But our speech conventions do not require us to speak performatively in order to make flat assertions. If it is raining outside, I can say, "it is raining," even though my saying so doesn't make it so. I think the mistake you are making is in assuming that we cannot assert that something is true unless we are 100% confident in our assertion.
Arguing over definitions is pointless if we're trying to name ideas. Arguing over definitions is absolutely necessary if there's disagreement over how to understand the stated positions of a third party. Establishing clear definitions is extremely important.
If someone has committed themselves to rationality, it's natural for us to ask "what do they mean by 'rationality'?" They should already have a clear and ready definition, which once provided, we can use to understand their commitment.
Sure, it is useful to ask for clarification when we don't understand what someone is saying. But we don't need to settle on one "correct" meaning of the term in order to accomplish this. We can just recognize that the word is used to refer to a combination of characteristics that cognitive activity might possess. I.e. "rationality" usually refers to thinking that is correct, clear, justified by available evidence, free of logical errors, non-circular, and goal-promoting. Sometimes this general sense may not be specific enough, particularly where different aspects of rationality conflict with each other. But then we should use other words, not seek to make rationality into a different concept.
I will not cease using perfectly good words.
I will, however, ask that people be prepared to define and explain the words when they use them. Words are problematic only when they become empty signifiers, labels attached to nothing.
Words can become less useful when they attach to too much as well as too little. A perfectly drawn map that indicates only the position and exact shape of North America will often be less useful than a less-accurate map that gives the approximate location of its major roads and cities. Similarly, a very clearly drawn map that does not correspond to the territory it describes is useless. So defining terms clearly is only one part of the battle in crafting good arguments; you also need terms that map well onto the actual territory and that do so at a useful level of generality.
The problem with the term "rationality" isn't that no one knows what it means; there seems to be wide agreement on a number of tokens of rational behavior and a number of tokens of irrational behavior. Rather, the problem is that the term is so unspecific and so emotionally loaded that it obstructs rather than furthers discussion.