Strictly speaking, a scientific theory doesn't have to be consistent across all possible cases in order to be useful, only within its domain of applicability. Newton's and Einstein's theories of gravity both have edge cases where they give a division-by-zero error; that doesn't stop them being useful in more typical cases.
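(Concretely, the Newtonian case is the textbook example: the inverse-square law predicts a force that blows up as the separation between two masses goes to zero,

$$F = \frac{G m_1 m_2}{r^2} \;\to\; \infty \quad \text{as } r \to 0,$$

yet the theory remains perfectly serviceable everywhere that $r$ stays comfortably nonzero.)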
According to a math professor I had in college, black holes aren't a mere divide by zero error; they're something even worse.
I think that this indicates it's better to specify the cases for which a given belief does apply, even if those cases can't be defined any better than "all the cases I've currently seen." Generally, double standards can be stated as coherent principles if one tries hard enough. With regard to prohibiting behaviors in society, people often use "slippery slope" arguments when they don't recognize the underlying principle. Take marriage, for example; those opposed to gay marriage often say that if we allow it, next we'll have people marrying their pets. The underlying principle that they miss is the ability to consent to marriage, which of course animals don't possess.
If the principle on which a person bases eir actions is, in essence, rationality (i.e. updating the map to fit the territory better), that might lead to far-past behavior seeming inconsistent with near-past behavior, but the overarching principle is still being maintained. I think that's a different sort of inconsistency than someone who takes actions that clearly contradict a principle ey still claims to hold. If we take a relatively small window of time, during which a person does not claim to have updated eir map, we should see that ey acted consistently with eir stated principles. To me, immediate predictability is more important for trustworthiness than long-term predictability, because it signals a principled actor.
I like your reply a lot. =)
> With regard to prohibiting behaviors in society, people often use "slippery slope" arguments when they don't recognize the underlying principle. Take marriage, for example; those opposed to gay marriage often say that if we allow it, next we'll have people marrying their pets. The underlying principle that they miss is the ability to consent to marriage, which of course animals don't possess.
Ah yes, slippery slope arguments. People tend to overuse them in many areas where they don't apply. At the same time, however, there is sometimes some truth in slippery slope arguments (over a long period of time): once a new norm is established, any deviation from it will be seen as "extremist".
With both gay marriage and drug decriminalization, for example, I actually see the "slippery slope" argument as partially true (even though there is really no rational reason to continue to oppose either policy). I'm noticing a lot of people (on Internet forums, anyway) say that there really is no rational reason to oppose consensual relationships between any two people (including incest, as long as reliable birth control is used) or the decriminalization of all drugs.
So of course, these are slippery slopes. But they're really only slippery slopes because people don't understand the underlying principle here: that there is no rational reason to punish victimless actions.
> If the principle on which a person bases eir actions is, in essence, rationality (i.e. updating the map to fit the territory better), that might lead to far-past behavior seeming inconsistent with near-past behavior, but the overarching principle is still being maintained. I think that's a different sort of inconsistency than someone who takes actions that clearly contradict a principle ey still claims to hold.
Also an excellent point. It's often difficult to articulate this overarching principle (I have one in mind, but I still can't precisely put it into words). Part of it is because I also expect to make mistakes, and to learn from them. And yes, I agree that it's a different sort of inconsistency (although it would be difficult to convince most people of that).
> With both gay marriage and drug decriminalization, for example, I actually see the "slippery slope" argument as partially true (even though there is really no rational reason to continue to oppose either policy). I'm noticing a lot of people (on Internet forums, anyway) say that there really is no rational reason to oppose consensual relationships between any two people (including incest, as long as reliable birth control is used) or the decriminalization of all drugs.
But is this a consequence of a dangerous slippery slope, or the consequence of the same legitimate arguments being true in those cases?
> So of course, these are slippery slopes. But they're really only slippery slopes because people don't understand the underlying principle here: that there is no rational reason to punish victimless actions.
In such cases it is agreement, not understanding, that matters.
Now, of course, it's important for a scientific theory to be consistent across all cases within its domain; otherwise, the theory is useless. We can specify the preconditions for consistency, and we may be able to argue for an expansion (or reduction) of its domain as time goes on.
But what about behavioral consistency? Certainly, behavioral consistency makes it easier for people to predict what you'll do in the future. So people who are behaviorally consistent are easier to trust, so to speak.
But what if we want to be right? If we want to be consistent, we have to stick with a certain behavior and *assume* that the "utility" of holding to that behavior will be stable across time. But sometimes there will be special cases where the "utility" from deviation is greater than the "utility" from non-deviation (and in many of these cases, the optimal move may be to hide the deviation from everyone else, perhaps in the interests of "fairness", since many people get outraged by violations of "fairness"). Of course, we can specify some of these special cases beforehand (for example, you may require a security clearance for access to certain types of information, or a license to create certain types of drugs). But we cannot reliably predict each and every one of them (there may be cases where you would increase "utility" by giving out information X to someone who didn't have clearance).
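To make the comparison concrete, here's a minimal sketch of that decision rule; the function name and all the utility numbers are hypothetical, purely for illustration:

```python
# A toy model of "deviate only when it pays": hypothetical utilities,
# with the reputational cost of being seen breaking your own rule
# charged against the deviation.

def should_deviate(u_deviation: float, u_adherence: float,
                   reputation_cost: float) -> bool:
    """Return True if deviating beats sticking to the stated rule."""
    return u_deviation - reputation_cost > u_adherence

# e.g. giving out information X to someone without clearance:
print(should_deviate(u_deviation=10.0, u_adherence=4.0,
                     reputation_cost=8.0))
# -> False: the gain doesn't cover the hit to trustworthiness
```

The hard part, of course, is that `reputation_cost` is exactly the term you can't estimate reliably in advance, which is why the special cases can't all be enumerated beforehand.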
You could call those "double standards". People will often accuse you of hypocrisy if you're inconsistent, but you can guard yourself against those accusations by introducing double standards. Whether those double standards are defensible or not depends, of course, on other factors. There are many behaviors, for example, that one may have to prohibit the entire population from pursuing. But some of these behaviors can be responsibly pursued by a small subset of the population (the only problem is that if you say so openly, potentially irresponsible people will pursue these behaviors too, as people are prone to overestimate their ability to self-regulate).
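Stated as a rule rather than as hypocrisy, such a double standard is just a blanket prohibition with explicit exceptions. A minimal sketch, with made-up behavior and credential names:

```python
# A "double standard" written out as a coherent rule: prohibited in
# general, permitted for a credentialed subset. All names are invented.

RESTRICTED = {
    "make_controlled_drug": {"pharma_license"},
    "read_information_X":   {"security_clearance"},
}

def allowed(behavior: str, credentials: set) -> bool:
    """Blanket prohibition with explicitly stated exceptions."""
    required = RESTRICTED.get(behavior)
    if required is None:
        return True                      # unrestricted behavior
    return bool(required & credentials)  # need at least one credential

print(allowed("make_controlled_drug", {"pharma_license"}))  # True
print(allowed("make_controlled_drug", set()))               # False
```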
Now, in terms of beliefs, some of these double standards come with updated information. You might say that "action X is 'bad'" across all possible cases, but then update your belief and say that "action X is 'bad'" in most cases, with a few exceptions. If you do that, some people may accuse you of straying from consistency. Intellectually, the most desirable option is to refrain from making blanket statements such as "action X is 'bad' across all possible cases". But in terms of consequences, blanket statements are easier to enforce than non-blanket statements, and they also have a psychological effect that non-blanket statements do not (it is very easy to forget about the exceptions).

As a book by Simonton (2009) observed, the most famous psychologists in history are not necessarily the ones who were right, but the ones who held extreme views. Now, of course, fame is independent of being "less wrong". But at the same time, if you want to change the world (or "increase utility"), you have to draw some attention to yourself. Furthermore, explaining the "exceptions to the rule" is often tl;dr to most people. And people might be less inclined to trust you if you keep updating your views (especially if you try to assign probability values to beliefs).
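For what "assigning probability values to beliefs" and updating them looks like mechanically, here's a toy Bayesian update; the numbers are invented for illustration:

```python
# Updating "action X is 'bad'" from a blanket claim to a graded belief,
# via Bayes' rule. All probabilities here are made up.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior P(belief | evidence) by Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

belief = 0.9                             # "action X is 'bad'"
belief = bayes_update(belief, 0.2, 0.7)  # a surprising counterexample
print(round(belief, 2))                  # ~0.72: probably still bad,
                                         # but noticeably less certain
```

Which is exactly the sort of visible revision that reads as "inconsistency" to an audience expecting blanket statements.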