Epistemic Status: I've been thinking about this for a number of years, looking for steelmen of the position against hypocrisy. I haven't found anything satisfying yet, but maybe you can tell me why hypocrisy is actually bad? Barring that, I'm rather confident in the view expressed here.

"Hypocrisy is bad" is a deeply-rooted assumption in our culture. If I'm trying to give someone advice, I'll flinch away from saying things I don't do myself. For example, I don't have a driver's license, so I would hesitate before suggesting that someone else get one. If I do suggest it, I'll get kind of apologetic, and hedge my statement with "but I haven't gotten one yet". This, despite the fact that whether I've gotten one is almost irrelevant to whether they should get one. It seems to me that this habit is universal in American culture, and I'd be surprised (and intrigued!) to hear about any culture where it isn't.

I think the anti-hypocrisy norm is likely based on a blame/praise model of advice. If advice is always norm-enforcing criticism, and is tied to how many "social points" you score, then the anti-hypocrisy norm prevents a particular type of social exploit. A hypocrite can enforce, and claim to believe in, norms which they themselves break. They score social points, gaining status and power, based on an unfair self-favoring application of rules. Rulers who are above the laws they themselves enforce violate our fairness norms; they are the very image of corruption.

However, this problem can be addressed in a more specific way by calling out unfairness, building power structures with transparency and accountability in mind, and deserving trust as a community. The cost of the anti-hypocrisy norm is too high; it throws out too much useful advice, constraining the directions in which we think when we're trying to help one another.

Rob Bensinger already wrote a whole post on why hypocrisy is a bad concept, which I also endorse. Ironically, though, he has a different concept of hypocrisy than I do, so my argument against the concept is somewhat different. He treats "hypocrisy" as "inconsistency", and notes that winning an argument by pointing out inconsistency is not very informative: showing that someone holds two incompatible views doesn't tell you which of the two is right.

I think of "hypocrisy" as specifically referring to an inconsistency between words and actions.

Put simply: inconsistency between words and actions is no big deal. Why should your best estimate about good strategies be anchored to what you're already doing? The anti-hypocrisy norm seems to implicitly assume we're already perfect; it leaves no room for people who are in the process of trying to improve. We know people have akrasia. And akrasia isn't necessarily the only reason why actions may differ from words. It's important to be able to think and talk about better ways of doing things without necessarily changing course at the drop of a hat. This is especially true in group coordination situations. Scott Alexander argues that anti-hypocrisy norms for journalists prevent them from suggesting improvements to society.

Flinching away from giving advice you don't yourself follow is accompanied by a knee-jerk reaction which discounts advice from others if we realize that the advice was hypocritical. Even though I've been trying to root out this mental habit for a long time, I still catch myself updating away from advice when I realize that the person who gave it doesn't follow it. It feels relevant; decisive, even. Upon examination, though, it isn't.

One counter-argument is: if the hypocrite lacks experience with what they preach, they likely don't know what they're talking about. There may be obstacles to following their advice which they simply haven't experienced.

This may be relevant, but if so, I emphasize that it should be considered in and of itself, not as part of an anti-hypocrisy flinch. Why? Because personal experience is not the only way to gain knowledge. The anti-hypocrisy norm is founded in part on a non-Bayesian model of knowledge which emphasizes personal experience and vivid stories from close in your social network. We should instead assess another person's beliefs on their merits.

It's also largely contradicted by Beware Other-Optimizing. The advice-giver successfully following the advice themselves is not much evidence in favor of the advice. So, checking for hypocrisy is not a very good test of advice quality.

Anti-hypocrisy norms do provide a safeguard against using knowledge of cognitive biases to become more effective at motivated arguing, if you can successfully stop yourself from calling out biases in others when you haven't conquered them yourself. However: this, too, is a very weak heuristic. Conquering a bias in yourself should not give you free license to use that particular bias as a claim against others. Nor should failing to conquer a bias stop you from trying to help others conquer it.

One reason for the anti-hypocrisy norm which I do find more concerning is that an inconsistency between words and actions is a good indicator of dishonesty, so a norm against it may be a significant safeguard against liars. Again, though, the heuristic seems over-emphasized. There are discrepancies between words and actions which suggest that someone is lying, and there are those which don't. What I see isn't people considering whether hypocritical advice is coming from liars. What I see is an unthinking knee-jerk reaction which discounts hypocritical advice.

Putting your money where your mouth is, overcoming akrasia, doing the best thing you can figure out how to do, seeking to understand and honestly state the motives behind your actions: these are all things which we value, and which point toward minimizing the discrepancy between words and actions. Anti-hypocrisy norms don't help us work toward these things, however. If anything, they Goodhart on minimizing the discrepancy, by encouraging us to align words with actions even in cases where doing so prevents maximal honesty and truth-seeking.

So, what do you think? Have I failed to Chesterton-fence this one? Why do people flinch away from hypocrisy? Is hypocrisy bad? Am I failing to see something? Is my hufflepuff cynicism blocking me from seeing the advantages of holding people to their words / holding people to higher standards?

Hat tip: The seed of this post was planted long ago by my friend Thomas Kroll, who once said that the one idea he would have people remember him for was that hypocrisy isn't bad.

ETA:

Thanks to a number of discussions in the comments, several important distinctions have been called out, which I was conflating.

Most importantly, I was ignoring the question of norms and status claims, and arguing as if all advice is about the epistemic question of whether it would be useful to do some thing. Hypocrisy about norms is problematic, because it is easy for a hypocrite to set up unfair (self-serving) norms which can't be called out as unfair/unjust in any other way than to call out the hypocrisy itself (due in part to limitations on what can be publicly called out -- other objections to the rule may simply not be clear/defensible). There's a case to be made that this in itself justifies the anti-hypocrisy flinch, because it is too difficult in practice to detangle norm-setting claims from non-norm-setting ones.

I also conflated my point with whether "hypocrisy is bad". It should be noted that hypocrisy absolutely implies that something unfortunate is going on: either the hypocrite is lying, or mistaken, or taking suboptimal actions which they have enough information to improve. The issue, metaphorically, is that this is just like saying that if you're travelling, then you must either not be where you want to be or be headed in the wrong direction. Avoiding travel doesn't necessarily put you where you need to be; avoiding hypocrisy doesn't necessarily help you say the right words or take the right actions.

Other important distinctions:

Joachim Bartosik points out that I'm not clear on which of these questions I'm asking:

1. Are there good reasons to be suspicious of advice that the advice giver doesn't follow themselves?
2. Is there a good reason to support social norms against hypocrisy?
3. Are there good reasons to avoid giving advice that I don't follow myself?

To this, I might add "are there good reasons to avoid taking actions which go against something I said?"

Along similar lines, my discussion with Said resulted in a number of possible rules around hypocrisy which we might follow or question:

1. hypocrisy => what hypocrite said is wrong

2. hypocrisy => what hypocrite said is not evidence

3. hypocrisy => hypocrite is blameworthy

4. hypocrisy => hypocrite is to be viewed with high suspicion, on priors

5. hypocrisy => EITHER what hypocrite said was wrong, OR they are blameworthy

6. hypocrisy => hypocrite is to be treated as a hostile agent for the purpose of evaluating their words in this context

7. hypocrisy => call out hypocrisy

8. it's a conversation about norms, and the hypocrite is making an implicit status claim with their words => hypocrisy should counter-indicate the norm fairly strongly and also detract from the status of the hypocrite

My primary intention in the post was to argue against #1 and #2. I still think they are bad rules; hypocrisy provides a small amount of evidence against a claim, but I think it's highly over-emphasized in practice. I also disagree with all of the others, except for #8, which I think is true because of the earlier-mentioned point about hypocrisy and norms.

78 comments

As you say, there are certainly negative things that hypocrisy can be a signal of, but you recommend that we should just consider those things independently. I think trying to do this sounds really really hard. If we were perfect reasoners this wouldn't be a problem; the anti-hypocrisy norm should indeed just be the sum of those hidden signals. However, we're not; if you practice shutting down your automatic anti-hypocrisy norm, and replace it with a self-constructed non-automatic consideration of alternatives, then I think you'll do wors... (read more)

2abramdemski
Can you give a typical example, for yourself (maybe look out for examples in daily life and give one when it comes up)? I think, for myself, the anti-hypocrisy flinch is causing me problems in almost every case where I consciously notice it. So my position is really more like "notice that this response is mostly useless/harmful. Also, in every case I can think of where it's not, you could replace it with something more specific." For example, it's often happened that a friend is giving advice and admits that they don't do the thing themselves. I notice that in the social context, this feels something like a 50% decline in the credence given to the advice -- it feels very real. But, when I notice this, it usually doesn't seem valid on reflection. Or, maybe a friend said something and I later start thinking about ways in which that friend doesn't live their life in accordance with such a statement, and I start experiencing righteous indignation. Usually, when I reflect on this, it isn't very plausible. I'm actually stretching the meaning of their statement, and also stretching my interpretation of how they live their life, in order to paint a picture where there's a big mismatch. If I talked to them about it, they would predictably be able to respond by correcting those misinterpretations -- and if I held on to my anger, I would probably double down, accusing them of missing the point and not trying to charitably understand what I'm saying. There's usually some real reason for the annoyance I'm feeling with my friend, which has only a little to do with the hypocrisy accusation.
1query
I will see if I can catch a fresh one in the wild and share it. I recognize your last paragraph as something I've experienced before, though, and I endorse the attempt to not let that grow into righteous indignation and annoyance without justification -- with that as the archetype, I think that's indeed a thing to try to improve. Most examples that come to mind for me have to do with the person projecting identity, knowledge, or an aura of competence that I don't think is accurate. For instance holding someone else to a social standard that they don't meet, "I think person X has negative attribute Y" when the speaker has also recently displayed Y in my eyes. I think the anti-hypocrisy instinct I have is accurate in most of those cases: the conversation is not really about epistemics, it's about social status and alliances, and if I try to treat it as about epistemics (by for instance, naively pointing out the ways the other person has displayed Y) I may lose utility for no good reason.
2abramdemski
I think I may agree with the status version of the anti-hypocrisy flinch. It's the epistemic version I was really wanting to argue against. ... That doesn't seem like treating it as being about epistemics to me. Why is it epistemically relevant? I think it's more like a naive mix of epistemics and status. Status norms in the back of your head might make the hypocrisy salient and feel relevant. Epistemic discourse norms then naively suggest that you can resolve the contradiction by discussing it.
1query
Ok yeah, I think my concern was mostly with the status version-- or rather that there's a general sensor that might combine those things, and the parts of that related to status and social management are really important, so you shouldn't just turn the sensor off and run things manually. I was definitely unclear; my perception was the speaker's claiming "person X has negative attribute Y, (therefore I am more deserving of status than them)" and that, given a certain social frame, who is deserving of more status is an epistemic question. Whereas actually, the person isn't oriented toward really discussing who is more deserving of status within the frame, but rather is making a move to increase their status at the expense of the other person's. I think my sense that "who is deserving of more status within a frame" is an epistemic question might be assigning more structure to status than is actually there for most people.
2abramdemski
That's a good point. Given that I didn't even think of the distinction explicitly until engaging with comments, it seems really easy to confuse them.

I have several problems with your view.

First, the auxiliary (but still important) one:

We know people have akrasia.

We do? What does this mean, other than “we know people sometimes don’t do the things they claim to want to do”? It’s not like “akrasia” is something we can find in the territory, right? It’s just a (fairly sloppy and probably pretty harmful) part of our map.

Second, the main one:

People can, and do, lie. People can, and do, deliberately deceive us, in small ways and in large ones. People manipulate us. People say or do things that are designe... (read more)

7abramdemski
You've updated me a little in your direction, but only intuitively, not in terms of explicit models. So let's try to hash this out.

This reminds me of my post on intellectual trust. It sounds like you are talking about operating in "face culture" as defined in that post. I agree that you typically have to use face culture when a large number of people get involved, or when strangers get involved. However, I don't think everything is like this all the time, and when I write about rationality, I mainly think about environments where you can be intellectually honest. But still, I'm not convinced that the anti-hypocrisy norm is necessary even in untrustworthy environments.

Face culture cares about status and appearances. What you agree with or disagree with is about the social hierarchy and who you're allied with. Everyone has to worry about these dynamics to get by. It's important not to say anything too critical about the boss, to say nice things to teammates, etc.

I said: But in face culture, advice actually is blame. Or, something somewhat more complicated. Because any suggestion could be a bid to create a social norm (making not following the suggestion blameworthy), every suggestion is tinted with the possibility of blame. Advice can also be a way to show power over a person. Rejecting the advice can signal rejecting an implied power structure. Seeking advice from someone signals respect, and may have more to do with what you think of their status, rather than how good you expect their advice to be. So, giving and taking advice is involved in complicated social moves.

...but, if I'm dealing with face culture (which I am often enough), I already assume all that stuff is going on. In effect, everyone is already a hypocrite, including me, just for interacting with face culture. Whether someone is a hypocrite isn't a relevant signal. The question in face culture is whether a particular act of hypocrisy is something you can and should call people on. So, in the fram
2Said Achmiz
Your reply, to be honest, is perplexing to me. I’m not talking about “face culture”, or any other thing that you can circumscribe, name, and declare to not apply to Less Wrong or to whatever other community you like. I’m talking about people, period. Do you think everything I’ve said doesn’t apply here, on LW? Of course it does. It applies not one iota less! Do you think it doesn’t apply in “meatspace” rationalist communities? It manifestly, unquestionably does. Those are, in fact, the communities I was thinking about when I wrote my examples! I invite you to re-read the paragraph of the grandparent comment that begins with “The problem with your view, in essence …”—but this time, read it with the understanding that I am talking about precisely every community or social context that you would describe as an environment of “intellectual trust”, “intellectual honesty”, etc.
2abramdemski
Well, setting aside the question of whether it's possible to escape these dynamics ever, I also made some points addressing the question of whether the anti-hypocrisy norm is useful when such dynamics dominate. What did you think of that part? One of the arguments I made was that, in these cases, you have other things you can call people out on that point more directly at the problem, like unfairness or dishonesty. The hypocrisy might be part of your argument that something bad must be going on, as in the example where someone tries to get away with claiming their actions are good and their words true and the two are very different. It just seems like putting negative valence on hypocrisy is too simplistic.

Maybe we should try and be more specific about the disagreement. There are several inferences one might make from hypocrisy:

1. hypocrisy ⇒ what hypocrite said is wrong
2. hypocrisy ⇒ what hypocrite said is not evidence
3. hypocrisy ⇒ hypocrite is blameworthy
4. hypocrisy ⇒ hypocrite is to be viewed with high suspicion, on priors
5. hypocrisy ⇒ EITHER what hypocrite said was wrong, OR they are blameworthy

I don't endorse any of these heuristics, but I come close to endorsing #5 -- either what a hypocrite says is wrong, or their actions are suboptimal (which may or may not indicate blameworthiness, but must be at least a little bad). In the post, I was mostly arguing against #1 and #2. I think we might be mainly discussing #3 and #4, instead.
4Said Achmiz
Having re-read your comment, I am not actually sure which part of it you’re referring to. Would you mind quoting the relevant bits? (And maybe paraphrasing/summarizing them, also, because the part of your comment that I think you might mean read to me as rather confused—which probably means that I didn’t understand it.)

As to the inferences you list—I wouldn’t endorse any of them either, not because I think they’re wrong per se, but because they miss the point. (#4 comes the closest, perhaps, to being relevant.) Rather, I would say:

6. hypocrisy => hypocrite is to be treated as a hostile agent for the purpose of evaluating their words in this context

In other words, having uncovered our interlocutor’s hypocrisy, we immediately discard any assumption that the intent behind their words is either (a) as it appears on the surface, or (b) friendly to us. Instead, we now assume that the hypocrite’s words have a purpose that is self-serving (possibly at our expense, though without harming us being the explicit goal), or actively hostile, or both. (Certainly the former is likely to be much more common than the latter, in any but the most toxic and dysfunctional social environments.)

Now, having made this assumption, what can we say about your inferences #1–5?

1. Are the hypocrite’s claims wrong? Perhaps, perhaps not. What’s important is that the fact that the hypocrite is making said claims cannot be reliable evidence of their truth—as it would have been, under the (now-discarded) assumption that the hypocrite is a friendly agent, who has honest and honorable intentions toward us (and thus would not deliberately lie, and would in fact make an effort to speak truth as he sees it).
2. This is covered under #1.
3. Blameworthiness and praiseworthiness are moral judgments, which may be made only relative to some moral framework. Whether your preferred moral framework judges the hypocrite to be blameworthy, or praiseworthy, or neither, is not an inference, but a judgmen
2abramdemski
I'm not sure about that. You are talking about escaping these dynamics, whereas I am more talking about living with them. You think(?) an anti-hypocrisy norm helps to escape the dynamic, whereas I don't. It isn't clear to me who thinks they are easier to escape. I think the main tools for escaping them are forgiving tit-for-tat and shared goals. "Forgiving tit-for-tat" means extending others slightly more trust than they extend you (to hopefully drag things toward a positive dynamic). Shared goals are powerful, but also dangerous (they can involve cult-like dynamics).
2Said Achmiz
I absolutely don’t think that an anti-hypocrisy norm helps to escape the dynamic. I already said what lets you escape the dynamic—and in such situations as I outlined, one needs no such norms; they are operative, of course, but they’re not really what stops hostile behavior. It’s simply that such behavior generally doesn’t take place, in such contexts. It seems clear to me that you think these dynamics are easier to escape than I do. “Living with” the described dynamics is precisely what I am talking about. That’s what the anti-hypocrisy norm is for. Neither “forgiving tit-for-tat” nor shared goals help very much, in my view. They are beneficial in their own right, for other reasons, but you absolutely still need the norm against hypocrisy.
2abramdemski
Hmmmm. I almost want to agree with you on #6. I wrote up about half of a response based on agreeing about #6. But, I can't quite agree. I don't actually infer all that just from a divergence between words and actions, and I think it would be a bad idea to have it on that trigger. I do agree with something close to #6. I would think of it as "inferring that I'm dealing with face culture". Inferring that words mean less than they say, that there's a social ritual going on. Motives and beliefs are being stated according to the rules of the game, not according to truth. My disagreement with #6 is that I don't trigger it based on a divergence between words and actions. There are other definitions of "hypocrite" which I do trigger it on, like the definition proposed by Lukas_Gloor, or (less certainly) hypocrisy with the added proviso mentioned by Dagon. There are also a lot of other subtle or not-so-subtle hints which trigger this for me. But I suspect we have a larger disagreement over some stuff beyond #6. In particular, when applying my version of #6, I basically never call it out. So, we can consider the following:

7. Hypocrisy => call out hypocrisy

Every time I have applied #6 and then tried to apply #7, things have gone rather poorly for me. This is why I said, of an immune response against hypocrisy: I was (and still am) interpreting your "immune response" as involving #7. Is that accurate?
2Said Achmiz
I don’t agree that this is close to my #6 at all. Again, I think that your attempt to circumscribe what I’m saying, and to claim that it’s an artifact of some specific sort of “culture” that you can name, and set apart from some other contexts with a different culture, is misguided. As far as “calling out” hypocrisy goes, well, that’s as may be. Your track record with calling out hypocrisy does not seem implausible to me, certainly. But then, I never suggested that calling anything out is strictly necessary. It may be helpful sometimes, but not always, and perhaps not even often. That, however, is different from trying to change the norm against it… It is not, as you see. Calling out hypocrisy may be part of the manifestation of the immune response in any given situation, or it may not be. It depends on many things. (For one thing, callouts may take many forms. For example, they may take the form of subtle, and deniable—perhaps even apophatic—implications. If there’s a solidly entrenched, universally shared norm against hypocrisy, then such a gentle approach can serve quite well. In such social contexts, one prefers to avoid the label of hypocrite even if one is able, after all, to defend against the charge; the ideal is not only blamelessness, but manifest, unimpeachable blamelessness. The “appearance of impropriety” parallel may once again be drawn.) I certainly disagree with this. Again, I just don’t buy the “benign hypocrisy” idea. (It is, in any case, socially corrosive even if “true” in any given case. I put “true” in scare quotes, of course, because whether to categorize the behavior as “benign hypocrisy” is precisely at issue; what’s actually true is some empirical facts of the matter.) [emphasis mine] I agree that the bolded part is a danger, but it’s not as great a danger as you suggest, and not nearly as great a danger as the reverse. Partly this is because I think that you exaggerate, w.r.t. the un-bolded part of the quote. Putting on blinders i
2abramdemski
[how did you get nested quotations to work? I can't seem to manage that...] I think this disagreement probably was an illusion generated by our differing definitions of hypocrisy: So I think we agree that there are benign divergences between words and actions. I don't know what your definition of hypocrisy is to evaluate whether we disagree beyond that.
2Said Achmiz
GreaterWrong uses a raw Markdown editor, where you can do nested quotations in the usual Markdown way. (If you don’t know or recall the format of the quotation markup, you can use the quotation button on the GUIEdit toolbar: select the first level of the quote, click the button; select the quoted text plus the next level, click the button; etc.) As for the rest of your comment—see my reply in the other thread. I don’t think there is any difference in our definitions of hypocrisy.
2abramdemski
I meant for the immediately following part of that comment to serve as a restatement of the bit I was most interested in your response to, IE, That was intended as an elaboration of the part where I reacted to your examples by (briefly) stating how I thought there were better things to call people out on in each case. (I'll try to reply to everything else later, thanks for continuing the discussion!)
5Said Achmiz
I see, thanks. So, the problem with that line of reasoning is that it would work if hypocrisy were sometimes a co-occurrence of a bad thing, but sometimes of a good or neutral thing. But it does not seem that way to me—not to any degree that matters, anyway. I do not take seriously the “akrasia” argument. Let’s consider a scenario or two:

Scenario 1a

A: Everyone ought to do X.
B: Do you do X?
A: Oh, no, I don’t do X, but really I should. Akrasia, you know.

Scenario 1b

A: Everyone ought to do X. I don’t do X myself, but I really ought to. I’m trying, but failing. Akrasia, you know.

----------------------------------------

Scenarios 1a and 1b are slightly different. In scenario 1a, A could’ve gotten away with advocating X without his hypocrisy being revealed. That is strictly more blameworthy than scenario 1b, where A admits the disconnect between his words and his actions, but insists that it’s a failure of willpower (or whatever it is that “akrasia” in fact maps to).

Notice what is happening: A is introducing (or seeking to introduce) a new norm of behavior. Should this norm be accepted, conforming to the norm will be socially rewarded, and deviation from the norm will be socially punished. Of course, conforming to the norm is costly (which is a large part of why it’s socially rewarded). Now, suppose the norm is accepted. Should A be socially punished for deviating from it? If not, why not?

Well, in practice, what often happens in such a case is that A might be socially punished a little, but not a lot. You see, A believes that this norm should exist, and he advocates the norm, and he even admits that he is flawed in his deviation from it—these are praiseworthy behaviors, aren’t they? But in this case A has gotten something for nothing. Talk is cheap; what does it cost A to speak as he does? And he gets praise for it! Everyone else, of course, must choose between conforming to the norm (which is costly in resources) and deviation (which is costly in s
2abramdemski
To what extent do you think there is still a disagreement between us, if I'm in agreement about the rule 8. "If it's a conversation about norms, and the hypocrite is making an implicit status claim with their words, noticing the hypocrisy should counter-indicate the norm fairly strongly and also detract from the status of the hypocrite." I know we have pending points unrelated to that (IE, the status of #6), but it seems like bringing out the distinction of #8 may change the conversation. Certainly I was ignoring that distinction before. So, does your position on the disagreement about #6 change, with that in mind? If not, my response to the scenarios which you mention above is that (unless I'm mistaken) they fall under #8, so it seems like I don't need anything like #6 to get them right.
5Said Achmiz
The problem with your #8 is that it’s too specific. What you seem to be doing here is taking a fairly general analytical framework, extracting two specific conclusions from it, and then replacing the framework with the conclusions. This is, of course, problematic for several reasons:

1. The conclusions in question won’t always hold. Note that inserting qualifiers like “fairly strongly” (and otherwise making explicit the idea that the conclusions are not an in-all-cases thing) doesn’t fix the problem, because without the framework, you don’t have a way of re-generating the conclusions, nor of determining whether they hold in any given case.
2. There could be (indeed, are likely to be) other conclusions which one may draw from said analytical framework, beyond the ones you’ve enumerated. (Turning an algorithm or heuristic into a lookup table is always problematic for this reason—how sure are you that you’ve enumerated all the input-output pairings?)
3. Because the analytic framework is itself only a heuristic (as we have discussed elsethread), it’s dangerous to elevate any particular conclusions it generates to the status of independent rules (or even heuristics); it obscures the heuristic nature of the generating framework.

In this case, the specific problem is that #6 is highly amenable to having its output affected by other things that we know about the agent in question (i.e., the alleged hypocrite), in various fairly straightforward ways; whereas with your #8, it’s not really clear how to apply case-specific knowledge to modify the given conclusions (and so, if we do so at all, we’re likely to do it in an ad-hoc and imprecise manner—some sort of crude “social status override”, perhaps).

Of course, your #8 is certainly a good distillation of a particular sort of quite common hypocrisy-related issue. But beware of attempting to replace the generalized anti-hypocrisy norm with it, for the reasons I’ve given.

----------------------------------------

One thing
4abramdemski
I agree with your remarks about this general pattern, but the mitigating factor here is that when a powerful heuristic generates conclusions in specific cases which are clearly very wrong, it is useful to refine the framework. That's what I'm trying to do here. Your objection is that my refinement throws the baby out with the bathwater. Fine -- then where's the baby? I currently see cause for #8, but you see #8 as neglecting a bunch of other useful stuff which comes from the general anti-hypocrisy norm. Can you point to some other useful things which don't come from #8 alone? But, perhaps it is premature to have a "where's the baby?" conversation, because you are still saying "where's the bathwater?" IE, you don't see a need to throw anything out at all. Maybe it's not very cruxy, but this part didn't make sense to me. If it's dangerous to elevate #8 to the status of a heuristic because it might be taken as a rigid rule, isn't it similarly dangerous to elevate general anti-hypocrisy to the level of heuristic for fear of it becoming rigid? That's basically my whole schtick here -- that the general norm seems to create a lot of specific behaviors which are silly upon closer inspection. Your argument in the above paragraph seems to be begging some kind of assumption which gives the general norm a radically different status than any more specific variations we are discussing. Maybe it does require a radically different status, but, that seems like the subject under debate rather than something to be assumed. My argument was an either-or, stating why I didn't see the norm as useful in high-trust or low-trust situations. But I agree that I have to address the case where the norm is useful precisely because its existence prevents the sort of cases where it would be needed. But, to this I'd currently reply that I don't see what's captured by the general norm and not by #8. So the baby/bathwater discussion seems most useful at the moment: * (if we can think of ways forwa
2abramdemski
I think I might just agree with the status version of "flinching away from hypocrisy". IE, if it's a conversation about norms, and the hypocrite is making an implicit status claim with their words, noticing the hypocrisy should counter-indicate the norm fairly strongly and also detract from the status of the hypocrite. (I'll think about it more, and probably put an addendum at the end of the post calling out this and other distinctions which have been raised.)
3abramdemski
Yeah, that's a decent point. I'd taboo it here by replacing "we know people have akrasia" with: it isn't plausible to claim that every divergence between words and actions is due to malintent -- even in cases where a person is aware of the difference between what they say and what they do, and still persist, there are many cases in which no desire to be duplicitous seems to be present. For example, if someone who lives alone keeps forgetting to take out the garbage and keeps saying they should do it. There's also some fuzziness in my definition of hypocrisy as a difference between words and actions. Words can accomplish things, and actions can communicate things. The distinction should be more like, a discrepancy between what's communicated and what's accomplished.
3Said Achmiz
This is too strong and too specific a reading of what I mean. One need not “desire to be duplicitous”; one need only have an insufficient dedication to truth; or an insufficient level of personal integrity; or an insufficient self-awareness, and perhaps a bit too much desire to be seen as virtuous; or an instinctive tendency (as so many people have) toward status-seeking behavior, and not the moral fortitude, the sense of justice, to counter it. These things are so common as to be near-universal. The anti-hypocrisy norm guards against these ubiquitous traits; it prevents them (imperfectly, alas) from metastasizing and resulting in exploitation of the honest and the virtuous by the less-honest and less-virtuous. It is precisely because explicit “desire to be duplicitous” is not necessary, and because mere insufficient virtue suffices, that the eternal vigilance of the anti-hypocrisy norm is so critical.
2abramdemski
I wasn't trying to give a reading of what you meant, there. I was trying to taboo "akrasia" in my argument, in response to your initial point objecting to akrasia as the relevant concept. So the relevant question is whether your objection to my use of "akrasia" in the argument also applies to my use of "isn't plausible to claim that every divergence between words and actions is due to duplicitous malintent" in the implied revised argument. IE, your statement about it not being something in the territory. The point you are making does make some sense, though. I think my reply along the deeper branch of conversation will probably engage with it more profitably.
2Said Achmiz
I suppose I’m a bit confused, then. Certainly it’s true that it “isn’t plausible to claim that every divergence between words and actions is due to duplicitous malintent”. But who is claiming otherwise? I am saying that “hypocrisy” refers to—and guards against—a broader spectrum of behaviors than that. If you say “we know that not every divergence between words and actions is due to duplicitous malintent”, well, I agree with you, but if I never claimed that in the first place, then what purpose does the objection serve? To put it another way, “the implied revised argument” doesn’t seem to work as an argument against hypocrisy norms. What am I missing?
2abramdemski
Ohhhh. So. For me, this entire conversation has been predicated on the definition of "hypocrisy" I used in the post, IE, divergence of words from actions. You never suggested a different definition, that I caught. All of my objections were, roughly, "divergence of words from actions just doesn't seem like the right heuristic". I don't know where that puts us with respect to everything else we've discussed. [comment radically edited from a previous version when I realized what was going on here]
2Said Achmiz
Hold on—I think you might’ve misinterpreted my comment. The “that” in “a broader spectrum of behaviors than that” refers to “duplicitous malintent”, not to “divergence between words and actions”. I think our disagreement is exactly as it has appeared until now. I think that “divergence of words from actions” is the right heuristic.
2abramdemski
In that case, what about the following tabooing of Akrasia: "divergence between words and actions which is not due to hostile agency in the sense of #6" (IE, your "purpose that is self-serving (possibly at our expense, though without harming us being the explicit goal), or actively hostile, or both")? Do you agree or disagree with "it's obvious that not all hypocrisy is due to hostile agency in the sense of #6"? IE, to what extent is hostile agency universal to hypocrisy, vs a strong heuristic?
2Said Achmiz
Separately from the points I raise in my other reply, I think this proposal is problematic, because it does not at all fit the pattern of usage of “akrasia”. As far as I’ve seen, the word “akrasia” is used to explain divergence between expressed intent or desire and action; this is different from divergence between advice and action, or advocacy of norms and action, or prudential claims and action, etc. The latter is hypocrisy, the former is what rationalist-type folks call “akrasia”. The two may co-occur, but they are not identical. Now, perhaps this was your intent…? But if so, I don’t think it’s a good idea; for all that I disagree with the concept of akrasia (as employed on Less Wrong and in similar places), I don’t think that appropriating the word for an entirely different purpose is wise.
2abramdemski
I agree that it doesn't fit the usual definition; I was asking about the modified definition since it is what's actually relevant to your argument (and as you noted, the argument I actually made in the post there is not very relevant to the view you're putting forward).
2Said Achmiz
Then my comment stands—I think it is unwise to appropriate the term for this meaning. It can only serve to confuse us, and everyone we talk to.
2abramdemski
That's very much not how I use words or think they should be used. Words can be understood in terms of their use in the context of a discussion, and that's often them at their most useful.
2Said Achmiz
Ok. I think that’s an absolutely horrible way to use words, though this is probably not the best context to discuss that.
2abramdemski
I made a post specifically about the disagreement.
2abramdemski
We could discuss it on the post I cited.
2Said Achmiz
If you look back to my comment in question, you may note that I did not offer “hostile agency” as a causal explanation for hypocrisy, but as an assumption on which to base evaluation of the hypocrite’s words. Given this, it makes little sense to speak of hypocrisy being, or not being, “due to hostile agency”—even if you mentally perform the proper transformation, the explicitly causal language will inevitably put you in the mind of thinking about malicious intent, when in fact that’s not what we’re talking about. (The admittedly-awkward phrase I would substitute for “hostile agency” is “motivations and behavior such that assumption of hostile agency yields correct analysis”, which makes it clear that we’re not in fact imputing any deliberate malice to anyone—not necessarily, anyway.) That being said, if what you mean by this is indeed something like “will this assumption sometimes lead you drastically astray”, then the answer is clearly “yes” in a strict sense—but “sometimes” in this case is “not very often at all”. As I mentioned in another comment—first, there are degrees of hypocrisy; and second, that the stigma attaches even to, in some sense, “innocent” hypocrisy, is a good thing—a feature, not a bug. (This latter consideration would seem to be a non sequitur in a strictly epistemic discussion; but (a) we are also talking about norms here, and (b) norms of this type can affect group epistemics—so unless the discussion is not only strictly epistemic, but strictly individual-level epistemic, the latter consideration is, in fact, quite relevant.) (I also have another, unrelated, objection, which I’ll deal with in a sibling comment.)
2abramdemski
Ok. I agree with your clarifications (I didn't have in mind a literal "due to hostile agency", but rather “due to motivations and behavior such that assumption of hostile agency yields correct analysis”, which is what "due to hostile agency" would have to mean in the context of what we've already clarified about #6). I was asking just to make sure whether you thought the connection was absolue or a heuristic, and didn't really plan to continue this particular line of inquiry beyond that, but now it seems possibly good to discuss the frequency question which you raised. After all, the quality of a heuristic does depend on how frequently it is right. My model is that in most contexts, it already makes sense to infer "hostile agency" in this sense, IE, agency which is not necessarily out to get you but which isn't particularly looking out for your interests in the interaction and needs to be watched for that reason. However, with respect to the specific intentions behind hypocritical words, I can only think of a few examples concentrated in certain individuals where it was really associated with that. To my memory, it seems like hypocrisy tends to be either incidental to a preexisting hostile agency, or basically meaningless (easily explained by other reasons).
2abramdemski
Ah, ok.

I think you're mixing a few questions that seem distinct to me:

1. Are there good reasons to be suspicious of advice that the advice giver doesn't follow themselves?

2. Is there a good reason to support social norms against hypocrisy?

3. Are there good reasons to avoid giving advice that I don't follow myself?

@1. I think hypocrisy is always evidence for the advice being poor. It's not very strong evidence. If I can easily check sources and reasoning, and evaluate the results of taking the advice, it's probably not worth worrying much about it.... (read more)

4abramdemski
The main intention behind my post was to argue that people overreact on #1, which is bad epistemics, and also overreact with respect to #3. I think we roughly agree on #1 and #3. I'm much more uncertain about #2. I've been making the claim that the norm is the cause of the problems with #1 and #3, and should therefore be removed. But, the claim was sort of incidental to my original point, and I didn't think through it so much. There are also some other distinctions which might be drawn out. I'll think about editing the post to clarify all the possible claims.

I think the generalised flinching away from hypocrisy is, in itself, mainly a status thing. Of the explanations for hypocrisy given:

  • Deception
  • Lack of will power
  • Inconsistent thinking

None of them are desirable traits to have in allies (at least when visible to other people).

2abramdemski
Yeah. Hypocrisy can't be an ideal situation -- it always signals that something unfortunate must be going on. I might even agree with the status version of flinching away from hypocrisy? Particularly in the case that the hypocrite was saying something that made an implicit status claim initially.

I think there is a near/far thing going on here. If someone tells you to do something, then this gives you more evidence about their far mode beliefs. If someone does something, then it gives you more evidence about their near mode beliefs. If near and far beliefs disagree, which should we trust?

I think you're missing a key element. It's not hypocrisy if it's acknowledged and explained. The norm is against hidden or unexplained variance between word and deed, not against the variance itself.

Nobody's going to think twice if you say "I don't have a license because Y, which doesn't apply to you and you should probably get one". Only if you say "getting a license is great and worth sacrificing for, but I haven't bothered" will people notice the apparent contradiction and downweight your opinion... (read more)

5philh
I don't think I'd have the anti-hypocrisy flinch in that situation. I'd have it if they say "everyone should get a license, if you don't you're just lazy" and fail to acknowledge that they don't have one themself. (If I know they know I know they don't have a license, there may be no need to explicitly acknowledge it. Then the statement becomes self-deprecating.) The post talks about hypocrisy "as specifically referring to an inconsistency between words and actions", but it also talks about the anti-hypocrisy flinch, and it's not at all clear to me that the flinch is caused by the thing that's being defined as hypocrisy. Maybe I'm atypical in this regard.
2abramdemski
I don't know if the flinch is caused by exactly the definition of hypocrisy given, but it does seem to me that many/most people would flinch away from the license example reflexively.
2abramdemski
Thanks for trying to supply reasons for the norm! I agree that hidden motives are even more concerning, but I don't think that's the heuristic I see people applying in practice. It would explain why people scramble to un-hide the fact that their words and actions disagree, but I don't think it explains the fact that people will themselves act as if their advice matters less if it doesn't match their actions, or the way other people seem to discount advice from people whose words don't match their actions. As with other possible problems an anti-hypocrisy norm may prevent, I think I'd rather deal with "hidden motives" as its own problem. After all, those can already be a problem when no mismatch between words and deeds is apparent. If I did advise someone to get a license, it could well be in the second category. Left unspoken, but not difficult to infer, would be that I haven't gotten a license because I'm lazy, or I have an ugh field around it, or something along those lines. That interpretation of your example goes against what you say in the previous paragraph: if the norm is only against hidden or unexplained variance, why discount my advice? Or, if I interpret the example as hiding or refusing to state my own reasons for not getting a license, why is that so relevant? If my case _is_ different than yours, it might be helpful to have a discussion about that to clarify my models, but it's not automatically the most relevant discussion. My advice should be discounted on general beware-other-optimizing grounds, but not specifically because I lack a license with no justification to differentiate my case. Others might look poorly on me for my akrasia, but that's true whether or not I advise other people to do better. My motives may be suspect, but they won't always be; I think extra outside reasons are needed for that to be an important consideration (and I see people having this flinch response in cases where that's not the case at all). Flinching away from hyp
2Dagon
These options aren't exclusive. I can discount hypocritical advice _BOTH_ on other-optimizing grounds _AND_ on grounds that self-contradiction indicates error somewhere.
2abramdemski
Well, I agree, but in this case it seems to me that the one does explain away the need for the other.

Separately from all my other comments on this topic, I’d like to mention a form of “hypocrisy” which is not best understood by treating the “hypocrite” as a hostile agent. (My reason for using the scare quotes will become clear shortly.)

Suppose I say: “Everyone should do X.” But I, myself, do not do X. This, clearly, is hypocrisy.

On the other hand, suppose I say: “People of group A should do X.” I, myself, am not in group A; and I do not do X. Is this hypocrisy? No, because my words do not imply that I should do X, and so my failure to do X does not consti... (read more)

5Said Achmiz
And here’s a counterpoint. I said that what I describe in the parent comment is… But is that quite true? It seems to me that it’s true only if you accept (or would accept, if it were put to you) the truth of the original, intended, proposition (of the form “People of group A should do X.”—along with its implicature, “People not of group A need not do X.”). Otherwise, consider the situation, seen from the viewpoint of someone to whom the “hypocrite” is speaking, and who does not (necessarily) accept that original proposition (nor its implicature):

Someone—let’s call her Alice—is claiming that “everyone should do X”. But Alice, herself, is not doing X. Upon investigation, it becomes clear to you that what Alice in fact thinks is that “people of group A should do X”; and, further, that Alice does not consider herself to be in group A, but does consider you to be in group A. But this means that Alice considers herself to be better than you, in some sense! Can you really trust Alice’s recommendations, then? Furthermore: others will surely come to the same conclusions as you have. Is it acceptable to allow Alice to flout her proposed rule, given that this, by implication, is a signal—first, that Alice is not in group A (and therefore of higher status); and second, that Alice is exempt from the rules, without even needing an explicit exemption (and that, too, is status-increasing)?

Alternatively:

Someone—let’s call her Alice—is claiming that “everyone should do X”. But Alice, herself, is not doing X. Upon investigation, it becomes clear to you that what Alice in fact thinks is that “people of group A should do X”; and, further, that Alice does not consider herself to be in group A—nor, indeed, does she consider you to be of group A… but she did not share any of these considerations with you. (Why didn’t she? Well, because that is dangerous for her; to speak in an impolitic way, even to one who may be expected to sympathize with the sentiment, is risky.) What is Alice
2Dagon
The scrum example could easily be a "bad categorization" flinch, not directly about hypocrisy. It would apply even if the speaker used scrum and acknowledged that they were a bad programmer.

Thinking more about this, I'm still pretty sure I'm staying in the "boo hypocrisy" camp. But I'd like to understand the alternative you're proposing, and what effects you're looking to promote by removing the bias against it.

Do you want to see more hypocrisy than you're currently seeing? Are you not getting enough advice from people who aren't speaking from experience?

Alternatively, do you feel that you're being thwarted in giving advice, because it prevents you from advising on topics you haven't ... (read more)

2abramdemski
Ah, that, I wouldn't know, due to the problem of silent evidence. Maybe there's a lot of good advice going unsaid. Maybe not. I somewhat feel that I'm illegitimately stopping myself from saying hypocritical things, in certain situations. Mainly, I'm talking about the epistemics here. I think there's a quick response to hypocrisy, a visceral sense that it means a statement is less true, which is usually not well-founded. It systematically warps beliefs. So, I want to encourage people to notice that and adjust for it to the best of their ability. .... doesn't that sound a bit like a (reverse) applause light? A conversation halter? A semantic stopsign? That's how I feel about the way people use the anti-hypocrisy norm -- it's just a "boo", a tribal signal of who is outgroup. An automatic negative, putting horns on someone. Probably you have some more detailed view, and I shouldn't criticize you for stating it simply like that, but it did correspond to how I see it.
2Dagon
Hmm. It doesn't feel that way from the inside - I boo myself and close friends when I notice them doing something boo-worthy (like giving advice that seems to apply generally but which they don't take themselves). Someone who I only ever boo and never yay is someone I don't want to hang around much, but that's different from "outgroup" as I understand it. Some very specific examples would help a lot - it may be that you're giving louder boos than I, and you're just arguing to tone them down a bit. I'd fully agree with that position. It may be that you're saying that hypocritical advice is precisely as trustworthy as experience-backed advice, and I'd disagree with that.

I think I'm talking about a different concept than you are talking about. Here's what I take to be hypocrisy that is probably/definitely bad:

When someone's brain is really good at selective remembering and selective forgetting, remembering things so they are convenient, and forgetting things that are inconvenient. And when the person is either unconsciously or only semi-consciously acting as an amplifier of opinions, sensing where a group is likely to go and then pushing (and often overshooting) the direction in order to be first to score p... (read more)

3abramdemski
Yeah, I agree that this is a coherent cluster, is pretty bad, and probably needs to be a named concept (unless making it into a named concept makes everything terrible for reasons related to the concerns you mention). I would be surprised to hear someone say this is the central meaning of "hypocrite", but here I am, surprised. So, it seems like there are four aspects here which you're clustering together:

* Selectively remembering when you were right and not when you were wrong, or the degree to which you were right or wrong, and (perhaps implicitly) asking other people to remember this next time they doubt you.
* Getting credibility by predicting which way the group will swing, in a way which doesn't actually add information to the system.
* Cryptomnesia, remembering others' ideas as your own.
* Self-deception / lack of introspection.

(If the last item is not present, a person could be consciously implementing all of these strategies (ie, they don't actually have selective memory or cryptomnesia, but act like they do anyway).)

I might add to the cluster:

* Being obsessed with who gets credit for ideas.
* Not building a global model, to a degree far beyond separate-magisteria style mental compartmentalization: like the students Feynman discusses in Surely You're Joking, Mr. Feynman!, they're operating like a chatbot: putting their effort into playing the social game of saying the right words, without seeming to consider that the words have meaning. Easily detected by asking questions which would not come up in the context of the social game they're accustomed to.

(Again, with the understanding that everyone does these things to some degree.)

> It seems to me that this habit is universal in American culture, and I'd be surprised (and intrigued!) to hear about any culture where it isn't.

I live in Austria. I would say we do have norms against hypocrisy, but your example with the driver's license seems absurd to me. I would be surprised (and intrigued!) if agreement with this one in particular is actually universal in American culture. In my experience, hypocrisy norms are for moral and crypto-moral topics.

For normies, morality is an imposition. Telling them of new moral requirements increases how mu...

abramdemski
My current take is that anti-hypocrisy norms naturally emerge from micro status battles: giving advice naturally has a little undercurrent of "I'm smarter than you", and pointing out that the person is not following their own advice counters this. A hypocrisy check therefore becomes a common response, because it's a pretty good move in status games; and because people expect a hypocrisy check, they check themselves. On the one hand, I was probably blind to the moral aspect and over-generalized to some extent. On the other hand, do you really imagine me telling someone they should get a driver's license (in a context where there is common knowledge that I don't have one) without expecting a mild backlash? I expect phrases like "look who's talking", and I expect the 'energy in the room' after the backlash to be as if my point had been refuted. I expect to have to reiterate the point, to show that I'm undeterred, if I still want it to be considered seriously in the conversation. (Particularly if the group isn't rationalists.) So is your social experience different in this respect?
Bunthut
I've never experienced this example in particular, but I would not expect such a backlash. Can you think of another scenario with non-moral advice that I have likely experienced?
abramdemski
Can you tell me anything about the "advice culture" you have experience with? For example, I've had some experience with Iranian culture, and it is very different from American culture. It's much more combative (in the sense of combat vs. nurture, not necessarily real combativeness -- although I think they have a higher preference for, and tolerance of, heated arguments as well). I was told several times that the bad thing about American culture is that if someone has a problem with you, they won't tell you to your face; instead, they'll still try to be nice. I sometimes found the blunt advice (criticism) from Iranians overwhelming and emotionally difficult to handle.
Bunthut
I don't strongly relate to any of these descriptions. I can say that I don't feel like I have to pretend advice from equals is more helpful than it is, which I suppose means it's not about face. The most common way to reject advice is a comment like "eh, whatever" and then ignoring it. Some nerds get really mad at this and seem to demand intellectual debate; this is not well received. Most people give advice expecting intellectual debate only on crypto-moral topics (that is also generally not well received, but the speaker seems to accept it as an "identity cost"), or not at all.
abramdemski
Diet advice?
Bunthut
You mean advice to diet, or "technical" advice once it's established that the person wants to diet? I don't have experience with either, but the first is definitely crypto-moral.
abramdemski
What's definitely not crypto-moral?
Bunthut
My father, playing golf with me today, telling me to lean down more to stop my shots going out left so much.
abramdemski
Ok. My mental sim doesn't expect any backlash in this type of situation. My first thought is that it's just super obvious why the advice might apply to you and not to him; but this doesn't really seem correct. For one thing, it might not be super obvious. For another, I think there are cases where it's pretty obvious, but I nonetheless anticipate a backlash. So I'm not sure what's going on with my mental sim. Maybe I just have a super-broad "crypto-moral detector" that goes off way more often than yours (without explicitly labeling things as crypto-moral for me).
Bunthut
Maybe. How were your intuitions before you encountered LW? If you already had a hypocrisy intuition, then trying to internalize the rationalist perspective might have led it to ignore the morality distinction.

I feel I flinch away from hypocrisy because allowing it seems to nudge us towards world states that I find undesirable. Consider a malicious version of hypocrisy through the lens of the diner's dilemma: transitioning meat-eaters reluctantly order tofu salads, while the vocal vegan gets themselves a steak. I imagine that in a subsequent outing, at least some of the carnivores break their resolve, seeing their duplicitous comrade tuck into a bucket of chicken wings. Eventually, no one cares to take the signalling action; preferably, though, perhaps they...
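
(A minimal illustration, not from the comment above: a toy Python simulation of the unraveling dynamic it describes. The group size, the per-defector "break pressure", and the update rule are all made-up assumptions, chosen only to show how one visible defector can compound over repeated outings.)

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

N_DINERS = 6          # one vocal defector plus five transitioning meat-eaters
N_OUTINGS = 10
BREAK_PRESSURE = 0.4  # assumed chance, per visible defector, of breaking resolve each outing

# True = still takes the costly signalling action (orders the tofu salad)
cooperating = [True] * N_DINERS
cooperating[0] = False  # the hypocrite defects from the first outing

for outing in range(1, N_OUTINGS + 1):
    defectors_seen = cooperating.count(False)  # defections visible going into this outing
    for i in range(1, N_DINERS):
        if cooperating[i]:
            # Each visible defector independently tempts this diner to break resolve.
            p_hold = (1 - BREAK_PRESSURE) ** defectors_seen
            if random.random() > p_hold:
                cooperating[i] = False  # once broken, resolve stays broken
    print(f"outing {outing}: {cooperating.count(True)} of {N_DINERS} still signalling")
```

With these made-up numbers, the count of diners still signalling typically falls toward zero within a few outings, which is the "eventually, no one cares to take the signalling action" endpoint the comment points at.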

It feels like a red herring to focus on anti-hypocrisy norms. As you mentioned, the current norms are something like, "Hypocrisy is bad, don't trust hypocrites, and you can publicly call someone out on being a hypocrite".

Someone being hypocritical should not be a make-or-break data point that affects whether or not I believe them. It should affect how much epistemic effort I put into the interaction. Detecting hypocrisy should be a signal of, "It's possible something fishy could be going on, so I'm going to bring more mental r...

abramdemski
Yeah, I think it may be a red herring. The argument I really wanted to make was focused on the epistemic aspect of the anti-hypocrisy flinch. I brought in a norm debate without thinking enough about the distinction.
rk

This post made me update moderately towards comfort with hypocrisy.

That said, I think it's important that advice cannot just be acted on and completed; the advice-receiver will also have akratic issues, etc. If you haven't got a license, the amount of value you've gotten from having a license is zero. If you have made any attempts to get one (maintaining it as a todo, some first driving lessons), you're in the red as far as the getting-a-license project goes.

Without highlighting distinguishing features between the advice-giver and advice-receiver, I think it's a r...

abramdemski
Yeah, it is some evidence. In my case, so many people get a license successfully that it would drown out the small amount of evidence provided by my individual case... except that I usually end up talking to people somewhat similar to myself, and in particular, an awful lot of them seem not to have a license! I don't hold "I should get a license" that strongly, though. I mean, I do think I should get one; it just doesn't seem very important.

By the way, I don't really discuss hufflepuff cynicism in the post itself, and I was a little conflicted about whether to put it in the title. However, I do think the view here is quite central to hufflepuff cynicism. Hufflepuff cynicism is about not holding people to their stated ideals. It's almost the same thing as not being down on hypocrisy!

I'm far closer to thinking hufflepuff cynicism is bad, though, than I am to thinking this view on hypocrisy is wrong. Hufflepuff cynicism can block corrective measures in a community stuck in a bad e...

I like it when the advisor has lived or done the thing advised. That means the advisor has paid for it - be it with time, money, effort, feeling, or, most expensively, people. This doesn't tell me that I personally will benefit from doing the same thing; it just tells me the thing was once judged to be worth it. Dishonesty adds that this was a mistake.

abramdemski
Sure, but I claim it's pretty common to overuse this heuristic.