by Elo
26th Aug 2016


Original post:  http://bearlamp.com.au/hedging/

Hedging.

https://en.wikipedia.org/wiki/Hedge_%28linguistics%29

Examples:

  • Men are evil
  • All men are evil
  • Some men are evil
  • Most men are evil
  • Many men are evil
  • I think men are evil
  • I think all men are evil
  • I think some men are evil
  • I think most men are evil

"I think" weakens your stated belief in the idea. The hedges I usually encourage are the some/most type: they weaken the strength of the claim without reducing your confidence in it.

  • I 100% believe this happens 80% or more of the time ("most men are evil")
    Or
  • I 75% believe that this happens 100% of the time ("I think all men are evil")
    Or
  • I 75% believe this happens 20% of the time ("I think that some men are evil")
    Or
  • I 100% believe that this happens 20% of the time ("some men are evil")
    Or
  • I (Reader Interprets)% believe that this happens (Reader Interprets)% of the time ("I think men are evil")

They are all hedges; I only like some of them. When you hedge, I recommend using the type that detracts not from your projected belief but from the expected effect on the world. Which is to say: be confident of weak effects, rather than unconfident of strong effects.
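The two-dimensional reading above (confidence in the claim versus the claimed frequency) can be sketched in code. This is a toy model; the numbers mirror the illustrative examples in the list and are not measurements of anything:

```python
# Toy model: represent a hedged claim as a pair of
# (confidence in the claim, claimed frequency of the effect).
# Numbers mirror the illustrative examples above; they are not data.
hedges = {
    "most men are evil":         (1.00, 0.80),  # confident, weakened scope
    "I think all men are evil":  (0.75, 1.00),  # unconfident, strong scope
    "I think some men are evil": (0.75, 0.20),
    "some men are evil":         (1.00, 0.20),  # confident, weakened scope
}

def recommended(confidence, frequency):
    """The post's advice: keep confidence high and hedge the scope instead.

    Thresholds (0.9 and 1.0) are arbitrary illustrative cutoffs.
    """
    return confidence >= 0.9 and frequency < 1.0

for phrase, (conf, freq) in hedges.items():
    print(f"{phrase!r}: recommended hedge = {recommended(conf, freq)}")
```

On this toy reading, the some/most hedges keep confidence at 100% while shrinking the claimed effect, whereas the "I think" hedges do the opposite.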

This relates to filters in that some people will automatically add a "This person thinks..." filter to any incoming information.  It's neither good nor bad whether you apply this filter; it's just a fact about your lens on the world.  If you don't have this filter in place, you might find yourself personally attached to your words while others remain detached from words that seem like they ought to be taken more personally.  This filter might explain the difference.

This also relates to personhood and the way we trust incoming information from some sources.  When we are very young we go through a period of trusting anything said to us, and at some point we experience failures of that trust.  We also discover lying, and any parent will be able to tell you of the genuine childish glee when their children realise they can lie.  These experiences shape us into adults.  We have to trust some sources; we don't have enough time to be sceptical of all knowledge ever, and sometimes we outsource to proven, credentialed professionals, e.g. doctors.  Sometimes those professionals get it wrong.

This also relates to in-groups and out-groups.  Listeners who believe they are in your in-group are likely to interpret ambiguous hedges in a neutral-to-positive direction, while listeners who believe they are in the out-group of the message are likely to interpret them in a neutral-to-negative direction.  Which is to say: people who already agree that all men are evil are likely to "know what you mean" when you say "all men are evil", and people who don't agree will read a whole pile of "how wrong could you be" into the same statement.


Communication is hard.  I know no one is going to argue with my example because I already covered that in an earlier post.


Meta: this took 1.5hrs to write.

Comments (12)

As a matter of writing style, excessive use of hedging makes your writing harder to read. It's better to hedge once at the beginning of a paragraph and then state the following claims directly, or to hedge explicitly at the top of your article. At SlateStarCodex, Scott sometimes puts explicit "Epistemic Status" claims at the top of the article (I first saw this at another site in the LW sphere quite a few years ago, but I can't remember where, and I'm glad to have seen it spread).

I am definitely guilty of excessive hedging when I write comments or essays, and I always have to go back and edit out "I think" and "it seems" from half my sentences.

I first saw this at another site in the LW sphere quite a few years ago, but I can't remember where, and I'm glad to have seen it spread

I stole it from muflax's since-deleted site (who AFAIK invented it), and I think SSC borrowed it from me.

Yes, Muflax's site is the one I was thinking of. Sad that they deleted it, it had some very good articles on it as I recall.

What was the URL? Is it in the Internet Archive?

While we're at it, any other good blogs that are only available to read through the Internet Archive? Gabriel Weinberg's old blog is the only one that comes to my mind.

The original site was muflax.com, but its robots.txt disallows the Internet Archive. Someone has recovered some of the blog posts and they are posted here. There are also a number of articles at archive.is that were captured at a later date, which actually show the epistemic status markers I was talking about, described here

Which is to say - be confident of weak effects, rather than unconfident of strong effects.

This suggestion feels incredibly icky to me, and I think I know why.

Claims hedged with "some/most/many" tend to be both higher status and meaner than claims hedged with "I think" when "some/most/many" and "I think" are fully interchangeable. Not hedging claims at all is even meaner and even higher status than hedging with "some/most/many". This is especially true with claims that are likely to be disputed, claims that are likely to trigger someone, etc.

Making sufficiently bold statements without hedging appropriately (and many similar behaviors) can result in tragedy of the commons-like scenarios in which people grab status in ways that make others feel uncomfortable. Most of the social groups I've been involved in allow some zero-sum status seeking, but punish these sorts of negative-sum status grabs via e.g. weak forms of ostracization.

Of course, if the number of people in a group who play negative-sum social games passes a certain point, this can de facto force more cooperative members out of the group via e.g. unpleasantness. Note that this can happen in the absence of ill will, especially if group members aren't socially aware that most people view certain behaviors as being negative sum.

For groups that care much more about efficient communication than pleasantness, and groups made up of people who don't view behaviors like not hedging bold statements as being hurtful, the sort of policy I'm weakly hinting at adopting above would be suboptimal, and a potential waste of everyone's time and energy.

It seems like hedging is the sort of thing which tends to make the writer sound more educated and intelligent, if possibly more pretentious.

It matters a lot who your audience is, and what are your goals in a specific interaction. Fluttershy's points about status-signaling are a great example of ways that precision can be at odds with effectiveness.

Also, you're probably wrong in most of your frequency estimates. Section III of this SlateStarCodex post helps explain why - you live in a bubble, and your experiences are not representative of most of humanity.

Unless you're prepared to explain your reference set (20% of what, exactly?) and cite sources for your measures, it's worth acknowledging that you don't know what you're talking about, and perhaps just not talking about it.

Rather than caveating or specifying your degree of belief about the percentage and definition of evil men, just don't bother. Walk away from conversations that draw you into useless generalizations.

In other words, your example is mind-killing to start with. No communication techniques or caveats can make a discussion of how much you believe what percentage of men are evil work well. And I suspect that if you pick non-politically-charged examples, you'll find that the needed precision is already part of the discussion.

I don't know how I would integrate this with my programming work, where it is VERY important my inner voice differentiates between "I know" and "I think" and "It seems like" - were I to use more factual statements, I'd go wrong faster and end up taking longer to debug things...

Hedging your internal voice is not a good idea. Likely to lead to confusion. When you hedge (or don't), you already know what you mean. Other people don't. It's a communication barrier, not one I would tackle inside your head.

This sounds very a priori, like you noticed that people sometimes misinterpret and tried to figure out how without paying attention to the specific ways in which they actually do. I recommend Robin Hanson, although I think that post is way too much in favor of disclaimers.