In Techniques of the Selling Writer, Dwight V. Swain gives advice on receiving advice:

George Abercroft is an action writer. "Start with a fight!" is his motto. And for him, it works.

But Fred Friggenheimer's witch-cult yarn, as he conceives it, puts heavy emphasis on atmosphere. The fight he tries to stick in like a clove in a ham at the beginning, following George's rule, destroys the mood - and the story.

Even with your own rules, indeed, you must be careful. Because somehow, subtly, they may not apply to this explicit situation. [...]

How do you tell whether a rule is good or not, in terms of a specific problem?

Answer: Find out the reason the rule came into being. What idea or principle stands behind it? [...]

Take George's rule about starting every story with a fight. It's born of George's markets - men's magazines in which the emphasis is on fast, violent action, with blood on page one an absolute must.

If Fred only realized that fact, he'd ignore George's rule when he himself writes a mood-geared story.

One way to reduce the damage done by cached thoughts is to cultivate a habit of asking questions about a thought's origin. Do you remember where you heard it? Did it come from someone practicing good epistemic hygiene, or from someone who unthinkingly passes on anything they hear? If somebody offered advice based on their own experiences, how representative is their experience? What kinds of experiences prompted that advice? Are there alternative ways of interpreting those experiences? And if you're the one offering advice that you came up with yourself, what situation led you to come up with it? How generalizable is it?

So far I have mostly been framing this as a way to notice flaws in seemingly good advice. But there's also an opposite angle: finding gems in seemingly worthless information.

All outcomes are correlated with causes; most statements are evidence of something. Michael Vassar once gave the example of a tribe of people who thought that faeries existed, lived in a nearby forest, and you could see them once you became old enough. It later turned out that the tribe had a hereditary eye disease which caused them to see things from the corners of their eyes once they got old. The tribe's theory of what was going on was wrong, but it was still based on some true data about the real world. A scientifically minded person could have figured out what was going on, by being sufficiently curious about the data that generated that belief.

If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse. -- Black Belt Bayesian

Some people tend to stop reading a text whenever they come across blatantly incorrect statements. I mind much less. Yes, the person may be generally mistaken, but they may still have some worthwhile points mixed in. Folk theories can be useful, even when they're entirely wrong. What you're reading is somebody's interpretation of an event, which provides information about the event even if the interpretation is wrong. Can you come up with a better interpretation?

Maybe you disagree with something that I've said here? In that case, what data do you think generated this advice? What conclusions would you derive instead?

32 comments

Sometimes I realize mid-assertion that the thing I'm saying isn't really to be trusted as much as my listener will be likely to trust it without further details. I usually compensate by making a weak joke: "...and I know that's true because I read it on the Internet, and as we all know, decent web design signals infallibility!", or "So I am told by my friend X, and gosh, she's always been a Libertarian, so I have no reason at all to be suspicious of her statistical claims about gun violence."

Ironically, I have the sense that this generally works to lessen the other person's skepticism. (The mechanism is obvious - it takes the disagreement out of the class of status battles, thus enabling actual consideration of the point.)

I really like this method! It certainly seems to be far more useful than my current "wait, I just realized that I don't know as much about this subject as I think I do, and therefore I need to stop talking", which really really hurts the flow of conversation. Thanks!

Some people tend to stop reading a text whenever they come across blatantly incorrect statements. I mind much less. Yes, the person may be generally mistaken, but they may still have some worthwhile points mixed in.

I try to stop reading whenever I come across sufficiently strong evidence that finishing it will provide less insight per unit time than a cutoff value, which is adjusted based on the length of my reading queue and the marginal value of my time. Blatantly incorrect statements are evidence of this, and if an article says sufficiently wrong things before it offers any novel positions or insights, I do stop reading. As a defense against confirmation bias, I penalize incorrect arguments much more strongly if they argue for positions I already agree with.
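As a rough illustration of that stopping rule, here is a minimal sketch; the cutoff constants, queue weighting, and numbers are all hypothetical placeholders, not anything the commenter specified:

```python
def should_stop_reading(estimated_insight_per_minute: float,
                        queue_length: int,
                        marginal_value_of_time: float) -> bool:
    """Stop when the text's expected insight rate falls below a cutoff that
    rises with the length of the reading queue and the value of the reader's
    time. All constants here are illustrative."""
    base_cutoff = 0.1  # hypothetical baseline insight per minute
    cutoff = base_cutoff * (1 + 0.05 * queue_length) * marginal_value_of_time
    return estimated_insight_per_minute < cutoff

# Example: a long queue and valuable time raise the bar a text must clear.
print(should_stop_reading(0.3, queue_length=40, marginal_value_of_time=1.5))  # True
```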

I have a mental image of your reading algorithm as Guitar Hero, where an epic solo can correct for some missed notes, but not too many, and you only get so much room to screw up at the start.

I started reading Eckhart Tolle recently (thinking that it would be interesting to understand what sort of instructions/mantras/comforting-claims it is that people find so valuable). Nonsensical and over-broad claims abound.

I enjoy your idea of punishing faulty confirmation of already-held beliefs. It's good that you're not absolutely rigid in this, because few people are completely careful in their rhetoric when most incentives favor bold, flashy style.


The "Black Belt Bayesian" website appears to have been hijacked into offering malware.

Michael Vassar once gave the example of a tribe of people who thought that faeries existed

Cute story; citation needed.

No idea about the tribe, but the rest sounds like http://en.wikipedia.org/wiki/Charles_Bonnet_syndrome

Yep, that plus some misremembered details in the post above.

One reason not to read things that contain false statements is that it is hard for us to remember that what we read is false, even if it is explicitly labeled as false.

This may be one more reason why thinking in terms of probability estimates -- which, at least in my mind, are spatially represented and color-coded -- is a good habit to get into.

This may be one more reason why thinking in terms of probability estimates -- which, at least in my mind, are spatially represented and color-coded -- is a good habit to get into.

Hey... how did you develop the whole color-coded spatial representation thing? I tend to do that sort of visualization with computers but not in my brain. My brain just goes by feelings and intuition.

Hey... how did you develop the whole color-coded spatial representation thing?

I'm not sure; my best guess is that it goes back to some childhood memory of letters/numbers represented in certain ways on various educational toys and the like.

In a similar manner, countries other than the U.S., and U.S. states, have associated colors in my mind, due I believe to a toy globe I had at a very young age. (Interestingly, however, I also have a memory of finding that globe again years later, and discovering that some of the colors were different from what I "remembered"!)


True. However, I think it's possible for you to develop the ability to realize that something you said or thought is "something I seem to recall having heard somewhere". It might be helpful to engage in contentious debate so that you internalize an expectation of, "oh hell, I'm going to get challenged on that and I'm not sure where I picked it up."

In one study mentioned in the article, people thought that the information they remembered was from the CDC -- they just forgot whether the information was true or false. The problem is not just that we forget where we learned something, but that we forget that it is false.


That's within my point. I was using synecdoche to refer to a larger category of possible problems for which I have no nice description. Maybe it would help if I added "and things of that sort".

I bet you could train yourself to be good at remembering "I heard negative evidence against X (from whatever source)" properly, especially if X is something you've either got existing (properly remembered or summarized) evidence for or against, or have connected to other claims. In other words, probably part of that effect is that the subjects don't accurately understand or recall the sentence they read, and they think "that sounds familiar! Wasn't that what I just read from the CDC?"

An inability to remember the strength of some evidence you've heard is already crippling. Misremembering the polarity (whether it's pulling you toward truth or untruth from your prior) is just a particularly bad instance.

What do people with this handicap actually do when they want to properly weigh evidence? Do they write it all down so they can review it (like people find a pros/cons list to be helpful)?

I often remember how some fact or event made me feel at the time. For instance, I'll remember being moved by a film years later, but perhaps be quite fuzzy on even the broad strokes of the plot. I'd like to exploit this sort of memory to represent the direction and strength of evidence - not just to remember being excited to read some study, but to remember its evidential value.

Another technique that seems useful for uncertain (but interesting or important) claims that get updated over a long period of time is using fixed nametags (not much more complicated than the title of this excellent post, 'What data generated that thought?'), especially when writing or talking about them.

Maybe you disagree with something that I've said here? In that case, what data do you think generated this advice? What conclusions would you derive instead?

A little bit hokey as an ending, but I upvoted the post nevertheless.

I loved it. An effective call to action: "hey, actually think about this, dummy!" :)

My response to the question (which I wouldn't have had without the question's prodding): these thoughts came from reading and interacting with well-recommended people, who are likely to have an instructive reason for what they say, and who probably offer some value even if it's just in understanding where they went wrong.

When it comes to the 90-something-%-crap of the world, it's definitely necessary to avoid caring or thinking about most of it except in the broadest strokes.

Maybe you disagree with something that I've said here? In that case, what data do you think generated this advice? What conclusions would you derive instead?

I disagree with your claim that obviously wrong information is still worth reading because it gives you clues into the author's thoughts and the evidence behind them.

This is kinda obvious, but I think that prior experience of success from following this principle generated the advice. That, and possibly an overestimation of its usefulness due to the fact that it's counterintuitive - evidence for it could cause you either to overcorrect, or you may be more likely to remember the times when it's correct, since those would probably be more memorable. (Alternately, you may be implicitly referring only to reasonably OK writing, or to descriptions of physical events, in which case I'd be more equivocal.)

I'd say that bad interpretations are, in general, not worth reading.

  • Most incorrect interpretations tend to be very similar; once you've read, e.g., one explanation as to why Obama is a Muslim, there's probably very little more to be gained from reading others. This applies to less wrong, or even correct, reasoning as well - if you understood the first, there's probably relatively little to be gained from reading two textbooks covering the same material.

  • There's no reason to assume that the argument will, in fact, be an interpretation of an event, or, even if it is, that the description will be accurate. Even ignoring, e.g., post-modernist tracts, many accounts involve just making things up. For example, I ignore anything from the Discovery Institute. (Which would tell me what? Something about what they think they want their readers to know? That's not useful to me, and I could probably make equally good predictions just by introspection.)

  • Any time you spend reading one thing is time not spent reading something else; just because the account provides a little useful information isn't a good reason to read it.

For me, I tend to apply this sort of reasoning when I'm first encountering an author. If I read blatantly false statements from someone who I have no knowledge of, I've noticed that I'm very likely to put the book/article aside. If I have any experience with the author, however, I've noticed that I read sections that I disagree with very carefully, often several times.

I suspect that I'm applying the halo effect to the articles from authors I like, and anything I dislike becomes jarring and therefore much more interesting. It's been beneficial, though. I feel like I've learned much more from passages I disagree with, but this could also be from having spent more time on them than other sections. Does anyone with speed reading/material retention experience notice the same effect?

(Alternately, you may be implicitly referring only to reasonably OK writing, or to descriptions of physical events, in which case I'd be more equivocal.)

I was referring more to reasonably OK writing. Obviously one needs some filter for which texts they read.

This applies to less wrong, or even correct, reasoning as well - if you understood the first, there's probably relatively little to be gained from reading two textbooks covering the same material.

At least in math, the methods of proof and the approach to the same material can be different, and quite revealing. Reading the same material in even just a different presentation can help one understand what the main ideas are.

I agree; I'm assuming here that you understood the first textbook well enough that the second one is of much less use.

Find out the reason the rule came into being.

Realistically, we have to use a huge amount of heuristics and precomputed rules without consciously thinking about the entire data set from which they were created. This is true even when we're already aware of the data set.

The human mind uses cached thoughts a lot, and fearing cache poisoning isn't the same as not using a cache at all, because caching is an indispensable optimization technique.

Which leads to the interesting recursive question: how do you generate a rule which tells you when to re-check other rules? It may feel satisfying to question your cached ideas now and then, but e.g. deciding randomly when to do this and when not to may well be worse than the "default" human behavior.

The architecture of the human mind probably already includes mechanisms that act like such a rule, and they are probably nontrivial to override just by deciding to. Do we understand them well enough to design a better when-to-recheck rule, a better compromise between the givens of the human mind and something impractical like a provably-optimal Bayesian belief network?
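To make the caching analogy concrete, here is a minimal sketch of a cached value with a when-to-recheck policy; the fixed-interval and "flag as suspect" logic is invented for illustration, not a claim about how the mind (or any particular system) actually works:

```python
import time

class CachedBelief:
    """A cached conclusion plus a simple when-to-recheck policy.

    The policy here (recheck after a fixed interval, or immediately when
    flagged as suspect) is purely illustrative -- the hard question raised
    above is what that policy should actually be.
    """

    def __init__(self, recompute, recheck_after_seconds=86400):
        self.recompute = recompute          # how to re-derive the belief from data
        self.recheck_after = recheck_after_seconds
        self.value = recompute()
        self.last_checked = time.time()
        self.suspect = False

    def flag_suspect(self):
        """Mark the cached belief for re-checking, e.g. after surprising evidence."""
        self.suspect = True

    def get(self):
        """Return the cached value, recomputing it if it is stale or suspect."""
        stale = (time.time() - self.last_checked) > self.recheck_after
        if stale or self.suspect:
            self.value = self.recompute()
            self.last_checked = time.time()
            self.suspect = False
        return self.value
```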

I don't have any hard-and-fast rules for when to re-check cached thoughts (and other cached habit patterns), but if I notice that the last time I checked a cached thought was when I was 7 years old,* I will be sure to check it very closely.

*I can tell because the associated mental imagery or what I was seeing or doing when last updating the cached thought (like playing on a swing set) gives it away.

This was my philosophy for a long time. The fact of someone believing something is evidence. It's not necessarily evidence that the thing they believe is true; it may not say anything about the thing they believe; but it is evidence about their mind and about the events in the world that affected their mind. Belief is a cognitive event; it isn't outside of cause-and-effect.

This was my philosophy for a long time.

Past-tense?

Used to be more so, because I had a huge problem in seeing anyone else as wrong, so I had to twist my mind in order to make their input "true" in some sense even if not in a meaningful sense. You could say my philosophy used to be the strong version of this and is now a weaker version...


I had a huge problem in seeing anyone else as wrong

Wow, that is fascinating, sort of like a gory wound is fascinating. I wish I could peer inside an attitude like that to examine it.