Theoretically, my 'truth' function, the amount of evidence I need before I cache something as 'probably true and reliable', should be a constant. I find, however, that it isn't. I read a large amount of scientific literature every day, and in practice only have time to investigate a scant fraction of it. So, typically I rely upon science reporting that I've found to be accurate in the past, and only investigate the few things that have direct relevance to work I am doing (or may end up doing).
Today I noticed something about my habits. I saw an article on how string theory was making testable predictions in the realm of condensed matter physics, and specifically about room-temperature superconductors. While a pet interest of mine, this is not an area that I'm ever likely to be working in, but the article seemed sound and so I decided it was an interesting fact, and moved on, not even realizing that I had cached it as probably true.
A few minutes later it occurred to me that some of my friends might also be interested in the article. I have a Google RSS feed that I use to republish occasional articles that I think are worth reading. I have a known readership of all of 2. Suddenly, I discovered that what I had been willing to accept as 'probably true' on my own behalf was no longer good enough. Now I wanted to look at the original paper itself, and to see if I could find any learnéd refutations or comments.
This seems to be because my reputation was now, however tangentially, "on the line": I have a reputation in my circle of friends as the science geek and would not want to damage it by steering someone wrong. Now, clearly this is wrongheaded. My theory of truth should be my theory of truth, period.
One could argue, I suppose, that information I store internally can only affect my own behavior, while information I disseminate can affect the behavior of an arbitrarily large group of people, and so a more stringent standard should apply to things I tell others. In fact, that was the first justification that sprang to mind when I noticed my double standard.
It's a bogus argument, though, as none of my friends are likely to repeat the article or post it in their blogs, and so the dissemination has only a tiny probability of propagating by that route. However, once it's in my head and I'm treating it as true, I'm very likely to trot it out as an interesting fact when I'm talking at Science Fiction conventions or to groups of interested geeks. If anything, the standard for my believing something should be more stringent than my standard for repeating it, not the other way around.
But the title of this post is "Harnessing Your Biases", and it seems to me that if I am going to have this strange predisposition to check more carefully when I am going to publish something, then maybe I need to set up a blog of things I have read that I think are true. It can just be an edited feed of my RSS stream, since this is simple to put together. Then I may find myself being more careful about what I accept as true. The mere fact that I have the feed and that it's public (although I doubt that anyone would, in fact, read it) would make me more careful. It's even possible that it will contain very few articles, as I would find I don't have time to investigate interesting claims well enough to declare them true, but this will have the positive side effect that I won't go around caching them internally as true either.
I think that, in many ways, this is why, in the software field, code reviews are universally touted as an extraordinarily cheap and efficient way of improving code design and documentation while decreasing bugs, and yet are very hard to put into practice. The idea is that after you've written any piece of code, you give it to a coworker to critique before you put it in the code base. If they find too many things to complain about, it goes back for revision before being given to yet another coworker to check. This continues until it's deemed acceptable.
In practice, the quality of work goes way up and the speed of raw production goes down marginally. The end result is code that needs far less debugging, and so the number of working lines of code produced per day goes way up. I think this is because programmers in such a regime quickly find that the testing and documenting they consider 'good enough' when their work is not going to be immediately reviewed is far less than the testing and documenting they do when they know they have to hand it to a coworker to criticize. The downside, of course, is that they are now opening themselves up for criticism on a daily basis, something few folks enjoy no matter how good it is for them, and so the practice continues to be quite rare due to programmer resistance.
These, then, are two different ways to harness the bias folks have toward doing better (or more careful) work when it is going to be examined. Can anyone else here think of other biases that can be exploited in useful ways to leverage greater productivity or reliability in projects?
I'm not sure this really counts as a bias. It seems quite rational, unless you will actually suffer immediate and significant consequences if you are wrong about string theory.
The cost of privately held beliefs (especially about abstract truths) is quite low. If I believe the earth is a flat disk on the back of an infinite stack of tortoises, and if I'm, say, a car mechanic, I will not suffer at all for this belief. Unless of course, I mention it to my friends, because they will judge me for it, unless of course they all believe the same thing, in which case I'll probably proclaim this belief loudly and often and possibly meet up with them to discuss it on an appointed day of the week. I may suffer because I fail at epistemology, but it doesn't seem clear how trusting the wrong source on one marginal occasion will corrupt my epistemology (doubly so if I'm refined enough to have a concept of my own epistemology). Taking epistemology as exogenous, there's really no cost to a marginal false belief (that does not affect me directly).
Having a false belief about some fact that has no direct bearing on your life is way, way, way cheaper than publicly expressing belief in such a fact and being refuted. There seems to be nothing irrational about investing more energy fact-checking in the latter scenario.
Edit: Two things.
First, the turtles example was poorly chosen, as it blurs the line between belief and epistemology too much. Better examples would include, say, wrongly believing celebrity gossip, or holding incorrect beliefs about science with no practical bearing on one's life, due to a lack of information or a misunderstanding of the alternatives. If the car mechanic believed Newton was totally right (because he hadn't seen evidence to the contrary), this would be a very, very low cost false belief. Interestingly, "Barack Obama is a Muslim" probably falls under this category, though it blurs the epistemological line a bit more.
Second, it's also quite possible that you care more about misleading others than you do being wrong. It's easy enough to change your mind if you see contradictory evidence or reread the article and understand it better. It's rather harder to change other people's minds who have been convinced, and you'll feel like you've let them down as an authority, since they trusted you.
I'm not sure the cost of privately held false beliefs is as low as you think it is. The universe is heavily Causally Entangled. Even if, in your example, the shape of the earth isn't causally entangled with anything our mechanic cares about, that doesn't get you off the hook. A false belief can shoot you in the foot in at least two ways. First, you might explicitly use it to reason about the value of some other variable in your causal graph. Second, your intuition might draw on it as an analogy when you are reasoning about something else.