
The ethic of hand-washing and community epistemic practice

44 Post author: AnnaSalamon 05 March 2009 04:28AM

by Steve Rayhawk and Anna Salamon.  (Joint authorship; there's currently no way to notate that in the Reddit code base.)

Related to: Use the Native Architecture

When cholera moves through countries with poor drinking water sanitation, it apparently becomes more virulent. When it moves through countries that have clean drinking water (more exactly, countries that reliably keep fecal matter out of the drinking water), it becomes less virulent. The theory is that cholera faces a tradeoff between rapidly copying within its human host (so that it has more copies to spread) and keeping its host well enough to wander around infecting others. If person-to-person transmission is cholera’s only means of spreading, it will evolve to keep its host well enough to spread it. If it can instead spread through the drinking water (and thus spread even from hosts who are too ill to go out), it will evolve toward increased lethality. (Critics here.)

I’m stealing this line of thinking from my friend Jennifer Rodriguez-Mueller, but: I’m curious whether anyone’s gotten analogous results for the progress and mutation of ideas, among communities with different communication media and/or different habits for deciding which ideas to adopt and pass on. Are there differences between religions that are passed down vertically (parent to child) vs. horizontally (peer to peer), since the former do better when their bearers raise more children? Do mass media such as radio, TV, newspapers, or printing presses decrease the functionality of the average person’s ideas, by allowing ideas to spread in a manner that is less dependent on their average host’s prestige and influence? (The intuition here is that prestige and influence might be positively correlated with the functionality of the host’s ideas, at least in some domains, while the contingencies determining whether an idea spreads through mass media instruments might have less to do with functionality.)

Extending this analogy -- most of us were taught as children to wash our hands. We were given the rationale, not only of keeping ourselves from getting sick, but also of making sure we don’t infect others. There’s an ethic of sanitariness that draws from the ethic of being good community members.

Suppose we likewise imagine that each of us contains a variety of beliefs, some well-founded and some not. Can we make an ethic of “epistemic hygiene” to describe practices that will selectively cause our more accurate beliefs to spread, and cause our less accurate beliefs to stay contained, even in cases where the individuals spreading those beliefs don’t know which is which? That is: (1) is there a set of simple, accessible practices (analogous to hand-washing) that will help good ideas spread and bad ideas stay contained; and (2) is there a nice set of metaphors and moral intuitions that can keep the practices alive in a community? Do we have such an ethic already, on OB or in intellectual circles more generally? (Also, (3) we would like some other term besides “epistemic hygiene” that would be less Orwellian and/or harder to abuse -- any suggestions? Another wording we’ve heard is “good cognitive citizenship”, which sounds relatively less prone to abuse.)

Honesty is an obvious candidate practice, and honesty has much support from human moral intuitions. But “honesty” is too vague to pinpoint the part that’s actually useful. Being honest about one’s evidence and about the actual causes of one’s beliefs is valuable for distinguishing accurate from mistaken beliefs. However, a habit of focussing attention on evidence and on the actual causes of one’s own as well as one’s interlocutor’s beliefs would be just as valuable, and such a practice is not part of the traditional requirements of “honesty”. Meanwhile, I see little reason to expect a socially-endorsed practice of “honesty” about one’s “sincere” but carelessly assembled opinions (about politics, religion, the neighbors’ character, or anything else) to selectively promote accurate ideas.

Another candidate practice is the practice of only passing on ideas one has oneself verified from empirical evidence (as in the ethic of traditional rationality, where arguments from authority are banned, and one attains virtue by checking everything for oneself). This practice sounds plausibly useful against group failure modes where bad ideas are kept in play, and passed on, in large part because so many others believe the idea (e.g. religious beliefs, or the persistence of Aristotelian physics in medieval scholasticism; this is the motivation for the scholarly norm of citing primary literature such as historical documents or original published experiments). But limiting individuals’ sharing to the (tiny) set of beliefs they can themselves check sounds extremely costly. Rolf Nelson’s suggestion that we find words to explicitly separate “individual impressions” (impressions based only on evidence we’ve ourselves verified) from “beliefs” (which include evidence from others’ impressions) sounds promising as a means of avoiding circular evidence while also benefiting from others’ evidence. I’m curious how many here are habitually distinguishing impressions from beliefs. (I am. I find it useful.)

Are there other natural ideas? Perhaps social norms that accord status for reasoned opinion-change in the face of new good evidence, rather than norms that dock status from the “losers” of debates? Or social norms that take care to leave one’s interlocutor a line of retreat in all directions -- to take care to avoid setting up consistency and commitment pressures that might wedge them toward either your ideas or their own? (I’ve never seen this strategy implemented as a community norm. Some people conscientiously avoid “rhetorical tricks” or “sales techniques” for getting their interlocutor to adopt their ideas; but I’ve never seen a social norm of carefully preventing one’s interlocutor from having status- or consistency pressures toward entrenchedly keeping their own pre-existing ideas.) These norms strike me as plausibly helpful, if we could manage to implement them. However, they appear difficult to integrate with human instincts and moral intuitions around purity and hand-washing, whereas honesty and empiricism fit comparatively well into human purity intuitions. Perhaps this is why these social norms are much less practiced.

In any case:

(1) Are ethics of “epistemic hygiene”, and of the community impact of one’s speech practices, worth pursuing? Are they already in place? Are there alternative moral frames that one might pursue instead? Are human instincts around purity too dangerously powerful and inflexible for sustainable use in community epistemic practice?

(2) What community practices do you actually find useful, for creating community structures where accurate ideas are selectively promoted?

Comments (33)

Comment author: CarlShulman 05 March 2009 07:30:25AM 8 points [-]

Norms to protect against consistency and commitment pressures would be very valuable. One possible mechanism would be to make public 'Red Team' analyses: designate a forum where you will present the strongest case you can against one of your favored ideas, along these lines:

http://www.overcomingbias.com/2007/07/introducing-ram.html

This could be improved with rewards for success, which the speaker could provide herself using a mechanism like http://www.stickk.com/

With respect to religion, here's some support for the vertical versus horizontal spread idea:

Catholicism - celibate priests, early spread by evangelization.
Buddhism - celibate monks, early spread by evangelization.
Islam - polygamy for believers, early spread by evangelization and violence, with capture of women for followers.
Judaism - priests and rabbis marry; a tribal religion.
Hinduism - contains vast diversity, but religious leaders have generally married; generally the religion is inherited and does not seek converts.

Comment author: AnnaSalamon 06 March 2009 02:33:34AM *  2 points [-]

Carl, that sounds like it could be really useful for increasing the rate of alternate idea-generation and of idea-shift.

Comment author: Johnicholas 05 March 2009 02:49:12PM *  1 point [-]

I am concerned that "taking sides", even self-consciously taking the "opposite" side, might lead to polarization and emotional attachment to factual beliefs.

However, I agree that the idea of red-teaming is interesting and should be tried, as part of an effort to develop some rationalist community best practices.

Comment author: CarlShulman 05 March 2009 06:14:31PM 3 points [-]

Yes, this is a good point, one that Hopefully Anonymous correctly raises frequently. Rather, one should defend a point of view one rejects or has not considered, not specifically the reversal of one's current view.

Comment author: Andy_McKenzie 05 March 2009 05:10:45PM 6 points [-]

Jack: The idea of having citations everywhere is nice but impractical. It would slow down conversation and dialogue tremendously.

One possible alternative is to have nested dialogues. Each sentence that makes some sort of claim links to another which explains the idea more thoroughly if that is what you disagree with. If you do not disagree with that point, then you can continue reading the main chain. This is similar to the idea of hypertext dialogue: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.40.3246 , and it is similar to what Eliezer has done at OB by being so self-referential.

Comment author: mark_spottswood 05 March 2009 10:55:28PM 2 points [-]

I think the idea of a nested dialogue is a great one. You could also incorporate reader voting, so that weak arguments get voted off of the dialogue while stronger ones remain, thus winnowing down the argument to its essence over time.

I wonder if our hosts, or any contributors, would be interested in trying out such a procedure as a way of exploring a future disagreement?

Comment author: JenniferRM 09 March 2009 04:44:00AM 5 points [-]

Two points...

POINT ONE: The cholera example is even more fascinating when you drill down. The bacterium involved is "Vibrio cholerae".

http://www.textbookofbacteriology.net/cholera.html

It seems to actually have numerically common non-pathogenic forms, and the ones with enterotoxin genes appear to have received them from bacteria-targeting viruses (bacteriophages). If I understand correctly, the toxin genes are integrated (but dormant) within bacterial genomes, and infection by bacteriophage CTX triggers their expression.

http://www.mrc-lmb.cam.ac.uk/genomes/madanm/articles/cholera.htm

POINT TWO: It is probably worth keeping in mind the fundamental attribution error.

http://www.jstor.org/pss/4545312

This point is mostly in response to the focus here on habits and norms. Not to say that someone couldn't work on those productively, but I suspect environmental effects like "mere proximity" have a lot more influence than would be assumed without consciously factoring them in, even over people in this community. The cholera example comes bundled with the same "context focused" message: the authors cited in the OP mostly focus not on hand-washing but on the design of water purification infrastructure.

I can't imagine that this crowd is unaware of this sort of thing, but I'm not aware of a better example of the "location, location, location" message than Google's results from studying their internal betting markets.

http://googleblog.blogspot.com/2008/01/flow-of-information-at-googleplex.html

For myself, I tend to assume that if changes to my habits are to have any significant influence on me, many of them must be focused around shaping and choosing environments that support the kinds of thinking and living that I want to do. I'm still working on this process for myself and have few unambiguously positive results to report; the negative results are too embarrassing to list, and it would take a lot of text to describe any of them in useful detail :-P

For lack of such text I'll recommend "Lady of Mazes" for its exploration of themes around technology, "spatial nearness", social networks, medium-message distinctions, choice architecture, suggestion systems, personal character, political awareness, value-technology interactions, and having a life that is felt to be meaningful. This book is less accessible than "Accelerando" but, for me, it has had much more staying power.

http://www.amazon.com/Lady-Mazes-Karl-Schroeder/dp/0765312190

http://www.amazon.com/Accelerando-Singularity-Charles-Stross/dp/0441012841

Comment author: mark_spottswood 05 March 2009 07:35:47PM 5 points [-]

Useful practice: Systematize credibility assessments. Find ways to track the sincerity and accuracy of what people have said in the past, and make such information widely available. (An example from the legal domain would be a database of expert witnesses, which includes the number of times courts have qualified them as experts on a particular subject, and the number of times courts adopted or rejected their conclusions.) To the extent such info is widely available, it both helps to "sterilize" the information coming from untrustworthy sources and to promote the contributions that are most likely to be helpful. It also helps improve the incentive structure of truth-seeking discussions.

Comment author: xamdam 26 March 2010 10:33:33AM *  4 points [-]

A couple of things that I am aware of in a religious community context (Orthodox Judaism). Of course in this case they were 'adopted' due to religious duty, and followed with intermittent success, but still pretty good ideas, especially coming from a couple of millennia ago.

De-biasing decision-making in a legal context: 1) bribes are forbidden; 2) family relationships disqualify the court; 3) family relationships disqualify the witnesses; 4) someone who 'lacks compassion' is disqualified from judging capital cases, specifically someone who does not have children; 5) people with a criminal record are disqualified from being witnesses; 6) people who do not contribute to the world (incl. 'one who makes a living from gambling', though occasional gambling is ok; so much for a lot of our finance industry) are disqualified.

Avoiding information cascades: specifically, peddling rumors is forbidden.

Comment author: GuySrinivasan 05 March 2009 11:10:29PM 4 points [-]

The advantage of hand-washing is that it severely reduces a specific, otherwise easy-to-use vector for disease, and public restrooms create social pressure to wash your hands. Can we find something similar for cognitive citizenship? A vector for transmitting bad knowledge, like forwarded emails - maybe in the future I shouldn't just reply with "Nope, Snopes, also stop." but instead ask that the sender include a disclaimer?

How about a vector for transmitting bad cognitive algorithms? That one would be far more valuable to block but I haven't been able to think of a large extant vector at all, much less one that might be attackable.

Comment author: thomblake 05 March 2009 11:52:02PM 2 points [-]

Regarding the e-mail forwards, I usually reply with "Your address has been added to my list of spammers. Any future e-mails from you will automatically be blocked." Take that, Grandma!

Comment author: steven0461 05 March 2009 07:31:48AM *  4 points [-]

Great post, food for thought. I sometimes distinguish between beliefs and impressions, but should do so more.

If ideas change incrementally by mutation, and the average false idea does more damage than the truth, and as ideas get closer to the truth they trend noisily toward doing less damage, is that a general moral argument against spreading and believing specific false ideas that seem beneficial (both because the neighbors of beneficial-seeming false ideas regress to a more damaging mean than the neighbors of the truth, and because the truth gains some stability against mutations by being the truth)?

Comment author: JulianMorrison 05 March 2009 05:32:43PM 8 points [-]

Here's a practice that might help: "why do I think that" monologues. This would be a group but not oppositional activity. The idea is to elaborate on a thing you currently believe to be true by specifying the reasons you believe it, the reasons you believe the reasons, etc and trying to dig out the whole epistemological structure. The purpose of this is not so much to tear apart someone else's epistemological structure (it wouldn't work, nobody learns from that), but rather to learn to see for yourself the points of divergence - which might be far, far upstream of an individual idea.

Comment author: AnnaSalamon 06 March 2009 02:16:46AM 7 points [-]

Good idea.

Making thinking visible, by your suggested "why do I actually think that" monologs, would also help with transfer of useful evidence-gathering or reasoning tricks, so that if e.g. you and I are talking, and you did something useful that I don't know how to do in coming to a particular conclusion, I can see how it worked and maybe copy your trick in general.

I know math/science tutoring works better when people spell out more of their thinking than is common.

Comment author: lessdazed 27 April 2011 11:40:38AM 3 points [-]

One concept sharply distinguishing common law legal systems from Roman law ones is their approach to evidence.

By separating jurist and fact-finder (judge and jury in America), systems such as ours compensate for human biases (among their other functions) and prevent the fact-finder from obtaining some information that would on average make them dumber. For example, the system can notice that people generally set too much store by past criminal history and hearsay evidence, so a judge restricts when such evidence can even be heard by jurors.

Ideally, a fact-finder would only use evidence appropriately and not need to be shielded. Where there is no separate fact-finder such as a jury, as in inquisitorial Roman-law-derived systems, it makes no sense to have rules of evidence by which the judge restricts what a fact-finder may hear and consider, as the judge is the fact-finder as well. Systems with one judge and no jury are not disadvantaged, provided the judge can calibrate according to the evidence at least as well as he or she could distinguish which evidence to pass along to a jury, were there one.

I can use a similar practice by asking someone for their opinion and giving them only some of the evidence I have - namely, the evidence I think is of the type that will do them more good than harm to hear.

Comment author: johnny_abacus 06 March 2009 04:36:24AM 3 points [-]

David Stove also talked about it a bit (not focusing on the transmission part but more on detection) in "What is Wrong with our Thoughts?" ( http://web.maths.unsw.edu.au/~jim/wrongthoughts.html ). I'm not sure there is a good solution, as it is almost impossible to know whether or not you are in the grip of some irrationality.

To give an example, it is conceptually easy to kill germs - bacteria simply can't handle wide swings in humidity, temperature, acidity, etc. Washing hands with soap and hot water, cooking food, using bleach, etc. are easy methods that reliably kill bacteria. They have an extremely low failure rate (anthrax is the toughest bacterium I know of, and it can be killed with enough bleach and ingenuity).

These limitations are caused by limitations in the fundamental processes that make life work. Metabolism has to happen in particular temperature ranges. Cell walls can only be made out of a few sorts of materials, and all of those materials react violently to extremely basic or acidic substances.

The basic problem is that, if there are analogous limitations to "mind viruses", we simply don't know what they are (beyond the trivial making the host commit suicide instantly).

The best I have come up with is the advice that Feynman gave in his "Cargo Cult Science" talk ( http://wwwcdf.pd.infn.it/~loreti/science.html ) - cultivate a brutal sense of honesty so that you have a small edge on the detection side of things.

Comment author: Viliam_Bur 08 September 2011 02:34:11PM *  1 point [-]

The example by David Stove gave me shivers. I only wish it was shorter -- not fewer examples, but shorter author's comments between them.

This discussion is about hand washing, but now I think more about vaccination. I feel like reading Stove's article vaccinated me against most of philosophy.

A good epistemic practice might be courage to say "this is nonsense" or "this is insane" when reading a thoughtless flow of words. Perhaps the karma system of LW should include a reason why someone voted text up or down. Reasons for upvote could be like "interesting", "well referenced" etc., reasons for downvote could be like "useless", "offensive" or "insane".

If some text does not make sense, members of rational community should have courage to say "this does not make sense to me". (People usually don't do this, because they fear it will make them appear stupid.) It is always a useful signal... at best it means that author should communicate more clearly, at worst it means that author wrote nonsense.

Comment author: ArisKatsaris 08 September 2011 05:07:35PM 1 point [-]

Perhaps the karma system of LW should include a reason why someone voted text up or down. Reasons for upvote could be like "interesting", "well referenced" etc., reasons for downvote could be like "useless", "offensive" or "insane".

Suggested implementation: Clicking upvote or downvote could make a tiny textbox next to the thumb appear where you can (but are NOT obliged to) type a maximum of 15 letters, explaining the vote in one word.

Reasons for upvotes appear in tiny green letters, reasons for downvotes appear in tiny red letters. Identical words are not repeated but a +<number of times mentioned> can appear next to them.
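The suggested mechanics are simple enough to sketch. Below is a minimal, purely illustrative model of the bookkeeping; the class and method names (`VoteTally`, `vote`, `render`) are hypothetical, not an existing LW feature:

```python
from collections import Counter

class VoteTally:
    """Sketch of the suggested karma widget: a vote carries an optional
    short reason (capped at 15 letters); identical reasons are merged and
    shown with a "+<count>" multiplicity marker."""
    MAX_REASON_LEN = 15

    def __init__(self):
        self.up = Counter()
        self.down = Counter()
        self.score = 0

    def vote(self, direction, reason=None):
        assert direction in (+1, -1)
        self.score += direction
        if reason:
            # normalize so "Insane" and "insane" merge into one entry
            reason = reason[: self.MAX_REASON_LEN].lower()
            (self.up if direction > 0 else self.down)[reason] += 1

    def render(self):
        def fmt(counter):
            return ", ".join(
                word if n == 1 else f"{word} +{n}"
                for word, n in counter.most_common()
            )
        return f"score {self.score} | up: {fmt(self.up)} | down: {fmt(self.down)}"
```

Whether "+2" should mean "mentioned twice" or "two extra mentions" is ambiguous in the proposal; the sketch assumes the former.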

Comment author: lessdazed 08 September 2011 03:28:16PM 0 points [-]

There is no getting away from it: the Logical Positivist nosology too is pitifully inadequate. Hegel just is different from Plotinus, and again from Foucault, and so on. Likewise, every specimen from (3) to (40) on my list is different from every other, as well as from the first two. Of course I cannot prove that all those things are different from one another, or even that any two of them are different. So if a Logical Positivist chose to dig in his heels, and insist that the ways in which thought can go wrong are all of them comprehended in the three categories of contingent falsity, self-contradiction, and unverifiability - well, I could not prove him wrong. But it is obvious enough that he is wrong. There are just more things in hell and earth than are dreamed of in his philosophy; thirty-odd more, at the least.

And yet there are philosophers, and beneficiaries of Logical Positivism at that, who actually propose, not to enlarge the Positivist nosology, but to contract it, to the point where it contains only one category! Now I ask you: what ought to be thought of a doctor, even in the most primitive state of medicine, who acknowledges the existence of only one disease? I am referring, of course, to Quine, who wants us to make do just with the category of contingent falsity:14 an excess of Positivist pedestrianism which deserves (though it will not receive in this book) an essay to itself.

That doctor would probably want to replace my broken parts with functional parts, rather than treat my diseases. The horror.

Just a very few of the labels used on this site are passwords I am thinking of, labels of the very few ways the forty are wrong. The resources enabling one to see underlying problems among the forty are on this website. However, it is better not to simply declare: "The problem behind most of them all is X", where X is a label. Someone might believe me!

Comment author: Annoyance 05 March 2009 06:50:53PM 3 points [-]

We can think of cholera transmission (or actually, any memetic spread) as consisting of a feedback loop.

There are positive and negative feedback loops, depending on what properties we're examining: positive loops lead to a greater and greater value of the property, while negative loops converge on some set value.

Ideally we want to set up our mental environments so that error is trapped in negative feedback loops and reduced as much as possible, while correctness is amplified. In terms of assigned probability, wrongness should go to zero and correctness to one.

The methods for bringing this about are widely known but, oddly, not widely recognized and even less widely applied. They're called logic.
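The positive/negative distinction above is just the sign and magnitude of the loop gain; a toy linear model (illustrative only, not from the comment) makes it concrete:

```python
def iterate(gain, x0=1.0, steps=20):
    """Linear feedback loop x_{n+1} = gain * x_n. A magnitude below 1
    damps the quantity toward zero (negative feedback, converging on a
    set value); a magnitude above 1 amplifies it without bound
    (positive feedback)."""
    x = x0
    for _ in range(steps):
        x *= gain
    return x
```

Error trapped in a damping loop behaves like the gain-0.5 case; correctness amplified toward probability one behaves like the gain-1.5 case.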

Comment author: RobinHanson 05 March 2009 01:21:04PM 4 points [-]

The most promising concrete suggestion I see here is to adopt verbal conventions for distinguishing direct and indirect evidence. I'm not sure the word "impression" really connotes direct evidence, though with enough consistent usage in that mode we might carve out a common meaning to that effect. But we actually have a whole range of indirection; where would the cutoff in that range be? If I actually looked something up recently in an encyclopedia, while someone else just vaguely remembers looking it up sometime long ago, is that my impression or my belief?

Comment author: anonym 08 March 2009 12:01:47AM 2 points [-]

The indication of the (kind of) evidence for a statement is known as evidentiality in linguistics.

The wikipedia article referenced above gives the example of Eastern Pomo, in which a verb takes one of 4 evidential suffixes, corresponding to the type of evidence: nonvisual sensory, inferential, hearsay, or direct knowledge (probably visual).

Comment author: timtyler 21 March 2012 08:56:57PM *  2 points [-]

Are there differences between religions that are passed down vertically (parent to child) vs. horizontally (peer to peer), since the former do better when their bearers raise more children?

For religions, perhaps see Ben's: Parasite Ecology and the Evolution of Religion.

Comment author: Johnicholas 05 March 2009 02:39:36PM 2 points [-]

There is a notion of an "information cascade", which I think is relevant to this question.

As I understand it, if individuals have private information (individual impressions), and also observe other individuals' public actions (beliefs), then it is possible that the group "cascades" to a worse result than one might at first expect.

I don't understand the idea well, my summary may be inaccurate or clumsy.
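The standard toy model behind this (Bikhchandani, Hirshleifer, and Welch's sequential-choice setup) can be sketched in a few lines. The naive rule of counting each earlier public choice as if it were one extra independent signal is an assumption of the sketch:

```python
import random

def run_cascade(true_state=1, p=0.7, n_agents=30, seed=0):
    """Toy information cascade: each agent receives a private binary
    signal matching the true state with probability p, then chooses
    by tallying earlier public choices as if they were signals."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < p else 1 - true_state
        net = sum(1 if c == 1 else -1 for c in choices)  # public evidence
        net += 1 if signal == 1 else -1                  # private signal
        # follow the net count; on a tie, go with the private signal
        choices.append(1 if net > 0 else (0 if net < 0 else signal))
    return choices
```

Once the public count leads by two it outweighs any single private signal, so every later agent imitates regardless of what they privately observe; the group can lock in the wrong answer even though most private signals were correct.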

Comment author: Jack 05 March 2009 09:55:52AM 3 points [-]

First, I'd caution against reflexively questioning appeals to authority. Arguments from authority are not fallacies despite their traditional classification as such. There is no way for an individual to experimentally verify even a small fraction of the things she counts as knowledge- it would be an absurd and unnecessary barrier. Indeed, I think cautioning against arguments from authority is a kind of keeping kosher- an outdated purity norm that is no longer necessary given modern science and method. Once upon a time it made great sense to distrust experts because the experts were often bullshitting and there were few checks to prevent them from doing so. Similarly, now we know how to cook our shellfish and so you're not likely to get sick from eating scallops.

The problem, on the contrary, is claims being passed off as if the maker of the claim has in fact read the experts when they have not. Particularly false claims that do not contradict common sense go by undetected - and do not die. I'm thinking here of something like "Eskimos have an extraordinary number of words for snow because they're around it all the time" (http://en.wikipedia.org/wiki/Eskimo_words_for_snow). Snopes is obviously a fantastic resource in this regard, but if we want to stop the spread of empirically false beliefs I might suggest dramatically expanding the use of wikipedia's "citation needed" demand. What if, instead of citing claims on occasion or as requested, every comment was just assumed to need a citation? If a claim lacked a citation, a dozen Less Wrong commenters immediately responded with just the words "citation needed?". If the original poster wants to avoid this, she simply includes a citation or gives a reason why she didn't: "I'm just guessing" or "There are no empirical claims here" etc. Eventually we'd just come to expect a citation or some sort of explanation, and if we didn't see one we'd know to immediately question the claim.

(I don't believe I've made any non-obvious empirical claims, but if someone wants to see evidence regarding the superiority of modern science as compared to medieval scholarship I can find that)

Comment author: thomblake 05 March 2009 03:22:01PM 4 points [-]

While I think you might be on the right track with respect to Wikipedia, this wouldn't really work in casual (or even scholarly) discourse. There are a lot of things of which I'm confident and don't have an immediately available justification, and tracking them down would be so time-consuming that I just wouldn't bother to comment on anything.

Also, there is a disanalogy between Wikipedia and other kinds of scholarship; Wikipedia does not allow original research, in which the appropriate citation for a claim might be the preceding argument, and so should not be explicitly stated.

There are two cases where argument from authority is still clearly fallacious:

  1. respecting the authority of someone who is not an expert in the appropriate field - for instance, taking the Pope's word on evolutionary biology

  2. regarding the authority as itself what gives truth to the claim - This happens, for instance, when one makes appeals to one's own authority. If someone asks me for a citation and I say "I'm an expert, and I say so" then that's insufficient.

P.S. You should change that URL to a link so MarkDown doesn't eat it.

Comment author: whpearson 05 March 2009 05:04:27PM 1 point [-]

The trouble with only passing on verified ideas is that it stops you from being able to pass on ideas you wish to get verified that need significant resources, and the help of others, to do so - e.g. the Higgs boson, an AI theory, a new low-level computer design.

So perhaps a way of coding ideas, e.g. 'in need of testing', 'tested myself', 'tested second-hand', 'publicly available test data'.

Colour coding comments on a forum might be a good place to test this kind of scheme. Then people can easily discriminate what is verified and what is speculative.

Comment author: William_Quixote 10 September 2012 12:22:41AM *  0 points [-]

I wonder how “playing devil’s advocate” fits into the epistemic hygiene / good cognitive citizenship world view.

On the one hand, it can reduce groupthink and broaden the range of areas considered. On the other hand, it’s called devil’s advocate because you are advocating what are presumably bad ideas. If they are advocated too well, or you are not ‘flagged’ as operating in the devil’s advocate role, you might actually be spreading bad ideas.

I was thinking about this subject because I tend to slip into the devil’s advocate role in IRL conversations, and was pondering whether the fact that I spend a lot of time advocating ideas I don’t support might be epistemically harmful (or at least a low-value use of time).

Edit: I distinguish this role in casual conversation from a more formal red team approach (which would be known to all team members and so not at risk of mistaking the motivation behind advocacy)

Comment author: Vaniver 10 September 2012 12:55:59AM *  5 points [-]

On the other hand, it’s called devil’s advocate because you are advocating what are presumably bad ideas.

The term originated with the canonization of saints. The Devil's Advocate was the lawyer tasked with making the argument that the person up for sainthood didn't actually deserve to be recognized as a saint- either the miracles associated with them were faked / not actually miraculous, they did something during their life that the Catholic Church wouldn't want associated with a Saint, or so on. Another lawyer, God's Advocate, was tasked with making the case for sainthood.

The practice was abolished in 1983, which opened the floodgates for granting sainthood as it made the process faster and less difficult. Every now and then, someone will be asked to testify against the potential saint- as Christopher Hitchens famously was with Mother Teresa- and his investigation of the claimed 'miracle' seemed like a pretty clear debunking to me.

In its original form, the Devil's Advocate basically represents not extending the benefit of the doubt to proposed ideas, but examining them critically, and seems like a perfect example of good epistemic hygiene and formal red teaming.

A somewhat more productive interpretation of the conversational approach is probably steel manning, the inversion of straw manning.

Comment author: William_Quixote 10 September 2012 11:11:42AM 1 point [-]

Thanks for the information. Though seeing how formal the original “devil’s advocate” was again makes me worry about the wisdom of doing the same informally. Searching for patterns, it seems like the lauded examples of this are all formal and well flagged.

Comment author: timtyler 21 March 2012 10:02:42PM *  0 points [-]

Peter Richerson here says:

The hypothesis is that as the avenues of nonparental transmission increase, cultural variants that exploit us to reproduce themselves at the expense of our genetic fitness have had an ever easier time spreading.

He goes on to give some data about that.

My favourite example of this sort of thing is the demographic transition in meme-rich Japan. The native humans there live for a long time - but they are practically sterile.

Comment author: timtyler 21 March 2012 08:48:40PM *  0 points [-]

To quote from my 2011 book on memetics:

As Robert Wright (1999) points out:

The more easily viruses are transmitted from body to body, the less their fertility depends on their hosts' survival. So highly lethal viruses tend to evolve in urban areas.

Infectious diseases typically benefit from high host population densities. Many memes certainly like to be in areas where there are lots of people. Modern memes have successfully manipulated many of their human hosts into living together in cities - attaining high population densities. With human cooperation, they have invented high-rise apartment and office blocks - to cram the humans together as tightly as possible - which just happens to create an environment that maximizes the rate of meme spread between humans - allowing memes to evolve and adapt to their hosts faster.

Much the same point applies to the internet.

Comment author: Vaniver 22 March 2012 01:50:06AM 0 points [-]

It's not clear to me that high-rise apartment and office blocks foster meme spreading more than a hub-and-spokes model of broadcasters and tight-knit communities would. (This may just be a nitpick about 'maximizes', or it may lead to a more subtle point. I'm not quite sure which it is.)