
Publication: the "anti-science" trope is culturally polarizing and makes people distrust scientists

13 ancientcampus 07 February 2014 05:09PM

Paper by the Cultural Cognition Project: The culturally polarizing effect of the "anti-science trope" on vaccine risk perceptions

This is a great paper (indeed, I think many at LW would find the whole site enjoyable). I'll try to summarize it here.

Background: The pro/anti-vaccine debate has been heated recently. Pro-vaccine people often say, "The science is strong, the benefits are obvious, and the risks are negligible; if you're anti-vaccine, then you're anti-science."

Methods: They showed experimental subjects an article basically saying the above.

Results: After reading such an article, many people did not trust vaccines more; rather, they trusted the American Academy of Pediatrics less.

 

My thoughts: I will strive to avoid labeling anybody as "anti-science" or "simply or willfully ignorant of current research," even when speaking of hypothetical third parties on my Facebook wall. This holds for evolution, global warming, vaccines, etc.

///

Also included in the article: references to other research that shows that evolution and global warming debates have already polarized people into distrusting scientists, and evidence that people are not yet polarized over the vaccine issue.

If you intend to read the article yourself: I found it difficult to understand how the authors divided participants into the four quadrants (α, β, etc.). I will quote my friend, who explained it for me:

I was helped by following the link to where they first introduce that model.

The people in the top left (α) worry about risks to public safety, such as global warming. The people in the bottom right (δ) worry about socially deviant behaviors, such as could be caused by the legalization of marijuana.

People in the top right (β) worry about both public safety risks and deviant behaviors, and people in the bottom left (γ) don't really worry about either.

From Capuchins to AIs: Setting an Agenda for the Study of Cultural Cooperation (Part 1)

-3 diegocaleiro 27 June 2013 06:08AM
This is a multi-purpose essay in the making, written with the following goals: 1) fulfilling a mandatory essay requirement at the end of a semester studying "Cognitive Ethology: Culture in Human and Non-Human Animals"; 2) drafting something that can later be published in a journal dealing with cultural evolution, hopefully inclining people in the area to glance at future-oriented research, i.e. FAI and global coordination; 3) publishing it on LessWrong; and 4) ultimately saving the World, as everything should. If it's worth doing, it's worth doing in the way most likely to save the World.
Since my writings are frequently too long for LessWrong, I'll publish this in a sequence-like form made of self-contained chunks. My deadline is Sunday, so I'll probably post daily, editing and creating new sections based on previous commentary.


Abstract: The study of cultural evolution has drawn much of its momentum from academic areas far removed from human and animal psychology, especially regarding the evolution of cooperation. Game-theoretic results and parental investment theory come from economics, kin selection models from biology, and an ever-growing number of models describing the process of cultural evolution in general, and the evolution of altruism in particular, from mathematics. Even Artificial Intelligence has taken an interest in how to create agents that can communicate, imitate and cooperate. In this article I begin to tackle the 'why?' question. By trying to retrospectively make sense of the convergence of all these fields, I contend that further refinements in these fields should be directed toward understanding how to create environmental incentives fostering cooperation.

We need systems that are wiser than we are. We need institutions and cultural norms that make us better than we tend to be. It seems to me that the greatest challenge we now face is to build them. - Sam Harris, 2013, The Power Of Bad Incentives

1) Introduction

2) Cultures evolve

Culture is perhaps the most remarkable outcome of the evolutionary algorithm (Dennett, 1996) so far. It is the cradle of most things we consider humane - that is, typically human and valuable - and it surrounds our lives to the point that we may be thought of as creatures made of culture even more than creatures of bone and flesh (Hofstadter, 2007; Dennett, 1992). The appearance of our cultural complexity has relied on many associated capacities, among them:

1) The ability to notice, take interest in, and approach an individual doing something interesting, an ability we share with Norway rats, crows, and even lemurs (Galef & Laland, 2005).

2) The ability to learn from, and scrounge food from, whoever knows how to get it, shared with capuchin monkeys (Ottoni et al, 2005).

3) Ability to tolerate learners, to accept learners, and to socially learn, probably shared by animals as diverse as fish, finches and Fins (Galef & Laland, 2005).

4) Understanding and emulating other minds (Theory of Mind): empathizing, relating, perhaps re-framing an experience as one's own, shared by chimpanzees, dogs, and at least some cetaceans (Rendell & Whitehead, 2001).

5) Learning the program-level description of the actions of others, for which the evidence among other animals is controversial (but see Cantor & Whitehead, 2013). And finally...

6) Sharing intentions: an intricate understanding of how two minds can collaborate on complementary tasks to achieve a mutually agreed goal (Tomasello et al, 2005).

Irrespective of definitional disputes around the true meaning of the word "culture" (which doesn't exist; see e.g. Pinker, 2007, p. 115; Yudkowsky, 2008A), each of these abilities is more cognitively complex than its predecessor, and even (1) is sufficient for intra-specific, non-environmental, non-genetic behavioral variation, which I will call "culture" here, whomever it may harm.

By transitivity, (2-6) also allow the development of culture. It is interesting to note that tool use, frequently but wrongly cited as the hallmark of culture, is scattered unevenly across the animal kingdom. A graph showing, per biological family, which species exhibit tool use gives us a power-law distribution; its similarity to the universal prior helps in understanding that belonging to a family in which some species uses tools tells us very little about a given species' own tool use (Michael Haslam, personal communication).

Once some of those abilities are available, and given some amount of environmental opportunity, need, and randomness, cultures begin to form. Occasionally, so do more developed traditions. Be it by imitation, program-level imitation, goal emulation or intention sharing, information is transmitted between agents, giving rise to elements sufficient to constitute a primeval Darwinian soup. That is, entities form that exhibit 1) variation, 2) heredity or replication, and 3) differential fitness (Dennett, 1996). In light of the article Five Misunderstandings About Cultural Evolution (Henrich, Boyd & Richerson, 2008), we can refine Dennett's conditions for the evolutionary algorithm as 1) discrete or continuous variation, 2) heredity, replication, or less faithful replication plus content attractors, and 3) differential fitness. Once this set of conditions is met, an evolutionary algorithm, or many, begins to carve its optimizing paws into whatever surpassed the threshold for long enough. Cultures, therefore, evolve.
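The three conditions can be made concrete with a toy simulation (an entirely hypothetical bit-string "meme" population, not a model from any of the cited papers): copying supplies heredity, copying errors supply variation, and biased copying supplies differential fitness.

```python
import random

random.seed(0)

def evolve(population, generations, mutation_rate=0.05):
    """Toy evolutionary algorithm over bit-string 'memes', exhibiting
    (1) variation, (2) heredity via copying, (3) differential fitness."""
    for _ in range(generations):
        # Differential fitness: memes with more 1-bits are copied more often.
        weights = [sum(m) + 1 for m in population]
        # Heredity: offspring are copies of fitness-weighted parents...
        offspring = random.choices(population, weights=weights, k=len(population))
        # ...with variation: occasional copying errors (mutation).
        population = [
            [bit ^ (random.random() < mutation_rate) for bit in m]
            for m in offspring
        ]
    return population

pop = [[0] * 8 for _ in range(50)]      # founding population: all-zero memes
final = evolve(pop, generations=100)
mean_fitness = sum(sum(m) for m in final) / len(final)
```

Despite starting from a uniform population with zero fitness, variation plus selection drives the mean number of 1-bits well up; that climb is the "optimizing paw" in miniature.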

The intricacies of cultural evolution, and the mathematical and computational models of how cultures evolve, have been the subject of much interdisciplinary research. For an extensive account of human culture, see Not By Genes Alone (Richerson & Boyd, 2005). For computational models of social evolution, there is work by Mesoudi, Nowak, and others, e.g. (Hauert et al, 2007). For mathematical models, the aptly named Mathematical Models of Social Evolution: A Guide for the Perplexed by McElreath and Boyd (2007) provides a textbook-style walkthrough. For animal culture, see Laland & Galef (2009).

Cultural evolution satisfies David Deutsch's criterion for existence: it kicks back. It satisfies the evolutionary equivalent of the condition posed by the Quine-Putnam indispensability argument in mathematics, i.e. it is a sine qua non for understanding how the World works nomologically. It is falsifiable as to its Popperian content, and it inflates the World's ontology a little by inserting a new kind of "replicator": the meme. Contrary to what happened on the internet, the name 'meme' has lost much of its appeal among cultural evolution theorists, and "memetics" is considered by some to refer only to the study of memes as monolithic, atomic, high-fidelity replicators, which would make the theory obsolete. This has created a conundrum: the name 'meme' remains by far the best-known way to speak of "that which evolves culturally" within, and especially outside, the specialist arena. Further, the niche occupied by the word 'meme' is so conceptually necessary for communication and explanation within the area that it is frequently put under scare quotes, or given some other informal excuse. In fact, as argued by Tim Tyler - who frequently posts here - in the very sharp Memetics (2011), there are nearly no reasons to try to abandon the 'meme' meme, and nearly all reasons (practicality, Qwerty reasons, mnemonics) to keep it. To avoid contradicting the evidence accumulated since Dawkins first coined the term, I suggest we redefine a meme as an attractor in cultural evolution (dual inheritance) whose development over time structurally mimics, to a significant extent, the discrete behavior of genes, frequently coinciding with the smallest unit of cultural replication. The definition is long, but the idea is simple: memes are the best analogues of genes not because they are discrete units that replicate just like genes, but because they are continuous conceptual clusters being attracted to a point in conceptual space whose replication is just like that of genes.
Even more simply, memes are the mathematically closest things to genes in cultural evolution. So the suggestion here is for researchers of dual inheritance and cultural evolution to take the scare quotes off our memes and keep business as usual.

The evolutionary algorithm has created a new attractor-replicator, the meme; it privileged no specific families in the biological tree with it, and it ended up creating a process of cultural-genetic coevolution known as dual inheritance. This process has been studied in ever more quantified ways by primatologists, behavioral ecologists, population biologists, anthropologists, ethologists, sociologists, neuroscientists and even philosophers. I have shown at least six distinct abilities which helped scaffold our astounding level of cultural intricacy, and some animals that share them with us. We will now take a look at the evolution of cooperation, collaboration, altruism, and moral behavior, a sub-area of cultural evolution that has seen an explosion of interest and research during the last decade, with publications (most from the last 4 years) such as The Origins of Morality, Supercooperators, Good and Real, The Better Angels of Our Nature, Non-Zero, The Moral Animal, Primates and Philosophers, The Age of Empathy, Origins of Altruism and Cooperation, The Altruism Equation, Altruism in Humans, Cooperation and Its Evolution, Moral Tribes, The Expanding Circle, The Moral Landscape.

3) Cooperation evolves

Briefly describe why, and show some inequalities under which cooperation is an equilibrium, or at least an Evolutionarily Stable Strategy.
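One standard inequality of this kind is Axelrod's stability condition for Tit-for-Tat against All-Defect in the iterated prisoner's dilemma; it is offered here as an illustrative sketch, not as the content the author planned for this section.

```python
def tft_resists_alld(T, R, P, w):
    """Tit-for-Tat is stable against All-Defect in the iterated prisoner's
    dilemma with continuation probability w iff
        R/(1-w) >= T + w*P/(1-w),
    which rearranges to w >= (T - R) / (T - P): cooperation is an
    equilibrium when the shadow of the future is long enough."""
    v_tft  = R / (1 - w)          # mutual cooperation forever
    v_alld = T + w * P / (1 - w)  # one exploitation, then mutual defection
    return v_tft >= v_alld

# Axelrod's classic payoffs T=5, R=3, P=1 give a threshold of w = 0.5:
assert tft_resists_alld(5, 3, 1, w=0.6)
assert not tft_resists_alld(5, 3, 1, w=0.4)
```

The point of the inequality is qualitative: no fixed payoff matrix makes cooperation stable by itself; only a sufficiently high probability of meeting again does.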

4) The complexity of cultural items doesn't undermine the validity of mathematical models.

 4.1) Cognitive attractors and biases substitute for memes' discreteness

The math becomes equivalent.

 4.2) Despite the Unilateralist Curse and the Tragedy of the Commons, dyadic interaction models help us understand large scale cooperation

Once we know these two failure modes, dyadic iterated (or reputation-sensitive) interaction is close enough.

5) From Monkeys to Apes to Humans to Transhumans to AIs: the range of achievable altruistic skill.

Possible modes of being altruistic. Graph like Bostrom's. Second and third order punishment and cooperation. Newcomb-like signaling problems within AI.

6) Unfit for the Future: the need for greater altruism.

We fail and will remain failing in Tragedy of the Commons problems unless we change our nature.

7) From Science, through Philosophy, towards Engineering: the future of studies of altruism.

Philosophy: Existential Risk prevention through global coordination and cooperation prior to technical maturity. Engineering Humans: creating enhancements and changing incentives. Engineering AI's: making them better and realer.

8) A different kind of Moral Landscape

Like Sam Harris's, except comparing not how much a society approaches The Good Life (Moral Landscape, p. 15), but how much it fosters altruistic behaviour.

9) Conclusions

I haven't written yet, so I don't have any!

Bibliography (Only of the part already written, obviously):

Cantor, M., & Whitehead, H. (2013). The interplay between social networks and culture: theoretically and among whales and dolphins. Philosophical Transactions of the Royal Society B: Biological Sciences, 368(1618).

Dennett, D. C. (1996). Darwin's Dangerous Idea: Evolution and the Meanings of Life. Simon & Schuster.

Dennett, D. C. (1992). The self as a center of narrative gravity. In Self and Consciousness: Multiple Perspectives.

Galef Jr, B. G., & Laland, K. N. (2005). Social learning in animals: empirical studies and theoretical models. BioScience, 55(6), 489-499.

Hauert, C., Traulsen, A., Brandt, H., Nowak, M. A., & Sigmund, K. (2007). Via freedom to coercion: the emergence of costly punishment. Science, 316(5833), 1905-1907.

Henrich, J., Boyd, R., & Richerson, P. J. (2008). Five misunderstandings about cultural evolution. Human Nature, 19(2), 119-137.

Hofstadter, D. R. (2007). I Am a Strange Loop. Basic Books.

McElreath, R., & Boyd, R. (2007). Mathematical Models of Social Evolution: A Guide for the Perplexed. University of Chicago Press.

Ottoni, E. B., de Resende, B. D., & Izar, P. (2005). Watching the best nutcrackers: what capuchin monkeys (Cebus apella) know about others' tool-using skills. Animal Cognition, 8(4), 215-219.

Persson, I., & Savulescu, J. (2012). Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.

Pinker, S. (2007). The Stuff of Thought: Language as a Window into Human Nature. Viking Adult.

Rendell, L., & Whitehead, H. (2001). Culture in whales and dolphins. Behavioral and Brain Sciences, 24, 309-382.

Richerson, P. J., & Boyd, R. (2005). Not By Genes Alone. University of Chicago Press.

Tyler, T. (2011). Memetics: Memes and the Science of Cultural Evolution. Tim Tyler.

Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675-690.

Yudkowsky, E. (2008A). 37 Ways That Words Can Be Wrong. Available at http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/

Call For Agreement: Should LessWrong have better protection against cultural collapse?

3 Epiphany 03 September 2012 05:35AM

As you are probably already aware, many internet forums experience a phenomenon known as "eternal September".  Named after a temporary effect where the influx of college freshmen would throw off a group's culture every September, eternal September is essentially what happens when standards of discourse and behavior degrade in a group to the point where the group loses its original culture.  I began focusing on solving this problem and offered to volunteer my professional web services to get it done because:

- When I explained that LessWrong could grow a lot and volunteered to help with growth, various users expressed concerns about growth not always being good because having too many new users at once can degrade the culture.

- There has been concern from Eliezer about the site "going to hell" because of trolling.

- Eliezer has documented a phenomenon that subcultures know as infiltration by "poseurs" happening in the rationalist community.  He explains that rationalists are beginning to be inundated by "undiscriminating skeptics," and has stated that it's bad enough that he needed to change his method of determining who is a rationalist.  The appearance of poseurs doesn't guarantee that a culture will be washed away by main-streamers, but it may signal that a culture is headed in that direction, and it does confirm that a loss of culture is a possibility - especially if there came to be so many undiscriminating skeptics that they formed their own culture and became the new majority at LessWrong.

  My plan to prevent eternal September sparked a debate about whether such protection is warranted.  Lukeprog, whose approval I need in order to do this as a volunteer, requested that I debate this with him because he was not convinced but might change his mind.

 

Here are some theories about why eternal September happens:

1. New to old user ratio imbalance:

  New users need time to adjust to a forum's culture.  Getting too many new users too fast will throw off the ratio of new to old users, meaning that most new users will interact with each other rather than with older users, changing the culture permanently.

2. Groups tend to trend toward the mainstream:

  Imagine some people want to start a group.  Why are they breaking away from the mainstream?  Because their needs are served there?  Probably not.  They most likely have some kind of difference that makes them want to start their own group.  Of course, not everyone fits neatly into "different" and "mainstream," no matter what type of difference you look at.  So, as a forum grows, instead of attracting only people who fit neatly into the "different" category, you attract people who are merely similar to them.  People at the mainstream end of the spectrum generally are not attracted to things that are very different.

  Imagine how this progresses over time on a scale between green and purple, where the green people are different and the purple people are mainstream.  Some of the most solidly green folks make a green forum.  Next, people who are green-ish join: those with an extra tinge of red or blue or yellow.  People in the mainstream still aren't attracted; however, since there are more in-between people than solid green or purple people, the most greenish in-between people begin to dominate.  They and the original green people still enjoy conversation - they're similar enough to share the culture and enjoy mutual activities.  But the greenish in-between people start to attract in-between people who are neither more purple nor more green.  There are more in-between people than greenish in-between or green people, because purple people dominate the larger culture, so in-between people quickly outnumber the green people.  This may still be fine, because they may adjust to the culture and enjoy it, finding it a refreshing alternative to purple culture.  But the in-between people attract people who are more purplish in-betweeners than greenish in-betweeners.  There are more of those than there are in-between people, so the culture now shifts closer to mainstream purple than to different green.

  At this point, the forum begins to attract the attention of the solid purple main-streamers.  "Oh!  Our culture, but with a twist!" they think.  Now droves of purple mainstream people deluge the place looking for "something a little different."  Instead of valuing the culture and wanting to assimilate, they just want to enjoy novelty.  So they demand changes to things they don't like, to make the place suit them better.  They justify this by saying that they're the majority.  At that point, they are.

3.  Too many trolls scare away good people and throw off the balance.
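The drift described in theory 2 can be sketched as a toy model.  Everything here is a made-up illustration: a 0-to-1 green/purple axis, a similarity tolerance of 0.15 per generation, and newcomers who quickly come to equal incumbents in number.

```python
def simulate_drift(founder=0.0, mainstream=1.0, tolerance=0.15, generations=10):
    """Each generation, the forum attracts newcomers whose average position
    sits as far toward the mainstream as the similarity tolerance allows;
    because newcomers soon match the old guard in number, the forum mean
    ratchets toward the mainstream by up to tolerance/2 per generation."""
    mean = founder
    history = [mean]
    for _ in range(generations):
        newcomer_mean = min(mean + tolerance, mainstream)
        mean = 0.5 * (mean + newcomer_mean)   # newcomers dilute the old guard
        history.append(round(mean, 3))
    return history

history = simulate_drift()   # mean climbs from 0.0 (green) toward 1.0 (purple)
```

With these numbers the mean advances 0.075 per generation, reaching 0.75 after ten generations; no single cohort of joiners looks out of place, yet the founders end up far from the forum's center of gravity.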

 

Which theory is right?


  All of them likely play a role.

 

  I've seen for myself that trolls can scare the best people out of a forum, ruining the culture. 

  I've heard time and time again that subculture movements have problems with being watered down by mainstream folks until their cultures die and no longer feel worth it to the original participants.  A lot of you have probably heard the term "poseurs".  With poseurs in a subculture, it's not that too many new people joined at once, but that the wrong sort of people joined.  The view is that there are people who are different enough to "get" the movement, and people who are not.  Those who aren't similar decide to try to appear like members even though they're not like them on the inside.  Essentially, a large number of people much nearer to the mainstream get involved, so the group is no longer a haven for people with their differences.

  And I think it's a no-brainer that if a group gets enough newbies at once, old members can't help them adjust to the culture, and the newbies will form a new culture and become a new majority.

  Also, I think all of these can combine together, create feedback loops, and multiply the others.

 

Theory about cause and effect interactions that lead to endless September:

 1.  A group of people who are very different break away from the mainstream and form a group.
 2.  People who are similarly different but not AS different join the group.
 3.  People who are similar to the similarly different people, but even less similar to the different people join the group.
 4.  It goes on this way for a while.  Since there are necessarily more people who are mainstream than different, new generations of new users may be less and less like the core group.
 5.  The group of different people begins to feel alienated from the new people who are joining.
 6.  The group of different people begins to ignore the new people.
 7.  The new people form their own culture with one another, excluding the old people, because the old people are ignoring them.
 8.  Old people begin to anticipate alienation and start to see new users through tinted lenses, expecting annoyance.
 9.  New people feel alienated by the insulting misinterpretations caused by the expectation that they're going to be annoying.
10.  The unwelcoming environment selects for thick-skinned people, so a higher proportion of people like trolls, spammers, and debate junkies are active.
11.  Enough new people who were ignored and failed to acculturate accumulate, resulting in a new majority.  If trolls are kept under control, the new culture will be a watered-down version of the original, possibly not much different from mainstream culture.  If not, see the final possibility.
12.  If a critical mass of trolls, spammers and other alienating thick-skinned types is reached, due to an imbalance or inadequate methods of dealing with them, they might ward off old users, exacerbating the imbalance that draws a disproportionate number of thick-skinned types in a feedback loop, and then take over the forum.  (This is why 4chan's /b/ isn't known for having sweet little girls and old ladies.)
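The chain above amounts to a capacity argument: acculturation fails once the influx outruns the old guard's mentoring bandwidth.  A minimal sketch of that dynamic (the `old_users` and `mentor_capacity` figures are illustrative assumptions, not site data):

```python
def culture_after(months, influx, old_users=1000, mentor_capacity=0.3):
    """Toy acculturation model: each month, acculturated members can absorb
    at most `mentor_capacity` newcomers per member; the excess joins an
    unacculturated pool that never adopts the original culture."""
    acculturated, unacculturated = old_users, 0
    for _ in range(months):
        absorbed = min(influx, int(mentor_capacity * acculturated))
        acculturated += absorbed
        unacculturated += influx - absorbed
    return acculturated, unacculturated

# 300 newcomers/month: mentoring capacity keeps pace and the culture holds.
a1, u1 = culture_after(12, influx=300)
# 900 newcomers/month: capacity is outrun early, leaving a large
# unacculturated pool from which a rival majority can form.
a2, u2 = culture_after(12, influx=900)
```

The threshold behavior, not the particular numbers, is the point: below capacity nothing visible happens, while above it an alternative culture accumulates even though every individual month looks manageable.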

 

Is LessWrong at risk?

  1.  Eliezer has written about rationalists being infiltrated by main-streamers who don't get it, aka "poseurs".

  Eliezer explains in Undiscriminating Skeptics that he can no longer determine who is a rationalist based on how they react to the prospect of religious debates, and now he has to determine who is a rationalist based on who is thinking for themselves.  This is the exact same problem other subcultures have - they say the new people aren't thinking for themselves.  We might argue "but we want to spread the wonderful gift of rational thought to the mainstream!" and I would agree with that.  However, if all they're able to take away from joining is that there are certain things skeptics always believe, all they'll be taking away from us is an appeal to skepticism.  That's the kind of thing that happens when subcultures are over-run by mainstream folks.  They do not adopt the core values.  Instead, they run roughshod over them.  If we want undiscriminating skeptics to get benefits from refining the art of rationality, we have to do something more than hang out in the same place.  Telling them that they are poseurs doesn't work for subcultures, and I don't think Eliezer telling them that they're undiscriminating skeptics will solve the problem.  Getting people to think for themselves is a challenge that should not be undertaken lightly.  To really get it, and actually base your life on rationality, you've either got to be the right type, a "natural" who "just gets it" (like Eliezer who showed signs as a child when he found a tarnished silver amulet inscribed with Bayes's Theorem) or you have to be really dedicated to self-improvement.

  2. I have witnessed a fast-growing forum actually go exponential.  Nothing special was being done to advertise the forum. 

  Obviously, this risks deluging old members in a sea of newbies that would be large enough to create a newbie culture and form a new majority.

  3. LessWrong is growing fast and it's much bigger than I think everyone realizes.

  I made a LessWrong growth bar graph showing how LessWrong has gained over 13,000 members in under 3 years (Nov 2009 - Aug 2012).  LessWrong had over 3 million visits in the last year.  The most popular post has gotten over 200,000 views.  Yes, I mean there are posts on here that are over 1/5 of the way to a million views; I did not mistype.  This is not a tiny community website anymore, yet I see signs that people are still acting as if it were, like when people post their email addresses on the forum.  People don't seem to realize how big LessWrong has gotten.  Since this happened in a short time, we should be wondering how much further it will go, and planning for the contingency that it could become huge.

  4. LessWrong has experienced at least one wild spike in membership.  Spikes can happen again.

  We can't control the ups and downs in visitors to the site.  A spike could happen again, and it could last longer than a month.  According to Vladimir, using wget, we've got something like 600-1000 active users posting per month, and the registration statistics show about 300 users joining per month.  What would happen if we got 900 each month for a few months in a row?  A random spike could conceivably overwhelm the members.

  5. Considering how many readers it has, LessWrong could get Slashdotted by somebody big.

  If you've ever read about the Slashdot effect, you'll know that all it might take to get a deluge bigger than we can handle is to be linked to by somebody big.  What if Slashdot links to LessWrong?  Or somebody even bigger?  We have at least one article on LessWrong that got about half as many visits as a hall-of-fame-level Slashdot article: "Scientologists Force Comment Off Slashdot" got 383,692 visits on Slashdot, compared with LessWrong's most popular article at 211,000 visits (cite: Slashdot hall of fame).  LessWrong is gaining popularity fast; it's not a small site anymore, and there are a lot of places that could Slashdot us.  It may be just a matter of time before somebody pays attention, does an article on LessWrong, and the site gets flooded.

  6. We all want to grow LessWrong, and people may cause rapid growth before thinking about the consequences.

  What if people start growing LessWrong and wildly succeed?  I would like to be helping LessWrong grow but I don't want to do it until I feel the culture is well-protected.

  7. Some combination of these things might happen and deluge old people with new people.

 

Does LessWrong need additional eternal September protection?

  Lukeprog's main argument is that we don't have to worry about eternal September because we have vote downs. Here's why vote downs are not going to protect LessWrong:

  1.  If the new to old user ratio becomes unbalanced, or the site is filled with main streamers who take over the culture, who is going to get voted down most?  The new users, or the old ones?  The old members will be outnumbered, so it will likely be old members.

  2. This doesn't prevent new users from interacting primarily with new users.  If enough people join, there may not be enough old users doing vote downs to discourage them anymore.  That means if the new to old user ratio were to become unbalanced, new users may still interact primarily with new users and form their own, larger culture, a new majority.

  3.  Let's say 4chan's /b/ decides to visit.  A hundred trolls descend upon LessWrong.  The trolls, like everybody else, have the ability to vote down anything they want.  They will of course enjoy harassing us endlessly with vote downs.  They will especially enjoy the fact that it only takes three of them to censor somebody, and they will find it a really special treat that we've made it so that anybody who responds to a censored person ends up getting points deducted.  From a security perspective, this is probably one of the worst things you could do.  I came up with an idea for a much-improved vote down plan.

 

Possibly more important: What happens if we DO prevent an eternal September?

  What we are deciding here is not simply "do we want to protect this specific website from cultural collapse?" but "How do we want to introduce the art of refining rationality to the mainstream public?"

  Why do main streamers deluge new cultures and what happens after that?  What do they get out of it?  How does it affect them in the long-term?  Might being deluged by main streamers make it more likely for main streamers to become better at rational thought, like a first taste makes you want more?

  If we kept them from doing that, what would happen, then? 

  Say we don't have a plan.  LessWrong is hit by more users than it can handle.  Undiscriminating skeptics are voting down every worthwhile disagreement.  So, as an emergency measure, registrations are shut off; the number of visits to the website grows and then falls.  We succeed in keeping out people who don't get it.  After traffic has peaked, the fad is over.  Worse, we've put those people off and they're offended.  Or, we don't shut off registrations, we're deluged, and now everyone thinks a "rationalist" is an "undiscriminating skeptic."  We've lost the opportunity to get through to them, possibly for good.  Will they ever become more rational?  LessWrong wants to make the world a more rational place.  An opportunity to accomplish that goal could arise: Eliezer figured out a way to make rationality popular, millions of people have read his work, and this could go even bigger.

  This is why I suggested two discussion areas - then we get to keep this culture and also have an opportunity to experiment with ways for the people who are not naturals at it to learn faster.  If we succeed in figuring out how to get through to them, we will know that the deluge will be constructive, if one happens.  Then, we can even invite one on purpose.  We can even advertise for that and I'd be happy to help.  But if we don't start with eternal September protection, we could lose all this progress, lose our chance to get through to the mainstream, and pass like a fad.

  For that reason, even if eternal September doesn't look likely to you after everything that I've explained above, I say it is still worthwhile to develop a tested technique to preserve LessWrong culture against a deluge and get through to those who are not naturals.  Not doing so takes a risk with something important.

 

Please critique.

  Your honest assessments of my ideas are welcome, always.