Near-Term Risk: Killer Robots a Threat to Freedom and Democracy
A new TED talk by Daniel Suarez, author of Daemon, explains how autonomous combat drones with a capability called "lethal autonomy" pose a threat to democracy. Lethal autonomy is exactly what it sounds like: the ability of a robot to kill a human without a human making the decision.
He explains that a human decision-maker is not necessary for combat drones to function. This has potentially catastrophic consequences, because it would allow a small number of people to concentrate a very large amount of power, undermining the checks and balances between governments and their people, and between different branches of government. According to Suarez, about 70 countries have begun developing remotely piloted drones (like the Predator), the precursor to killer robots with lethal autonomy.
Daniel Suarez: The kill decision shouldn't belong to a robot
One thing he didn't mention in this video is the difference in obedience between human soldiers and combat drones. Drones are completely obedient, but humans can revolt. Because they can rebel, human soldiers put some obstacles in the way of the power that would-be tyrants could otherwise obtain. Drones provide no such protection whatsoever. Obviously, relying on human decision making is not perfect. Someone like Hitler can convince people to make terrible ethical choices - but they still need to be convinced, and that requirement may play a major role in protecting us. Consider this: it's unthinkable that today's American soldiers might suddenly decide this evening to follow a tyrannical leader whose goal is total power and the murder of all who oppose him. It is not at all unthinkable that the same tyrant, if empowered by an army of combat drones, could launch such an attack without risking a mutiny. The number and variety of power grabs a tyrant with a sufficiently powerful robot army could get away with is essentially unlimited.
Something else he didn't mention is that because technologies can be optimized more easily than humans, it may be possible to produce killer robots faster and more cheaply than armies of human soldiers. Considering the salaries and benefits paid to soldiers and the eighteen-year lead time on human development, an overwhelmingly large army of killer robots could plausibly be built more quickly than a human army and with fewer resources.
Suarez's solution is to push for legislation that makes producing robots with lethal autonomy illegal. There are, obviously, pros and cons to this method. Another method (explored in Daemon) is that if the people have 3-D printers, then the people may be able to produce comparable weapons which will then check and balance their government's power. This method has pros and cons as well. I came up with a third method which is here. I think it's better than the alternatives but I would like more feedback.
As far as I know, no organization, not even MIRI (I checked), is dedicated to preventing the potential political disasters caused by near-term tool AI (MIRI is interested in the existential risks posed by AGI). That means it's up to us - the people - to develop our understanding of this subject and spread the word to others. Of all the forums on the internet, LessWrong is one of the most knowledgeable when it comes to artificial intelligence, so it's a logical place to fire up a discussion on this. I searched LessWrong for terms like "checks and balances" and "Daemon" and I just don't see evidence that we've done a group discussion on this issue. I'm starting by proposing and exploring some possible solutions to this problem and some pros and cons of each.
To keep things organized, let's put each potential solution, pro and con into a separate comment.
Poll - Is endless September a threat to LW and what should be done?
Various people raised concerns that growth might ruin the culture after reading my "LessWrong could grow a lot" thread. There has been some discussion about whether endless September, a phenomenon that kills online discussion groups, is a significant threat to LessWrong and about what can be done. I care a lot about this, so I volunteered to code a solution myself, for free if needed. Luke invited debate on the subject (the debate is here) and will be sent the results of this poll and asked to make a decision. He suggested in an email that I wait a little while before posting my poll (meta threads are apparently annoying to some, so we let people cool off). Here it is, preceded by a Cliff's notes summary of the concerns.
Why this is worth your consideration:
- Yvain and I checked the IQ figures in the survey against other data this time, and the good news is that it's now more believable that the average LessWronger is gifted. The bad news is that LessWrong's IQ average has decreased on each survey. It can be argued that it's not decreasing by much, or that we don't have enough data, but if the data is good, LessWrong's average has lost 52% of its giftedness since March of 2009.
- Eliezer documented the arrival of poseurs (people who superficially copy cultural behaviors and are reported to overrun subcultures), whom he termed "Undiscriminating Skeptics".
- Efforts to grow LessWrong could trigger an overwhelming deluge of newbies.
- LessWrong registrations have been increasing fast and it's possible that growth could outstrip acculturation capacity. (Chart here)
- The Singularity Summit appears to cause a deluge of new users that may have a similar effect to the September deluges of college freshmen that endless September is named after. (This chart shows a spike correlated with the 2011 summit: 921 users joined that month, which is roughly equal to the total number of active users LW tends to have in a month, whether you go by the surveys or by Vladimir's wget.)
- A Slashdot effect could result in a tsunami of new users if a publication with lots of readers like the Wall Street Journal (they used LessWrong data in this article) decides to write an article on LessWrong.
- The sequences contain a lot of the culture and are long, meaning that "TLDR" may make LessWrong vulnerable to cultural disintegration. (New users may not know how detailed LW culture is or that the sequences contain so much of it. I didn't.)
- Eliezer said in August that the site was "seriously going to hell" due to trolls.
- A lot of people raised concerns.
Two Theories on How Online Cultures Die:
Overwhelming user influx.
There are too many new users to be acculturated by older members, so they form their own, larger new culture and dominate the group.
Trending toward the mean.
A group forms because people who are very different want a place to be different together. The group attracts more people who are closer to the mainstream than people who are equally different, because there are more mainstream people than different people. The larger group then attracts people who are even less different in the original group's way, for similar reasons. The original group is slowly overwhelmed by people who will never understand it, because they are too different from its founders.
Poll Link:
Request for Feedback:
In addition to constructive criticism, I'd also like the following:
- Your observations of a decline or increase in quality, culture, or enjoyment at LessWrong, if any.
- Ideas to protect the culture.
- Ideas for tracking cultural erosion.
- Ways to test the ideas to protect the culture.
Female Test Subject - Convince Me To Get Cryo
I've heard that women are difficult to convince when it comes to signing up for cryo, and there seems to be a consensus that mentioning cryonics to a dying person is not going to work. I encountered a post, Years saved: Cryonics vs VillageReach, which addressed my main objection (that the money spent on cryo might be better spent saving starving children, especially since you could save multiple children with high probability for the amount that buys one life with low probability through cryo). Now I'm open to being persuaded.
My first instinct was to go read a lot about cryo, but it dawned on me that there are a lot of people here who will want to convince family members, some of them female, to sign up - and these people may appreciate the opportunity to practice on somebody. It has been argued that "Brilliant and creative minds have explored the argument territory quite thoroughly." But if we already know all of the objections and have working rebuttals for each, why is it still thought to be extra difficult to get through to women? If there were a solution, it would not be seen as difficult. There must be something pro-cryo people need for persuading women that they either haven't figured out or aren't yet good enough at.
So, I decided to offer myself for experiments in attempting to convince a woman to sign up for cryo and took a poll in an open thread to see whether there was interest. I don't claim to be perfectly representative of the female population, but I assume that I will have at least some objections in common with them and that persuading me would still be good practice for anyone planning to convince family members in the future. Having a study on persuading women would be more scientific but how do you come up with hypotheses to test for such a study if you have no actual experience persuading women?
So, here is your opportunity to try whatever methods of persuasion you feel like, guilt-free, and to explore my full list of objections without worrying about social awkwardness (I will even share cached religious thoughts, as annoyed as I am that I still have them). In return, I will document as many of my impressions and objections as I can before I forget them.
I am putting each objection / impression into a new comment for organization. Also, I have decided to avoid reading anything further on cryo, until/unless it is suggested by one of my persuaders.
Well, have fun getting inside my head.
Elitism isn't necessary for refining rationality.
Note: After writing this post, I realized there's a lot I need to learn about this subject. I've been thinking a lot about how I use the word "elitism" and what it meant to me. I was unaware that a large number of people use the word to describe themselves and mean something totally different from the definition I had. This resulted in my perception that people who were using the word to describe themselves were being socially inept. I now realize that it's not a matter of social ineptness; it may be more a matter of political sides. I also realized that mind-kill reactions may be influencing us here (myself included). So now my goal is to understand both sides thoroughly, to transcend these mind-kill reactions, and to explain to others how I accomplished this so that none of us has to have them. I think these sides can get along better. That is what I ultimately want - for the gifted population and the rest of the world to understand one another better, for the privileged and the disadvantaged to understand one another better, and for the tensions between those groups to be reduced so that we can work together effectively. I realize that this is not a simple undertaking, but it is a very important problem to me, and I see it being an ongoing project in my life. If I don't seem to understand your point of view on this topic, please help me update. I want to understand it.
TLDR: OMG a bunch of people seem to want to use the word "elitist" to describe LessWrong but I know that this can provoke hatred. I don't want to be smeared as an elitist. I can't fathom why it would be necessary for us to call ourselves "elitists".
I have noticed a current of elitism on LessWrong. I know that not every person here is an elitist, but there are enough people here who seem to believe elitism is a good thing (13 upvotes!?) that it's worth addressing this conflict. In my experience, the word "elitism" is a triggering word - it's not something you can use easily without offending people. Acknowledging intellectual differences is a touchy subject also, very likely to invite accusations of elitism. From what I've seen, I'm convinced that using the word "elitism" casually is a mistake, and referring to intellectual differences incautiously is also risky.
Here, I analyze the motives behind the use of the word elitism, make a suggestion for what the main conflict is, mention a possible solution, talk about whether the solution is elitist, what elitism really means, and what the consequences may be if we allow ourselves to be seen as elitists.
The theme I am seeing echoed throughout the threads where elitist comments surfaced is "We want quality" and "We want a challenging learning environment". I agree that quality goals and a challenging environment are necessary for refining rationality, but I disagree that elitism is needed.
I think the problem comes in at the point where we think about how challenging the environment should be. There's a conflict between the website's main vision: spreading rationality (detailed in: Rationality: Common Interest of Many Causes) and striving for the highest quality standards possible (detailed in Well-Kept Gardens Die By Pacifism).
If the discussions are geared for beginners, advanced people will not learn. If the discussions are geared for advanced people, beginners are frustrated. It's built into our brains. Psychologist Mihaly Csikszentmihalyi, author of Flow: The Psychology of Optimal Experience, regards flow - the feeling of motivation and pleasure you get when you're appropriately challenged - as the secret to happiness, and he explains that if you aren't appropriately challenged, you will feel either bored or frustrated, depending on whether the challenge is too small or too great for your ability level.
Because our brains never stop rewarding and punishing us with flow, boredom and frustration, we strive for that appropriate challenge constantly. Because we're not all at the same ability level, we're not all going to flow during the same discussions. We can't expect this to change, and it's nobody's fault.
This is a real conflict, but we don't have to choose between the elitist move of blocking everyone who isn't at our level and the flow-killing move of letting the challenge level in discussions decrease to the point of everyone's apathy - we can solve this.
Why bother to solve it? If your hope is to raise the sanity waterline, you cannot neglect those who are interested in rational thought but haven't yet gotten very far. Doing so would limit your impact to a small group, failing to make a dent in overall sanity. If you neglect the small group of advanced rationalists, then you've lost an important source of rational insights that people at every level might learn from and you will have failed to attract the few and precious teachers who will assist the beginners in developing further faster.
And there is a solution, summarized in one paragraph: make several areas divided by level of difficulty. Advanced learners can learn in the advanced area, beginners in the beginner area. That way everyone learns. Not every advanced person is a teacher, but if you put a beginner area and an advanced area on the same site, some people from the advanced area will help get the beginners further. One-on-one teaching isn't the only option - advanced people might write articles for beginners and get through to thousands at once. They might write practice quizzes for them to do (not hard to implement from a web developer's perspective - see the sketch below). There are other things. (I won't get into them here.)
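To back up the claim that quizzes are easy to implement, here is a minimal sketch in Python. The questions, answer format, and scoring rule are all invented for the example; this is not a real LessWrong feature.

```python
# A minimal practice-quiz sketch. The question set and scoring
# rules are hypothetical, just to show how little machinery a
# beginner quiz would need.

QUIZ = [
    {
        "question": ("A test for a disease affecting 1 in 1000 people has a "
                     "5% false-positive rate. You test positive. Is the "
                     "chance you have the disease closer to 95% or 2%?"),
        "choices": ["95%", "2%"],
        "answer": 1,  # index into choices
    },
    {
        "question": "Which fallacy is judging an argument by who made it?",
        "choices": ["Ad hominem", "Straw man"],
        "answer": 0,
    },
]

def run_quiz(quiz, get_answer):
    """Score a quiz; get_answer(question, choices) returns a choice index."""
    score = 0
    for item in quiz:
        if get_answer(item["question"], item["choices"]) == item["answer"]:
            score += 1
    return score, len(quiz)

if __name__ == "__main__":
    def ask(question, choices):
        print(question)
        for i, choice in enumerate(choices):
            print(f"  {i}: {choice}")
        return int(input("Your answer: "))

    score, total = run_quiz(QUIZ, ask)
    print(f"You scored {score}/{total}.")
```

A real version would obviously need persistence and a web front end, but the core of the feature is no more than this.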
This brings me to another question: if LessWrong separates the learning levels, would the separation qualify as elitism?
I think we can all agree that people don't learn well in classes that are too easy for them. If you want advanced people to improve, it's an absolute necessity to have an advanced area. I'm not questioning that. I'm questioning whether it qualifies under the definition of elitism:
e·lit·ism
1. Practice of or belief in rule by an elite.
2. Consciousness of or pride in belonging to a select or favored group.
Spreading rationality empowers people. If you wanted to take power over them, you'd hoard it. By posting our rational insights in public, we share them. We are not hoarding them and demanding to be made rulers because of our power. We are giving them away and hoping they improve the world.
Using rationality as a basis for rule makes no sense anyway. If you have a better map of the territory, people should update because you have a better map (assuming you overcome inferential distances). Forcing an update because you want to rule would only amount to an appeal to authority or coercion. That's not rational. If you show them a more complete map and they update, that isn't about you - you should be updating your map when the time comes, too. It's the territory that rules us all. You are only sharing your map.
For the second definition, there are two pieces. "Consciousness of or pride in" and "select or favored group". I can tell you one thing for certain: if you form a group of intellectual elitists, they will not be considered "select or favored" by the general population. They will be treated as the scum on the bottom of scum's shoe.
For that reason, any group of intellectual elitists will quickly become an oxymoron. First, they'll have to believe that they are "select and favored" when they are not, and perhaps justify this with "we are so deserving of being select and favored that no one can see it but us" (which may make them hopelessly unable to update). Second, the attitude of superiority is likely to provoke such anti-intellectual counter-prejudice that the resulting oppression could make them ineffectual. Powerless to get anywhere because they are so hated, their "superiority" will make them into second class citizens. You don't achieve elite status by being an intellectual elitist.
In the event that LessWrong was considered "select" or "favored" by the outside population, would "consciousness" of that qualify the members as elitists? If you use the literal definition of "consciousness", you can claim a literal "yes" - but it would mean that simply acknowledging a (hypothetical) fact (independent market research surveys, we'll say) should be taken as automatic proof that you're an arrogant scumbag. That would be committing Yvain's "worst argument in the world", guilt by association. We can't assume that everyone who acknowledges popularity or excellence is guilty of wrongdoing.
So let's ask this: Why does elitism have negative connotations? What does it REALLY mean when people call a group of intellectuals "elitists"?
I think the answer to this is in Jane Elliott's brown-eyes/blue-eyes experiment. If you're not familiar with it: a school teacher named Jane Elliott, horrified by the assassination of Martin Luther King, Jr., decided to teach her class a lesson about prejudice. She divided the class into two groups - brown eyes and blue eyes - and told them things like "brown-eyed kids are smarter and harder-working than blue-eyed kids." The children reacted dramatically.
When people complain of elitism, what they seem to be reacting to is a concern that feeling "better than others" will be used as an excuse for abuse - either via coercion, or by sabotaging their sense of self-worth and intellectual performance.
The goal of LessWrong is to spread rationality in order to make a bigger difference in the world. This has nothing to do with abusing people. Just because some people with advanced abilities choose to use them as an excuse to abuse other people, it doesn't mean that anybody here has to do that. Just because some of us might have advanced abilities and are also aware of them does not mean we need to commit Yvain's "the worst argument in the world" by assuming the guilt that comes with elitism. We can reject this sort of thinking. If people tell you that you're an elitist because you want a challenging social environment to learn in, or because you want to make the project that is the LessWrong blog as high quality as it can be, you can refuse to be labeled guilty.
Refusing to be guilty by association takes more work than accepting the status quo, but what would happen if we allowed ourselves to be disrespected for challenging ourselves and striving for quality? If we agree with our accusers, we're treating positive character traits as part of a problem. That encourages people to shoot themselves in the foot - and to point that same gun at all of humanity's potential, demanding that nobody seek the challenging social environment they need to grow and that nobody set learning goals to strive for, because quality standards are "elitist". Allowing a need for challenges and standards to be smeared as elitist will only hinder the spread of rationality.
How many may forgo refining rationality because they worry it will make them look like an elitist?
These are the reasons I choose to be non-abusive and to send a message to the world that non-abusive intellectuals exist.
What do you think of this?
Call For Agreement: Should LessWrong have better protection against cultural collapse?
As you are probably already aware, many internet forums experience a phenomenon known as "eternal September". Named after a temporary effect in which the influx of college freshmen would throw off a group's culture every September, eternal September is essentially what happens when standards of discourse and behavior degrade to the point where the group loses its original culture. I began focusing on solving this problem and offered to volunteer my professional web services to get it done because:
- When I explained that LessWrong could grow a lot and volunteered to help with growth, various users expressed concerns about growth not always being good because having too many new users at once can degrade the culture.
- There has been concern from Eliezer about the site "going to hell" because of trolling.
- Eliezer has documented a phenomenon that subcultures know as infiltration by "poseurs" happening in the rationalist community. He explains that rationalists are beginning to be inundated by "undiscriminating skeptics", and he has stated that it's bad enough that he needed to change his method of determining who is a rationalist. The appearance of poseurs doesn't guarantee that a culture will be washed away by mainstreamers, but it may signal that a culture is headed in that direction, and it does confirm that a loss of culture is a possibility - especially if there came to be so many undiscriminating skeptics that they formed their own culture and became the new majority at LessWrong.
My plan to prevent eternal September sparked a debate about whether eternal September protection is warranted. Lukeprog, being the decision maker whose decision is needed for me to be allowed to do this as a volunteer, requested that I debate this with him because he was not convinced but might change his mind.
Here are some theories about why eternal September happens:
1. New to old user ratio imbalance:
New users need time to adjust to a forum's culture. Getting too many new users too fast will throw off the ratio of new to old users, meaning that most new users will interact with each other rather than with older users, changing the culture permanently.
2. Groups tend to trend toward the mainstream:
Imagine some people want to start a group. Why are they breaking away from the mainstream? Because their needs are being served there? Probably not. They most likely have some difference that makes them want a group of their own. Of course, no one fits neatly into "different" and "mainstream", no matter what kind of difference you look at. So as a forum grows, instead of attracting only people who fit squarely into the "different" category, it attracts people who are merely similar to them. People at the far mainstream end of the spectrum generally aren't attracted to things that are very different.

But imagine how this progresses over time. I'll create a scale between green and purple: green people are different, purple people are mainstream. Some of the greenest folks make a green forum. Next, people who are green-ish - green with an extra tinge of red or blue or yellow - join. People in the mainstream still aren't attracted. However, since there are more in-between people than solid green or purple people, the most greenish in-between people begin to dominate. They and the original green people still enjoy conversation; they're similar enough to share the culture and enjoy mutual activities. But the greenish in-between people attract in-between people who are neither more purple nor more green. There are more of those than there are greenish or green people, because purple dominates the larger culture, so in-between people quickly outnumber the green people. This may still be fine; they may adjust to the culture and enjoy it as a refreshing alternative to purple culture. But the in-between people attract purplish in-betweeners, who outnumber them in turn, and the culture shifts closer to mainstream purple than to different green.

At this point, it attracts the attention of the solid purple mainstreamers. "Oh! Our culture, but with a twist!" they think. Droves of purple mainstream people deluge the place looking for "something a little different". Instead of valuing the culture and wanting to assimilate, they just want to enjoy the novelty. So they demand changes to the things they don't like, to make it suit them better. They justify this by saying they're the majority. At that point, they are. (A toy simulation of this drift appears after the list of theories.)
3. Too many trolls scare away good people and throw off the balance.
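To make the second theory concrete, here is a toy simulation. Everything in it is an assumption invented for illustration - a one-dimensional "green/purple" trait, a mostly-purple population, and a rule that newcomers only join if they are close to the group's current average - but it shows how the group average can ratchet steadily toward the mainstream even though every individual joiner felt at home when they arrived.

```python
# Toy model of "trending toward the mean" (theory 2 above).
# Assumptions invented for illustration: each person has one trait
# in [0, 1], where 0 = "green" (different) and 1 = "purple"
# (mainstream); the overall population skews heavily purple; and a
# candidate joins only if they are within `tolerance` of the
# group's current average (close enough to feel at home).
import random

random.seed(0)

def population_sample():
    # Mostly-mainstream population, skewed toward 1.0 ("purple").
    return random.betavariate(4, 1.5)

def simulate(founders=20, candidates=2000, tolerance=0.15):
    group = [random.uniform(0.0, 0.1) for _ in range(founders)]  # very green
    for _ in range(candidates):
        avg = sum(group) / len(group)
        person = population_sample()
        if abs(person - avg) <= tolerance:  # feels at home; joins
            group.append(person)
    return sum(group) / len(group)

# The group average creeps toward purple as more candidates pass by.
for n in (0, 100, 500, 2000):
    print(n, "candidates -> group average", round(simulate(candidates=n), 2))
```

No single generation of joiners is wildly out of place; the drift only shows up in the aggregate, which is part of what makes it hard to resist.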
Which theory is right?
All of them likely play a role.
I've seen for myself that trolls can scare the best people out of a forum, ruining the culture.
I've heard time and time again that subculture movements have problems with being watered down by mainstream folks until their cultures die and no longer feel worth it to the original participants. A lot of you have probably heard the term "poseurs". With poseurs in a subculture, it's not that too many new people joined at once, but that the wrong sort of people joined. The view is that there are people who are different enough to "get" the movement, and people who are not. Those who aren't similar try to appear like them even though they're not like them on the inside. Essentially, a large number of people much nearer to the mainstream got involved, so the group was no longer a haven for people with their differences.
And I think it's a no-brainer that if a group gets enough newbies at once, old members can't help them adjust to the culture, and the newbies will form a new culture and become a new majority.
Also, I think all of these can combine together, create feedback loops, and multiply the others.
Theory about cause and effect interactions that lead to endless September:
1. A group of people who are very different break away from the mainstream and form a group.
2. People who are similarly different but not AS different join the group.
3. People who are similar to the similarly different people, but even less similar to the different people join the group.
4. It goes on this way for a while. Since there are necessarily more people who are mainstream than different, new generations of new users may be less and less like the core group.
5. The group of different people begins to feel alienated from the new people who are joining.
6. The group of different people begins to ignore the new people.
7. The new people form their own culture with one another, excluding old people, because the old people are ignoring them.
8. Old people begin to anticipate alienation and start to see new users through tinted lenses, expecting annoyance.
9. New people feel alienated by the insulting misinterpretations that are caused by the expectation that they're going to be annoying.
10. The unwelcoming environment selects for thick-skinned people, so a higher proportion of the active users are trolls, leaders, spammers, debate junkies, and the like.
11. Enough new people who were ignored and failed to acculturate accumulate, resulting in a new majority. If trolls are kept under control, the new culture will be a watered-down version of the original culture, possibly not much different from mainstream culture. If not, see the final possibility.
12. If a critical mass of trolls, spammers, and other alienating thick-skinned types is reached, due to an imbalance or inadequate methods of dealing with them, they might ward off old users - exacerbating the imbalance that draws a disproportionate number of thick-skinned types, in a feedback loop - and then take over the forum. (This is why 4chan's /b/ isn't known for having sweet little girls and old ladies.)
Is LessWrong at risk?
1. Eliezer has written about rationalists being infiltrated by main-streamers who don't get it, aka "poseurs".
Eliezer explains in Undiscriminating Skeptics that he can no longer determine who is a rationalist based on how they react to the prospect of religious debates; now he has to determine who is a rationalist based on who is thinking for themselves. This is the exact same problem other subcultures have - they say the new people aren't thinking for themselves. We might argue "but we want to spread the wonderful gift of rational thought to the mainstream!" and I would agree with that. However, if all newcomers take away from joining is that there are certain things skeptics always believe, all they'll be taking away from us is an appeal to skepticism. That's the kind of thing that happens when subcultures are overrun by mainstream folks: they do not adopt the core values; instead, they run roughshod over them. If we want undiscriminating skeptics to benefit from refining the art of rationality, we have to do something more than hang out in the same place. Telling them that they are poseurs doesn't work for subcultures, and I don't think Eliezer telling them that they're undiscriminating skeptics will solve the problem. Getting people to think for themselves is a challenge that should not be undertaken lightly. To really get it, and actually base your life on rationality, you've either got to be the right type - a "natural" who "just gets it" (like Eliezer, who showed signs as a child when he found a tarnished silver amulet inscribed with Bayes's Theorem) - or you have to be really dedicated to self-improvement.
2. I have witnessed a fast-growing forum actually go exponential. Nothing special was being done to advertise the forum.
Obviously, this risks deluging old members in a sea of newbies that would be large enough to create a newbie culture and form a new majority.
3. LessWrong is growing fast and it's much bigger than I think everyone realizes.
I made a LessWrong growth bar graph showing how LessWrong has gained over 13,000 members in under 3 years (Nov 2009 - Aug 2012). LessWrong had over 3 million visits in the last year, and the most popular post has gotten over 200,000 views. Yes, there are posts here that are over a fifth of the way to a million views; I did not mistype. This is not a tiny community website anymore, but I see signs that people are still acting as if it were, like posting their email addresses on the forum. People don't seem to realize how big LessWrong has gotten. Since this happened in a short time, we should be wondering how much further it will go, and planning for the contingency that it could become huge.
4. LessWrong has experienced at least one wild spike in membership. Spikes can happen again.
We can't control the ups and downs in visitors to the site. A spike could happen again, and it could last longer than a month. According to Vladimir, using wget, we've got something like 600-1000 active users posting per month, and the registration statistics show about 300 users joining per month. What would happen if we got 900 new users each month for a few months in a row? A random spike could conceivably overwhelm the members.
5. Considering how many readers it has, LessWrong could get Slashdotted by somebody big.
If you've ever read about the Slashdot effect, you'll know that all it might take to get a deluge bigger than we can handle is a link from somebody big. What if Slashdot links to LessWrong? Or somebody even bigger? We have at least one article on LessWrong that got about half as many visits as a hall-of-fame-level Slashdot article: "Scientologists Force Comment Off Slashdot" got 383,692 visits on Slashdot, compared with LessWrong's most popular article at 211,000 visits. (Cite: Slashdot hall of fame.) LessWrong is gaining popularity fast. It's not a small site anymore, and there are a lot of places that could Slashdot us. It may be just a matter of time before somebody pays attention, writes an article on LessWrong, and the site gets flooded.
6. We all want to grow LessWrong, and people may cause rapid growth before thinking about the consequences.
What if people start growing LessWrong and wildly succeed? I would like to be helping LessWrong grow but I don't want to do it until I feel the culture is well-protected.
7. Some combination of these things might happen and deluge old people with new people.
Does LessWrong need additional eternal September protection?
Lukeprog's main argument is that we don't have to worry about eternal September because we have vote downs. Here's why vote downs are not going to protect LessWrong:
1. If the new to old user ratio becomes unbalanced, or the site is filled with main streamers who take over the culture, who is going to get voted down most? The new users, or the old ones? The old members will be outnumbered, so it will likely be old members.
2. This doesn't prevent new users from interacting primarily with new users. If enough people join, there may not be enough old users doing vote downs to discourage them anymore. That means if the new to old user ratio were to become unbalanced, new users may still interact primarily with new users and form their own, larger culture, a new majority.
3. Let's say 4chan's /b/ decides to visit, and a hundred trolls descend upon LessWrong. The trolls, like everybody else, have the ability to vote down anything they want, and they will enjoy harassing us endlessly with downvotes. They will especially enjoy the fact that it only takes three of them to censor somebody, and they will find it a really special treat that anybody who responds to a censored person gets points deducted. From a security perspective, this is probably one of the worst things you could do. I came up with an idea for a much improved vote down plan.
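To illustrate why a raw-count threshold is so easy to game, here is a small sketch in Python contrasting it with one common hardening idea: weighting each downvote by the voter's established reputation. The thresholds, weights, and field names are all assumptions for the example - this is neither LessWrong's actual code nor the specific plan linked above.

```python
# Why "three downvotes hide a comment" is gameable, and one common
# hardening idea (reputation-weighted votes). All numbers and field
# names are assumptions for the example.

RAW_THRESHOLD = 3        # accounts needed to hide a comment today
WEIGHT_THRESHOLD = 3.0   # weighted "mass" needed under the alternative

def hidden_raw(downvoters):
    """Current scheme: any three accounts, however new, hide a comment."""
    return len(downvoters) >= RAW_THRESHOLD

def hidden_weighted(downvoters):
    """Alternative: a vote's weight grows with earned karma, so a swarm
    of fresh troll accounts carries almost no weight."""
    def weight(voter):
        return min(1.0, voter["karma"] / 100.0)  # 100+ karma = full vote
    return sum(weight(v) for v in downvoters) >= WEIGHT_THRESHOLD

trolls = [{"karma": 0} for _ in range(100)]    # 100 brand-new accounts
regulars = [{"karma": 500} for _ in range(3)]  # 3 established members

print(hidden_raw(trolls))        # True  -- a raid censors at will
print(hidden_weighted(trolls))   # False -- 100 troll votes weigh nothing
print(hidden_weighted(regulars)) # True  -- 3 trusted votes still work
```

The point is not this particular weighting scheme, but that any threshold counting raw heads hands the censorship power to whoever can create the most accounts.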
Possibly more important: What happens if we DO prevent an eternal September?
What we are deciding here is not simply "do we want to protect this specific website from cultural collapse?" but "How do we want to introduce the art of refining rationality to the mainstream public?"
Why do main streamers deluge new cultures and what happens after that? What do they get out of it? How does it affect them in the long-term? Might being deluged by main streamers make it more likely for main streamers to become better at rational thought, like a first taste makes you want more?
If we kept them from doing that, what would happen, then?
Say we don't have a plan. LessWrong is hit by more users than it can handle. Undiscriminating skeptics are voting down every worthwhile disagreement. So, as an emergency measure, registrations are shut off; the number of visits to the website grows and then falls. We succeed in keeping out people who don't get it. After the traffic has peaked, the fad is over. Worse, we've put those people off and they're offended. Or, we don't shut off registrations, we're deluged, and now everyone thinks that a "rationalist" is an "undiscriminating skeptic". We've lost the opportunity to get through to them, possibly for good. Will they ever become more rational? LessWrong wants to make the world a more rational place, and an opportunity to accomplish that goal could arrive. Eliezer figured out a way to make rationality popular; millions of people have read his work. This could go even bigger.
This is why I suggested two discussion areas - then we get to keep this culture and also have an opportunity to experiment with ways for the people who are not naturals at it to learn faster. If we succeed in figuring out how to get through to them, we will know that the deluge will be constructive, if one happens. Then, we can even invite one on purpose. We can even advertise for that and I'd be happy to help. But if we don't start with eternal September protection, we could lose all this progress, lose our chance to get through to the mainstream, and pass like a fad.
For that reason, even if eternal September doesn't look likely to you after everything that I've explained above, I say it is still worthwhile to develop a tested technique to preserve LessWrong culture against a deluge and get through to those who are not naturals. Not doing so takes a risk with something important.
Please critique.
Your honest assessments of my ideas are welcome, always.
Preventing discussion from being watered down by an "endless September" user influx.
In the thread "LessWrong could grow a lot, but we're doing it wrong.", I explained why LessWrong has the potential to grow quite a lot faster in my opinion, and volunteered to help LessWrong grow. Of course, a lot of people were concerned about the fact that a large quantity of new members will not directly translate to higher quality contributions or beneficial learning and social experiences in discussions, so I realized it would be better to help protect LessWrong first. I do not assume that fast growth has to cause a lowering of standards. I think fast growth can be good if the right people are joining and all goes well (specifics herein). However, if LessWrong grows carelessly, we could be inviting an "Endless September", a term used to describe a never ending deluge of newbies that "degraded standards of discourse and behavior on Usenet and the wider Internet" (named after a phenomenon caused by an influx of college freshmen). My perspective on this is that it could happen at any time, regardless of whether any of us does anything. Why do I think that? LessWrong is growing very fast and could snowball on it's own. I've seen that happen, I saw it ruin a forum. That site wasn't even doing anything special to advertise the forum that I am aware of. The forum was just popular and growth went exponential. For this reason, I asked for a complete list of LessWrong registration dates in order to make a growth chart. I received it on 08-23-2012. The data shows that LessWrong has 13,727 total users, not including spammers and accounts that were deleted. From these, I have created a LessWrong growth bar graph:

Each bar represents a one-month total of registration dates (the last bar is a little short, since it only goes up to the 23rd). The number of pixels in each bar is equal to the number of registrations that month. The first (leftmost) bar that hits the top of the picture (it actually goes waaaay off the page) mostly represents the transfer of over 2000 accounts from Overcoming Bias. The rightmost bar that goes off the page remains unexplained to me - 921 users joined in September 2011, more than three times the number in the months before and after it. If you happen to know what caused that, I would be very interested in finding out. (No, September 2010 does not stand out, if you were wondering.) If anyone wants to do different kinds of analysis, I can generate more numbers and graphs fairly easily.
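For anyone who wants to reproduce or extend this kind of analysis, here is a minimal sketch of the binning step in Python. It assumes a plain text file of registration dates, one ISO-formatted date per line; the filename, and the format itself, are assumptions, since I haven't described the exact form the data was delivered in.

```python
# Minimal sketch of the monthly binning behind a registration graph
# like the one above. Assumes one ISO date (YYYY-MM-DD) per line;
# the filename is hypothetical.
from collections import Counter
from datetime import datetime

def monthly_counts(path):
    counts = Counter()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                d = datetime.strptime(line, "%Y-%m-%d")
                counts[(d.year, d.month)] += 1
    return counts

if __name__ == "__main__":
    counts = monthly_counts("registration_dates.txt")
    for year, month in sorted(counts):
        n = counts[(year, month)]
        # One text "pixel" per 10 registrations, echoing the bar graph.
        print(f"{year}-{month:02d} {'#' * (n // 10)} ({n})")
```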
As you can see, LessWrong has experienced pretty rapid growth.
Growth is in a downward trend at the moment, but as you can see from the wild spikes all over the chart, this could change at any time. In addition to LessWrong growing on its own, other events that could trigger an "endless September" effect are:
LessWrong could be linked to by somebody really big (see: Slashdot effect on Wikipedia).
LessWrong could end up on the news after somebody does something news worthy or because a reporter discovers LessWrong culture and finds it interesting or weird.
(A more detailed explanation is located here.)
For these reasons, I feel it is a good idea to begin constructing endless September protection, so I have volunteered some of my professional web services to get it done. This has to be done carefully because if it is not done right, various unwanted things may happen. I am asking for any ideas or links to ideas you guys have that you think were good and am laying out my solutions and the pitfalls I have planned for below in order to seek your critiques and suggestions.
Cliff Notes Version:
I thought this out quite a bit, because I think it's going to be tricky and because it's important. So I also wrote a Cliff Notes version of the solution ideas below, with pros and cons for each, which is about a tenth the size.
The most difficult challenge and my solution:
People want the site to be enriching for those who want to learn better reasoning but haven't gotten very far yet.
People also want an environment where they can get a good challenge, where they are encouraged to grow, where they can get exposed to new ideas and viewpoints, and where they can get useful, constructive criticism.
The problem is that a basic desire all humans seem to share is the desire to avoid boredom. There is possibly a survival reason for this: there is no way to know everything, but missing even one piece of information can spell disaster. This may be why the brain appears to have evolved built-in motivators that prod you to learn constantly. From the mild ecstasy of the flow state (see Flow: The Psychology of Optimal Experience) to tedium, we are constantly being punished and rewarded according to whether we're receiving the optimal challenge for our level of ability.
This means that those who are here for a challenge aren't going to spend their time being teachers for everybody who wants to learn. Not everyone has a teacher's personality and skill set to begin with, and some people who teach do it as writers, explaining to many thousands, rather than by explaining it one-to-one. If everyone feels expected to teach by hand-holding, most will be punished by their brains for not learning more themselves, and will be forced to seek a new learning environment. If beginners are locked out, we'll fail at spreading rationality. The ideal is to create an environment where everyone gets to experience flow, and no one has to sacrifice optimal challenge.
To make this challenge a bit more complicated, American culture (yes, a majority of the visits, 51.12%, come from the USA - I have access to the Google Analytics) can get pretty touchy about elitism and anti-intellectualism. Even though the spirit of LessWrong - wanting to promote rational thought - is not elitist but inherently the opposite (increasing good decision making in the world "spreads the wealth" rather than hoarding it or demanding privileges for being capable of good decisions), there is a risk that people will see this place as elitist. And even though self-improvement is inherently non-pretentious (by choosing to do self-improvement, you're admitting that you've got flaws), there will undoubtedly be a large number of people who might really benefit from learning here but instead insta-judge the place as "pretentious". Interpreting everything intellectual as pretentious and elitist is an unfortunate habit in our culture. I think, with the right wording on the most prominent pages (about us, register, home page, etc.), LessWrong can be presented as a uniquely non-elitist, non-pretentious place.
For these reasons, I am suggesting multiple discussion areas that are separated by difficulty levels. Presenting them as "Easy and Hard" will do three things:
1. Serve as a reminder to those who attend that it's a place of learning where the objective is to get an optimal challenge and improve as far as possible. This would help keep it from coming across as pretentious or elitist.
2. Create a learning environment that's open to all levels, rather than a closed, elitist environment or one that's too daunting. The LessWrong discussion area is a bit daunting to new users, so an "easy" discussion area where they can learn without intimidation might be really desirable.
3. Give us an opportunity to experiment with approaches that help willing people learn faster.
Endless September protection should be designed to avoid causing these side-effects:
Creating an imbalance in the proportion of thick-skinned individuals to normal individuals.
Anything that annoys, alienates or discourages users is going to deter a lot of people while retaining thick-skinned individuals. Some thick-skinned individuals are leaders, but many are trolls, and thick-skinned individuals may be more likely to resist acculturation or try to change the culture (though it could be argued the other way - that their thick skin allows them to take more honest feedback). For example: anonymous, unexplained down votes create a gauntlet for new users to endure which selects for a high tolerance to negative feedback. This may be the reason it has been reported that there are a lot of "annoying debater types".
People that we do want fail to join because the method of protection puts them off.
There are two pitfalls that I think are going to be particularly attractive, but we should really avoid them:
1.) Filtering into hard/easy based on anything other than knowledge about rational thinking. There are various reasons that could go very wrong.
- Filtering in any other way will keep out advanced folks who may have a lot to teach.
If a person has already learned good reasoning skills in some other way, do we want them at the site? There might be logic professors, Zen masters, debate competition champs, geniuses, self-improvement professionals, hard-core bookworms, and other people who are already advanced and are interested in teaching others, in finding a good challenge, or in contributing articles, but who have already learned much of the material the sequences cover. Imagine that a retired logic professor comes by hoping to get a challenge from similarly advanced minds and perhaps do a little volunteer teaching about logic as a pastime. Now imagine requiring them to read 2,000 pages of "how to think rationally" to gain access to all the discussion areas. That will almost guarantee they go elsewhere.
- Filtering based on the sequences or other cultural similarities would promote conformity and repel the true thinkers.
If true rationalists think for themselves, some of them will think differently, and some of them will disagree. Eliezer has explained in Undiscriminating Skeptics: "I do propose that before you give anyone credit for being a smart, rational skeptic, that you ask them to defend some non-mainstream belief." He defines this as: "It has to be something that most of their social circle doesn't believe, or something that most of their social circle does believe which they think is wrong." If we want the "hard" area to contain people who are likely to hold and defend non-mainstream beliefs, we have to filter out people who are unable to defend their beliefs, without scaring off those whose beliefs differ from the group's.
2.) Discouraging people with unusually flawed English from participating at all levels. Doing that would stop two important sources of new perspectives from flowing in:
- People with cultural differences, who may bring in fresh perspectives.
If you're from China, you may want to share perspectives that could be new and important to a Westerner, but may be less likely to meet the technical standards of a perfectionist when it comes to writing in English.
- People with learning differences, whose brains work differently and may offer unique insight.
A lot of gifted people have learning disorders, and even gifted people who don't still tend to have large gaps between skill levels. It is not uncommon to find a gifted person whose ability with one skill is up to 40% behind (or ahead of) their abilities in other areas - a phenomenon called "asynchronous development". We associate spelling and grammar with intelligence, but the truth is that those who have a high verbal IQ may not have equally intelligent things to say, and people who word things crudely due to asynchronous development (engineers, for instance, are not known for their communication skills but can be brilliant at engineering) may be ignored even though they have important things to say. Dyslexics, who have all kinds of trouble from spelling to vocabulary to oddly arranged sentences, may be ignored despite the fact that "children and adults who are dyslexic usually excel at problem solving, reasoning, seeing the big picture, and thinking out of the box" (Yale).
Everyone understands the importance of making sure serious articles get published with good English, but frequently in intellectual circles the attitude is that if you aren't a perfectionist about spelling and grammar, you're not worth listening to at all. Getting articles polished when they are written by dyslexics or by people for whom English is a second language should be pretty easy to solve: they can simply seek a volunteer editor. The ratio of articles being published to the number of users at the site encourages me to believe these folks will be able to find someone to polish their work. Since it would be so easy to accommodate these disabilities, an attitude that uses form over function as a filter would not serve us well. If dyslexics and people from cultures different from the majority feel that we're snobby about technicalities, they could be put off. This could already be happening, and we could be missing out on the most creative and most different perspectives this way.
People who qualify under the "letter" of the standards do not meet the spirit of the standards.
For instance: They claim to be rationalists because they agree with a list of things that rationalists agree with, but don't think for themselves, as Eliezer cautions about in undiscriminating skeptics. Asking them questions like "Are you an atheist?" and "Do you think signing up for cryo makes sense?" would only draw large numbers of people who agree but do not think for themselves. Worse, that would send a strong message saying: "If you don't agree with us about everything, you aren't welcome here."
The right people join, but acculturate slowly or for some reason do not acculturate.
- Large numbers of users, even desirable ones, will be frustrating if newbie materials are not prominently posted.
I was very confused and disoriented as a new user, and I think there's a need for an orientation page. I wrote about my experiences as a new user here, which I think might make a good starting point for such a page. I think LessWrong also needs a written list of guidelines and rules positioned to be "in your face", like the rest of the internet has (because if users don't see it where they expect to find it, they will assume there isn't one). If new users adjust quickly, both old and new users will be less annoyed if and when lots of new users join at once.
The filtering mechanism gives LessWrong a bad name.
For instance, if we were to use an IQ test to filter users, the world may feel that LessWrong is an elitist organization. Sparking an anti-intellectual backlash would do nothing to further the cause of promoting rationality, and it doesn't truly reflect the spirit of bringing everyone up, which is what this is supposed to do. Similarly, asking questions that may trigger racial, political or religious feelings could be a bad idea - not because they aren't sources of bias, but because they'll scare away people who may have been open to questioning and growing but are not open to being forced to choose a different option immediately. The filters should be a test about reasoning, not a test about beliefs.
Proposed Filtering Mechanisms:
Principle One: A small number of questions can deter a lot of activity.
As a web pro, I have watched a ten-question registration form slash the number of files sent through a file-upload input that used to be public. The ten questions were not hard - just name, location, password, etc. Asking questions deters people from signing up. Period. That is why, if you've observed the same trend, I think a lot of big websites have moved to minimal registration info: email address and password only. Years ago that was not common; it seemed everyone wanted to ask you ten or twenty questions. For this reason, I think it would be best if the registration form stays simple, but if we create extra hoops to jump through to use the hard discussion area, only those who are seriously interested will join in there. Specific examples of questions that meet the other criteria are located in the proposed acculturation methods section under: A test won't deter ignorant cheaters, but it can force them to educate themselves.
Principle Two: A rigorous environment will deter those who are not serious about doing it right.
The ideal is to fill the hard discussion area with the sort of rationalists who want to keep improving, who are not afraid to disagree with each other, who think for themselves. How do you guarantee they're interested in improving? Require them to sacrifice for improvement. Getting honest feedback is necessary to improve, but it's not pleasant. That's the perfect sacrifice requirement:
Add a check box that they have to click where it says "By entering the hard discussion area, I'm inviting everyone's honest criticisms of my ideas. I agree to take responsibility for my own emotional reactions to feedback and to treat feedback as valuable. In return for their valuable feedback, which is a privilege and service to me, I will state my honest criticisms of their ideas as well, regardless of whether the truth could upset them."
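For what it's worth, the gate itself is trivial to build: one checkbox, one session flag. Here is a minimal sketch using Flask; the route names, the session mechanism, and the abbreviated agreement text are my own assumptions for the example, not a description of LessWrong's codebase.

```python
# Minimal sketch of a hard-area gate: one checkbox, one session flag.
# Route names and session handling are assumptions for the example.
from flask import Flask, redirect, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder; use a real secret in production

AGREEMENT = (
    "By entering the hard discussion area, I'm inviting everyone's honest "
    "criticisms of my ideas..."  # the full text proposed above
)

@app.route("/hard/agree", methods=["GET", "POST"])
def agree():
    if request.method == "POST" and request.form.get("agreed") == "on":
        session["hard_area_ok"] = True  # remember the agreement
        return redirect("/hard")
    return (
        f"<form method='post'><p>{AGREEMENT}</p>"
        "<label><input type='checkbox' name='agreed'> I agree</label> "
        "<button type='submit'>Enter</button></form>"
    )

@app.route("/hard")
def hard_area():
    if not session.get("hard_area_ok"):
        return redirect("/hard/agree")  # no agreement, no entry
    return "Hard discussion area."
```

The hard part is not the code; it's getting the wording of the agreement right, which is what the rest of this section is about.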
I think it's common to assume that in order to give honest feedback one has to throw manners out the window. I disagree with that. I think there's a difference between pointing out a brutal reality, and making the statement of reality itself brutal. Sticking to certain guidelines like attacking the idea, not the person and being objective instead of ridiculing makes a big difference.
There are other ways, also, for less bold people, like the one that I use in IRL environments: Hint first (sensitive people get it, and you spare their dignity) then be clear (most people get it) then be brutally honest (slightly dense people get it). If I have to resort to the 2x4, then I really have to decide whether enlightening this person is going to be one of those battles I choose or one of those battles I do not choose. (I usually choose against those battles.)
How do you guarantee they're capable of disagreeing with others? Making it clear that they're going to experience disagreements by requiring them to invite disagreements will not appeal to conformists. Those who are not yet thinking for themselves will find it impossible to defend their ideas if they do join, so most of them will become frustrated and go back to the easy discussion area. People who don't want intellectual rigor will be put off and leave.
It's important that the wording for the check box has some actual bite to it, and that the same message about the hard discussion area is echoed in any pages that advise on the rules, guidelines, etiquette, etc. To explain why, I'll tell a little story about an anonymous friend:
I have a friend who worked at Microsoft. He said the culture there was not open to new ideas and that management was not open to hearing criticism. He interviewed with various companies and chose Amazon. According to this friend, Amazon actually does a good job of fulfilling values like inviting honest feedback and creating an environment conducive to innovation. He showed me the written values for each company. I didn't think much of this at first, because most values pages are boring and read like empty marketing copy. But Amazon.com has the most incredible written values page I've ever seen - it does more than sit there as a static piece of text. It gives you permission. Instead of saying something fluffy like "We value integrity and honesty and our managers are happy to hear your criticisms.", it first creates expectations for management: "Leaders are sincerely open-minded, genuinely listen, and are willing to examine their strongest convictions with humility." It then gives employees permission to give honest feedback to decision-makers: "Leaders (all employees are referred to as 'leaders') are obligated to respectfully challenge decisions when they disagree, even when doing so is uncomfortable or exhausting. Leaders have conviction and are tenacious. They do not compromise for the sake of social cohesion." The Amazon values page gives employees permission to innovate as well: "As we do new things, we accept that we may be misunderstood for long periods of time." If you look at Microsoft's written values, there's no bite to them. What do I mean by bite?
Imagine you're an employee at Amazon. Your boss does something stupid. The usual cultural expectation is that you're not supposed to say anything - offending the boss is bad news, right? So you're inhibited. But the thing they've done is stupid. So you think back to the values page and bring it up on your computer. It says explicitly that your boss is expected to be humble and that you are expected to sacrifice social cohesion in this case and disagree. Now, if your boss gets irritated with you for disagreeing, you can point back to that page and say, "Look, it's in writing. I have permission to tell you."
Similarly, there is what I consider to be a very unfortunate social rule that more or less says: if you don't have something nice to say, don't say anything at all. Many people feel obligated to keep constructive criticism to themselves; a lot of us are intentionally trained to be non-confrontational. If people are going to overcome a lifetime of training to squelch constructive criticism, they need an excuse to ignore that training. Not just any excuse - the wording has to require them to give honest criticism, and to do so explicitly despite the consequences.
Principle Three: If we want innovation, we have to make innovators feel welcome.
That brings me to another point. If you want innovation, you can't deter the sort of person who will bring it to you: the people "who will be misunderstood for long periods of time", as Amazon puts it. If you give a misunderstood person specific constructive criticism, you help them figure out how to communicate - how else will they navigate the jungle of perception and context differences between themselves and others? If you simply vote them down, silently and anonymously, they get no opportunity to learn how to communicate with you - and worse, they'll be censored after three downvotes. This ability for three people to censor somebody with no accountability, without even needing a reason, encourages posters to keep quiet rather than take the risks an innovator must take in presenting new ideas, and it robs misunderstood innovators of the feedback they need to explain those ideas. Here is an example of how feedback can transform an innovator's description of a new idea from something that seems incomprehensible into something that shows obvious value:
On the "Let's start an important start-up" thread, KrisC posts a description of an innovative phone app idea. I read it and I cannot even figure out what it's about. My instinct is to write it off as "gibberish" and go do something else. Instead, I provide feedback, constructive criticism and questions. It turns out that the idea KrisC has is actually pretty awesome. All it took was for KrisC to be listened to and to get some feedback, and the next description that KrisC wrote made pretty good sense. It's hard to explain new ideas but with detailed feedback, innovation may start to show through. Link to KrisC and I discussing the phone app idea.
Proposed Acculturation Methods:
Send them to the Center for Modern Rationality
Now that I have discovered the post on the Center for Modern Rationality and have seen that they're targeting the general population and beginners - with material for local meetups, high schools, and colleges, plus plans for web apps to help with rationality training - I see that referring people to them might be a great suggestion. Saturn suggested sending them to appliedrationality.org before I found this, but I'm not sure that would be adequate, since I don't see a lot for people to do on their website.
Highlight the culture.
A database of cultural glossary terms can be created and used to highlight those terms on the forum. The terms are already on the page, so what good would this do? Well, first, they can be automatically linked to the relevant sequence or wiki page. If old users do not have to hunt down the link, this speeds up the process of mentioning them to new users quite a lot. Secondly, it would make the core cultural items stand out from all of the other information, which will likely cause new users to prioritize them. Thirdly, there will be a visual effect on the page: you'll be able to see that this place has its own vocabulary, its own personality, its own memes. It's one thing to tell a new user "LessWrong has been influenced by the sequences" when they haven't noticed those references all over the pages (and even if they have, they won't know where they're from); it's another to make the influence immediately obvious with a visual that illustrates the point.
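Here's a minimal sketch of how that highlighting could work, assuming a simple glossary table mapping terms to their source pages (the terms, URLs, and markup below are illustrative, not LessWrong's actual data):

```python
import html
import re

# Hypothetical glossary: term -> wiki/sequence URL (placeholders).
GLOSSARY = {
    "inferential distance": "http://wiki.lesswrong.com/wiki/Inferential_distance",
    "motivated cognition": "http://wiki.lesswrong.com/wiki/Motivated_cognition",
    "steelman": "http://wiki.lesswrong.com/wiki/Steel_man",
}

# Match any glossary term, longest first so multi-word terms win
# over their substrings.
terms = sorted(GLOSSARY, key=len, reverse=True)
pattern = re.compile(r"\b(" + "|".join(map(re.escape, terms)) + r")\b",
                     re.IGNORECASE)

def highlight(comment_text: str) -> str:
    """Wrap each glossary term in a styled link to its source page."""
    def link(m: re.Match) -> str:
        return ('<a class="glossary-term" href="%s">%s</a>'
                % (GLOSSARY[m.group(1).lower()], html.escape(m.group(1))))
    return pattern.sub(link, comment_text)

print(highlight("The inferential distance here is large."))
```

A real implementation would also need to skip terms inside existing links and code, but the point stands: the database does the work once, and every page benefits.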
Provide new users with real feedback instead of mysterious anonymous downvotes:
We have karma vote buttons, but these do not provide useful feedback for new users. Without a stated reason, I have no way to tell whether I'm being downvoted by trolls, and I may see ten different possible explanations for a downvote with no way to know which one applies. This annoyance selects for thick-skinned individuals like trolls and fails to avoid the "imbalance in the proportion of thick-skinned individuals to normal individuals" side-effect.
If good new users are to be preserved, and the normal-people-to-troll ratio is to be maintained, we need to add a "vote to ban" button that's used only for blatant misbehavior, and any anonymous downvoting system needs to prompt you for more detailed feedback - either a selection from categories, or at least a word or two of explanation. Also, comments need to show both upvotes and downvotes: if you can't tell when you've said something controversial, and you're encouraged to view everything you say as simply good or bad, that promotes conformity.
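As a rough sketch of what downvoting-with-a-reason might look like under the hood (the category names and structure here are only examples, not a spec for LessWrong's codebase):

```python
from collections import Counter
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical downvote categories; a real list would be chosen by the community.
class DownvoteReason(Enum):
    UNCLEAR = "unclear"
    FACTUALLY_WRONG = "factually wrong"
    LOGICAL_FALLACY = "logical fallacy"
    OFF_TOPIC = "off topic"

@dataclass
class CommentVotes:
    upvotes: int = 0
    downvote_reasons: list = field(default_factory=list)  # one entry per downvote

    def downvote(self, reason: DownvoteReason, note: str = "") -> None:
        """A downvote is only accepted together with a reason (plus optional note)."""
        self.downvote_reasons.append((reason, note))

    def summary(self) -> str:
        """Show both totals, plus a breakdown the author can actually learn from."""
        counts = Counter(reason for reason, _ in self.downvote_reasons)
        breakdown = ", ".join(f"{r.value}: {n}" for r, n in counts.items())
        return f"+{self.upvotes} / -{len(self.downvote_reasons)} ({breakdown})"

votes = CommentVotes(upvotes=4)
votes.downvote(DownvoteReason.UNCLEAR, "couldn't follow the second paragraph")
print(votes.summary())  # +4 / -1 (unclear: 1)
```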
A test won't deter ignorant cheaters, but it can force them to educate themselves.
Questions can be worded in such a way that they serve as a crash course in reasoning, even if someone posts a cheat sheet or registrants look up all the answers on the internet. Assuming the answer options are randomly ordered, so that you have to actually read them, the test should, at the very least, familiarize people with the various biases, logical fallacies, and so on. (A short sketch of randomized option ordering follows the note below.) Examples:
--------------
Person A in a debate explains a belief but it's not well-supported. Their opponent, person B, says they're an idiot. What is this an example of?
A. Attacking the person, a great way to really nail a debate.
B. Attacking the person, a great way to totally fail in debate because you're not even attacking their ideas.
--------------
You are with person X and person Y. Person Y says they have been considering some interesting new evidence of what might be an alien space craft and aren't sure what to think yet. You both see person Y's evidence, and neither of you has seen it before. Person X says to you that they don't believe in UFOs and don't care about person Y's silly evidence. Who is the better skeptic?
A. Person X, because they have the correct belief about UFOs.
B. Person Y, because they are actually thinking about it, avoiding undiscriminating skepticism.
--------------
Note: These questions are intentionally knowledge-based. If the purpose is to avoid requiring an IQ test, and to create an obstacle that requires you to learn about reasoning before posting in "hard", knowledge-based questions are the only way to do it.
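And here is the promised sketch of randomized option ordering, using the first example question (the data structure is hypothetical):

```python
import random

# A quiz question with exactly one correct option; the structure is illustrative.
QUESTION = {
    "prompt": "Person B calls person A an idiot instead of addressing "
              "their argument. What is this an example of?",
    "options": [
        ("Attacking the person, a great way to really nail a debate.", False),
        ("Attacking the person, a great way to totally fail, because "
         "you're not even attacking their ideas.", True),
    ],
}

def present(question: dict) -> list:
    """Return the options in a fresh random order, so cheat sheets keyed to
    option letters are useless and the reader has to read each answer."""
    options = question["options"][:]
    random.shuffle(options)
    return options

for i, (text, _is_correct) in enumerate(present(QUESTION)):
    print(f"{chr(ord('A') + i)}. {text}")
```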
Encouraging users to lurk more.
Vaniver contributed this: Another way to cut down on new-new interaction is to limit the number of comments someone can make in a time period - if people can only comment once a day until their karma hits 20, then once an hour until their karma hits 100, and then they're unrestricted, that will explicitly encourage lurking / paying close attention to karma among new members. (It would be gameable, unless you did something like prevent new members from upvoting the comments of other new members, or algorithmically kept an eye out for people gaming the system and then cracked down on them.) [edit] Making the delay a near-continuous function of karma - say, 24 hours * exp(-b * karma) - might improve the incentives and avoid partitioning users explicitly. No idea whether that would be more or less effort on the coding side.
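A minimal sketch of that near-continuous delay, with an illustrative decay constant (the real value of b would need tuning):

```python
import math

def comment_delay_hours(karma: int, b: float = 0.05) -> float:
    """Delay before a user may comment again, as a smooth function of karma.

    At karma 0 the delay is a full 24 hours; it decays toward zero as
    karma grows, so there is no hard boundary between 'new' and 'trusted'
    users. The decay constant b is a tuning knob, not a known value.
    """
    return 24.0 * math.exp(-b * max(karma, 0))

for k in (0, 20, 100, 200):
    print(f"karma {k:>3}: wait {comment_delay_hours(k):6.2f} hours")
```

With b around 0.05, the wait drops below an hour at roughly 64 karma, so there's no sharp cliff between new and established users.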
Cons: This would deter some new users from becoming active by causing them to lose steam on their initial motivation to join - and it might deter exactly the right people. It might also filter for the most persistent users, or for some other trait that would change the personality of the user base. It would exacerbate the filtering effect the current karma system already exerts, which, I theorize, is producing a disproportionate number of thick-skinned individuals like trolls and debate-oriented newbies. My theory about how the karma system is having a bad influence
Give older users more voting power.
Luke suggested "Maybe this mathematical approach would work. (h/t matt)" on the "Call for Agreement" thread.
I question, though, whether changing the karma numbers on comments and posts would significantly influence behavior, or who joins and stays. First, votes may reward and punish, but they don't instruct very well: unless people are very similar, the voted-down won't have accurate guesses about what they did wrong. Second, influencing current behavior and preventing a new majority from forming are different problems. The current users of the right type may be both motivated and able to change, but future users of the wrong type may not care or may be incapable of changing - and they may set a new precedent, because if enough people are doing unpopular things, new people are more likely to ignore popularity altogether. The technique uses math, and the author claims that "the tweaks work", but I didn't see anything specific about what the author means by that, nor evidence that it's true. So it looks good because it's mathematical, but it's less direct than other options, and I question whether it would work.
Vladimir_Nesov posted a variation here.
Make a different discussion area for users with over 1000 karma.
Make a Multi Generation Culture.
Limit the number of new users that join the forum to a certain percentage per month, sending the rest to a new forum. If that forum grows too fast, create additional forums. This would be like having different generations. New people would be able to join an older generation if there is space. Nobody would be labeled a "beginner".
Temporarily turn off registration or limit the number of users that can join.
(See the cliff notes version for more.)
Should easy discussion participants be able to post articles?
I think the answer is yes, because no filtering mechanism is perfect, and the last thing you want is to filter out people with a different and important point of view. Unless the site is currently having trouble with trolls posting new articles, or with article quality going down, leaving that freedom intact is best. I do think, though, that written guidelines for posting an article need to be put in "in your face", expected places. If a lot of new users join at once, well-meaning but confused people will post the wrong sorts of things - having the guidelines right there is probably all that's needed to deter them.
Testing / measuring results:
How do we tell if this worked? Tracking something subjective, like whether we're feeling challenged or inundated with newbies, is not a straightforward matter of looking at numbers. (Methods to help willing people learn faster deserve their own post.) But just because it's subjective doesn't mean tracking is impossible, or that we can't work out whether it made a difference. I suspect a big difference will be noticeable in the hard discussion area right away. Here are some relevant figures that can be tracked, which may give us insight and ways to check our perceptions:
1. How many people are joining the hard forum versus the easy forum? If we've got a percentage, we know how *much* we've filtered, though we won't know exactly *who* we've filtered.
2. Survey the users to ask whether the conversations they're reading have increased in quality.
3. Survey the users to ask whether they've been learning more since the change.
4. See which area has the largest ratio of users with lots of downvotes. (A sketch of the computation follows this list.)
(This could be tricky: people who frequently state disagreements might be doing the group a great service but be unpopular for it, and innovative people may be getting downvoted because they're misunderstood. One would think, though, that people who are unpopular for disagreeing or innovating - assuming they're serious about good reasoning - would end up in the hard forum.)
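For metric 4, a minimal sketch of the computation, assuming hypothetical per-user vote records and an arbitrary threshold for "lots of downvotes":

```python
# Hypothetical records: (user, discussion_area, downvotes_received)
records = [
    ("user_a", "easy", 14),
    ("user_b", "easy", 2),
    ("user_c", "hard", 9),
    ("user_d", "hard", 1),
]

HEAVILY_DOWNVOTED = 8  # arbitrary threshold for "lots of downvotes"

def heavy_downvote_ratio(area: str) -> float:
    """Fraction of an area's users who receive lots of downvotes."""
    users = [r for r in records if r[1] == area]
    heavy = [r for r in users if r[2] >= HEAVILY_DOWNVOTED]
    return len(heavy) / len(users) if users else 0.0

for area in ("easy", "hard"):
    print(area, heavy_downvote_ratio(area))  # 0.5 for each, in this toy data
```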
Request for honest feedback:
Your honest criticisms of this idea and your suggestions will be appreciated, and I will update this idea or write a new one to reflect any good criticisms or ideas you contribute.
This is in the public domain:
This idea is hereby released into the public domain, with acknowledgement from Luke Muehlhauser that those were my terms prior to posting. My intent is to share this idea to make it impossible to patent and my hope is that it will be free for the whole world to use.
Preventing discussion from being watered down by an "endless September" user influx. by Epiphany is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
LessWrong could grow a lot, but we're doing it wrong.
How do I know this? I got a copy of the website analytics.
The bounce rate for LessWrong's home page is 60%!
To be clear: Over half the people who visit LessWrong are going away without even clicking anything.
Yet how many NEW visitors are there? Almost half of the visitors are new!
Granted, new visitor statistics aren't perfect, but that's a LOT of people.

Simple math should tell us this:
If we got the bounce rate down to around 30% (a reasonable rate for a good site) by making sure every visitor sees something awesome immediately, and made sure each visitor can quickly gauge how much they'll relate to the community (assuming we attract the right target audience), retained visitors would go from 40% of arrivals to 70% - a 75% increase. Add the multiplier effect and that could theoretically double the rate of growth, or more: improving the bounce rate gets you better placement in search engines, because search engines get more users when they surface content that is interesting, not just relevant.
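To make the arithmetic concrete (the visitor count below is hypothetical; the percentages are the ones from the analytics):

```python
visitors = 10_000        # hypothetical monthly visitors
new_fraction = 0.45      # "almost half of the visitors are new"

def retained_new(bounce_rate: float) -> float:
    """New visitors who stay past the first page."""
    return visitors * new_fraction * (1 - bounce_rate)

before = retained_new(0.60)   # 1,800 new visitors retained per month
after = retained_new(0.30)    # 3,150 new visitors retained per month
print(after / before)         # 1.75x, before any search-ranking effects
```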
It's been argued that most of the bounces might be returning visitors checking for new content. Well, if half of each month's visitors are new, and we did a wonderful job of showing them that LessWrong is awesome, then the number of returning visitors could double each month. We're getting a tiny, tiny fraction of that growth:

http://www.sitemeter.com/?a=stats&s=s18lesswrong&r=36
Why did I write you guys so much in the home page rewrites thread? Because I am a web professional who works with web marketing professionals, and it was blatantly obvious to me that there's that much room for improvement in the growth of LessWrong - changes like the ones I suggested wouldn't even take long. Because I like this site, and I knew it had the potential to grow by leaps and bounds if somebody just paid a little attention to real web marketing. And because I was confused when I first found this site: I had no idea what it was about, or why it was awesome. I closed the home page myself. A friend mentioned LessWrong again; my curiosity perked up; I came back and read the about page. That didn't make things clearer either, so I left again without going further. Friends kept telling me it was awesome. I came back one day and finally found an awesome article. It took me three tries to figure out why you guys are awesome, because the web marketing is so bad. The new proposals, although well-meaning - it's obvious that John_Maxwell_IV cares about the site - are more of the same bad marketing.
I've been interested in web marketing for ten years, and it's a topic I've accumulated a lot of information about. As I see it, the way these proposals go about things runs counter to basic web marketing principles. They don't seem to account for how harsh users are the first time they see a new website: users tend to just go away if a site doesn't grab them within a few seconds. The attitude is "well, we'll put interesting links in", but that's not how it works. The links don't make the site interesting - the site has to be interesting enough for users to want to click the links. Thinking the links will do the job is backward: to improve your bounce rate, the goal is to be awesome immediately, so the user stays on the page long enough to want to click anything at all. If getting users to click links weren't usually hard, we wouldn't track bounce rates. These guys know this particular group of users better than I do, but they're missing web marketing principles even when those principles are pointed out; to me, they seem unaware of the field entirely. The numbers don't lie, and they're saying there's huge room for improvement.
If you want to grow, it's time to try something different.
Here's a thought: there is a lot of awesome content on this website. We need to take what's awesome and make it in-your-face obvious. I wrote a plan for quickly finding the most effective awesome content - the website statistics will tell you which pages keep new visitors on them the longest - and for using it to get the attention of new users: copy the first paragraph from one of those pages, which was most likely written in a way that hooks people (if it's keeping them on the page, it's essentially proven to), and place it as bait right on the front page. (There is also a wrong way to do this.) Then, of course, the user needs to find out why the LessWrong community might be a place where they belong. I shared ideas for that in "About us - Building Interest".
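A minimal sketch of that first step, assuming the analytics tool can export per-page time-on-page figures to a CSV (the file name and column names below are made up):

```python
import csv

# Assumes a CSV export from the analytics tool with columns:
# page,new_visitor_avg_seconds   (both names are hypothetical)
def best_hook_pages(path: str, top_n: int = 5) -> list:
    """Pages that keep new visitors around longest - front-page bait candidates."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda r: float(r["new_visitor_avg_seconds"]), reverse=True)
    return [r["page"] for r in rows[:top_n]]

# The opening paragraphs of these pages have effectively been field-tested
# on new visitors already.
print(best_hook_pages("analytics_export.csv"))
```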
Let's not assume that growth is automatically good. You're going to get more internet trolls, more spam (there's a way to control spam, which I'd be happy to share), and more newbies who don't know what they're doing (I provided some suggestions to get them on track quickly, preventing annoyance for both you and them). There will be people with new ideas - but if the wrong audience is targeted... well. We'd better choose what audience to target. I once watched an internet forum take off: it seemed to be growing slowly, until we looked at the numbers and saw the curve was exponential, and suddenly the new users outnumbered the old ones. That could happen here - even if we do nothing. YOU can get involved. YOU can influence who to target. They're taking suggestions on rewrites right now. Go to the thread. I invite brutal honesty on everything I wrote there. Or pick my brain, if you'd prefer.
What do you want, LessWrong? Do you want to grow optimally? Who do you want to see showing up?
Enjoy solving "impossible" problems? Group project!
In the Muehlhauser-Hibbard Dialogue on AGI, Hibbard states that it will be "impossible to decelerate AI capabilities", but Luke counters with "Persuade key AGI researchers of the importance of safety ... If we can change the minds of a few key AGI scientists, it may be that key insights into AGI are delayed by years or decades." Before I read that dialogue, I had come up with three additional ideas in Heading off a near-term AGI arms race. Bill Hibbard may be right that "any effort expended on that goal could be better applied to the political and technical problems of AI safety", but I doubt he's right that it's impossible.
How do you prove something is impossible? You might prove that a specific METHOD of reaching the goal does not work, but that doesn't mean there's no other method. You might prove that all the methods you know about do not work - but that doesn't prove there isn't some option you don't see. "I don't see an option, therefore it's impossible" is just an appeal to ignorance. It's a common one, but it's incorrect reasoning regardless. Think about it: can you think of a way to prove that a working method isn't out there waiting to be discovered, without saying the equivalent of "I don't see any evidence for this"? We can say "I don't see it, I don't see it, I don't see it!" all day long.
I say: "Then Look!"
How often do we push past this feeling and keep thinking of ideas that might work? For many, the answer is "never" or "only if it's needed". The sense that something is impossible is subjective and fallible. If we have no way of proving something impossible, yet believe it to be impossible anyway, that is just a belief. What distinguishes it from bias?
I think there's a common fear of wasting your entire life doing something that is, in fact, impossible. That fear is valid, but it misses the obvious: as soon as you think of a plan to do the impossible, you'll be able to guess whether it will work. The hard part is THINKING of a plan to do the impossible. I'm suggesting that if we put our heads together, we can think of a plan that turns an impossible thing into a possible one. Not only that, I think we're capable of doing this on a worthwhile topic: an idea that will not only benefit humanity, but is good enough that the time, effort, and risk required to accomplish it are worth it.
Here's how I am going to proceed:
Step 1: Come up with a bunch of impossible project ideas.
Step 2: Figure out which one appeals to the most people.
Step 3: Invent the methodology by which we are going to accomplish said project.
Step 4: Improve the method as needed until we're convinced it's likely to work.
Step 5: Get the project done.
Impossible Project Ideas
- Decelerate AI Capabilities Research: If we develop AI before we've figured out the political and technical safety measures, we could have a disaster. Luke's Ideas (Starts with "Persuade key AGI researchers of the importance of safety"). My ideas.
- Solve Violent Crime: Testosterone may be the root cause of the vast majority of violent crime, but there are obstacles in treating it.
- Syntax/static Analysis Checker for Laws: Automatically look for conflicting/inconsistent definitions, logical conflicts, and other possible problems or ambiguities.
- Rational Agreement Software: If rationalists should ideally always agree, why not make an organized information resource designed to get us all to agree? It would track the arguments for and against ideas so that each piece can be logically verified and challenged; make the entire collection of arguments available in an organized form where none are repeated and no useless information is included; and be editable by anybody, like a wiki, with the most rational outcome displayed prominently at the top. This is especially hard because it would be our responsibility to make something SO good that it convinces us to agree with one another, and it would have to be structured well enough that we actually manage to distinguish between opinions and facts. Also, Gwern mentions in a post about critical thinking that argument maps increase critical thinking skills. (A minimal sketch of one possible data structure follows this list.)
- Discover unrecognized bias: This is especially hard since we'll be using our biased brains to try and detect it. We'd have to hack our own way of imagining around the corners, peeking behind our own minds.
- Logic checking AI: Build an AI that checks your logic for logical fallacies and other methods of poor reasoning.
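For the Rational Agreement Software idea above, here's a minimal sketch of one possible data structure, under the assumption that claims form a tree of supporting and opposing arguments - all names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    is_verified_fact: bool = False  # facts and opinions must stay distinct
    supporting: list = field(default_factory=list)  # list of Claim
    opposing: list = field(default_factory=list)    # list of Claim

    def score(self) -> int:
        """Net support, counting the whole subtree; the top-level display
        would sort claims by something like this."""
        return (sum(1 + c.score() for c in self.supporting)
                - sum(1 + c.score() for c in self.opposing))

root = Claim("Rationalists should use a shared argument map.")
root.supporting.append(Claim("Arguments are never repeated."))
root.opposing.append(Claim("Maintaining it takes volunteer time."))
print(root.score())  # 0 - evenly contested so far
```

The hard part, as noted, isn't the structure - it's making the content good enough that people actually update.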
Add your own ideas below (one idea per comment, so we can vote them up or down), make sure to describe your vision, and I'll list them here.
Figure out which one appeals to the most people.
Assuming each idea is put into a separate comment, we can vote them up or down. If ideas begin with the word "Idea", I'll be able to find them and put them on the list. Obviously, if your idea gets enough attention, it will at some point make sense to create a new discussion for it.
Number of Members on LessWrong
I was excited to find this site, so I wanted to know how many people had joined LessWrong. Was it what it seemed - a lot of people actually gathered around the theme of rational thought - or was that just wishful thinking about a site that a guy with a neat idea and his buddies put together? I couldn't find the number of members stated anywhere on the site or elsewhere on the internet, so I decided it would be a fun test of my search engine knowledge to nail jello to a tree and produce my own estimate.
Some argue that Google totals are completely meaningless. The real problem, however, is that the totals are very complicated, and if you don't know how search engines work, your likelihood of getting a usable number is low. I took the potential pitfalls into account when MacGyvering this figure out of Google. So far, no one has posted a significant flaw with my specific method. (I will change that statement if someone does, once I've read their comment.) Also, I was right (Find in page: total).
Here is the query I constructed:
site:lesswrong.com/user -"submitted by" -"comments by"
(Translation provided at the end.)
This gets a similar result in Bing and Yahoo:
"lesswrong.com/user"
If this is correct, LessWrong has over 9,000 members. That's my claim: "LessWrong probably has over 9,000 members" not "LessWrong has exactly 9,000 members". My LessWrong population figure is likely to be low. (I explain this below.)
Why did I do this? I was really overjoyed to find this site and wanted to see whether it was somebody's personal site with just a few buddies, or if they actually managed to draw a significant gathering of people who are interested in rational thought. I was very happy to see that it looks much bigger than a personal site. Since it was so hard to find out how many users LessWrong has, I decided to share.
I think a lot of people accept the hasty generalization that "all search engine totals are meaningless". If you're an average user plugging in search terms with little understanding of how search engines work, then yes, you should regard the totals as meaningless. But if you know the limitations of a technique - which parts of the system you're working within are consistent and which are not - I say it is possible to get some meaning within those limitations. Do I know all the limitations? I assume I'm unaware of things I don't know, so I won't claim that. But I do know that so far nobody has proven this number or method wrong. If you want to prove me wrong, go for it - that would be fascinating. Remember, the claim is "LessWrong probably has over 9,000 members"; the entire purpose was to get an "at least this many" figure. The inaccuracies I've already taken into consideration, to compensate for the limits of this technique, are listed below:
Why this is an "at least this many" figure, pitfalls I've avoided or addressed, and inaccuracies.
- Some users may not be in Google's index yet. For instance, if they have never posted, there may be no link to their user page (user pages are what I searched for), so the spider would never find it. The count may therefore be restricted to members who have commented, posted, or been linked to in some way somewhere on the internet.
- Search engine caches are not in real time. There can be a lag of up to months, depending on how much the search engine "likes" the page.
- Former employees of a major search engine have reported that the caches are stored on crazy old computer equipment, and that it's common for sections of the cache to be down for that reason.
- Search engines have restrictions in place to conserve resources. For instance, they won't let you peruse all of the results using the "next" button, and they don't total all of the results that they have when you first press "search" (you may see that number increase later if you continue to press "next" to see more pages of results.)
- It has been argued that Google doesn't interpret search terms the way you'd think. I knew that before I started. The query was designed with that in mind. I explain that here: http://lesswrong.com/r/discussion/lw/e4j/number_of_members_on_lesswrong/780g
- Some of the results in Bing and Yahoo were irrelevant, though I think I weeded them pretty thoroughly for Google if my random samples of results pages are a good indication of the whole.
- When you go to your user page, if you have more than 10 comments, a next link shows at the bottom and clicking it makes more pages appear. My understanding is that Google doesn't index these types of links - and they don't seem to be getting included. http://lesswrong.com/lw/e4j/number_of_members_on_lesswrong/7839
Go ahead and check it out - stick the query in Google and see how many LessWrong members it shows. You'll certainly get a more up-to-date total than I have posted here. ;)
Translation, for those of you who don't know Google's search operators:
site:lesswrong.com/user
"Search only lesswrong.com, only the user directory."
(The user directory is where each user's home page is, so I'm essentially telling it "find all the home page directories".)
-"submitted by" -"comments by"
"Exclude any page in that directory containing the exact text 'submitted by' or 'comments by'."
(The submissions and comments pages use URLs in that directory, so they would show up in the results if I did not subtract them. I also used exact text specific to those pages, so that the text in links on user home pages doesn't get the home pages themselves omitted from the search.)
Note:
I realize this number isn't scientific proof of anything (we can't see Google's code, so treating it as proof would be foolish), which is why I'm not attempting to use it to convince anyone of anything important.