After reading my "LessWrong could grow a lot" thread, various people raised concerns that growth might ruin the culture.  There has been some discussion about whether endless September, a phenomenon that kills online discussion groups, is a significant threat to LessWrong and what can be done about it.  I care about this enough that I volunteered to code a solution myself, for free if needed.  Luke invited debate on the subject (the debate is here) and will be sent the results of this poll and asked to make a decision.  He suggested in an email that I wait a little while before posting my poll (meta threads are apparently annoying to some, so we let people cool off).  Here it is, preceded by a Cliff's notes summary of the concerns.


Why this is worth your consideration:

 - Yvain and I checked the IQ figures in the survey against other data this time, and the good news is that it is now more believable that the average LessWronger is gifted.  The bad news is that LessWrong's average IQ has decreased on each survey.  It can be argued that it isn't decreasing by a lot, or that we don't have enough data, but if the data is good, LessWrong's average has lost 52% of its giftedness since March of 2009.
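(For concreteness, here is the arithmetic behind a claim like "lost 52% of its giftedness", reading "giftedness" as the number of IQ points by which the community average exceeds a gifted cutoff.  The cutoff and survey means below are placeholder values for illustration, not the actual survey results.)

```python
# Illustrative sketch only: the cutoff and survey means are assumed
# placeholder values, not the real LessWrong survey results.
GIFTED_CUTOFF = 132.0        # hypothetical IQ threshold for "gifted"
mean_iq_2009 = 146.0         # hypothetical March 2009 survey mean
mean_iq_now = 138.7          # hypothetical current survey mean

margin_2009 = mean_iq_2009 - GIFTED_CUTOFF   # points above the cutoff then
margin_now = mean_iq_now - GIFTED_CUTOFF     # points above the cutoff now

loss = 1 - margin_now / margin_2009
print(f"Share of the gifted margin lost: {loss:.0%}")   # ~52% with these numbers
```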

 - Eliezer documented the arrival of poseurs (people who superficially copy cultural behaviors and are reported to overrun subcultures), whom he termed "Undiscriminating Skeptics".

 - Efforts to grow LessWrong could trigger an overwhelming deluge of newbies.

 - LessWrong registrations have been increasing fast and it's possible that growth could outstrip acculturation capacity. (Chart here)

 - The Singularity Summit appears to cause a deluge of new users that may have a similar effect to the September deluges of college freshmen that endless September is named after.  (This chart shows a spike correlated with the 2011 Summit, when 921 users joined that month, which is roughly equal to the total number of active users LW tends to have in a month going by the surveys or Vladimir's wget.)

 - A Slashdot effect could result in a tsunami of new users if a publication with lots of readers like the Wall Street Journal (they used LessWrong data in this article) decides to write an article on LessWrong.

 - The sequences contain a lot of the culture and are long, meaning that "TLDR" reactions may make LessWrong vulnerable to cultural disintegration.  (New users may not know how detailed LW culture is or that the sequences contain so much culture.  I didn't.)

 - Eliezer said in August that the site was "seriously going to hell" due to trolls.

 - A lot of people raised concerns.

 

Two Theories on How Online Cultures Die:


  Overwhelming user influx.
  There are too many new users to be acculturated by older members, so they form their own, larger new culture and dominate the group.

  Trending toward the mean. 
  A group forms because people who are very different want a place to be different together.  The group attracts more people who are closer to the mainstream than people who are equally different, simply because mainstream people are more numerous.  The larger group then attracts people who are even less different in the original group's way, for the same reason.  The original group is slowly overwhelmed by people who will never understand it, because they are too different from the founders.
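A toy simulation can make the first failure mode concrete.  Every number below is an illustrative assumption (monthly sign-ups, how many newcomers an acculturated member can realistically mentor per month, and so on), not measured LessWrong data; the point is only that when the influx persistently exceeds acculturation capacity, the founding culture ends up a minority of the active population.

```python
# Toy model of "overwhelming user influx" -- all parameters are assumptions
# for illustration, not measurements of LessWrong.
def founding_culture_share(months=24, acculturated=1000.0,
                           newcomers_per_month=900.0,
                           mentees_per_member=0.05):
    """Fraction of members who end up carrying the original culture."""
    outsiders = 0.0
    for _ in range(months):
        capacity = acculturated * mentees_per_member   # absorbable this month
        absorbed = min(newcomers_per_month, capacity)
        acculturated += absorbed                       # successfully acculturated
        outsiders += newcomers_per_month - absorbed    # join the larger new culture
    return acculturated / (acculturated + outsiders)

# A smaller influx: the founding culture stays dominant (~88% with these numbers).
print(f"{founding_culture_share(newcomers_per_month=100):.0%}")
# An influx far above capacity: the founding culture becomes a minority (~14%).
print(f"{founding_culture_share(newcomers_per_month=900):.0%}")
```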

 

Endless September Poll.


Request for Feedback:

In addition to constructive criticism, I'd also like the following:

  • Your observations of a decline or increase in quality, culture or enjoyment at LessWrong, if any.

  • Ideas to protect the culture.

  • Ideas for tracking cultural erosion.

  • Ways to test the ideas to protect the culture.

 

262 comments

So far, I've been more annoyed on LessWrong by people reacting to fear of "cultural erosion" than by any extant symptoms of same.

The fear is that this is due to a selection effect. Of the people I know through LW, a disappointing number have stopped reading the site. One of my hobbies, for over a decade now, has been posting on forums, and so the only way I'd stop reading / posting on LW is if I find a forum more relevant to my interests. (For the curious, I've moved from 3rd edition D&D to xkcd to here over that timeframe, and only post in xkcd's MLP and gaming threads these days.) For many of the former LWers I know, forum-posting isn't one of their hobbies, and they came here for the excellent content, primarily by EY. Now that there aren't blog posts that they want to read frequently enough, they don't come, and I'm not sure that any of them even know that EY has started posting a new sequence.

I think that this fear is mostly misplaced, because the people in that class generally aren't the people posting the good content, and I think any attempt to improve LW should be along the lines of "more visible good content" and not "less bad content," but it's important for evaporative cooling reasons to periodically assess the state of content on LW.

9David_Gerard11y
Not only do communities have a life cycle, people's membership in communities does. People give all sorts of reasons for leaving a community (e.g. boredom, other interests, deciding the community is full of assholes, an incident they write over one megabyte of text complaining about), but the length of participation is typically 12 to 18 months regardless. Anything over that, you're a previous generation. So I wouldn't be disappointed unless they stopped before 12-18 months.
2Epiphany11y
I wonder if it would make a big difference to add email notifications... perhaps the type where you only receive a notification when something over X karma is posted? That would keep users from forgetting about the site entirely, and draw more attention and karma (aka positive reinforcement) to those who post quality things. Hmm, that would also keep older users logging in, which would help combat both trending toward the mean and new users outstripping acculturation capacity.
0NancyLebovitz11y
I think that would bring back only the most marginally interested users, and would be likely to annoy a good many people who'd drifted away. Notification of posts with karma above a chosen threshold might be better. For that matter, a customizable LW where you could choose to see only posts with karma above a threshold might be good. It would be even better if posts could also be selected/deselected by subject, but that sounds like a hard problem.
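(A minimal sketch of the threshold-notification idea discussed above, assuming a hypothetical list of recent posts and a local mail relay; LW exposes no such feed, and all names here are invented for illustration.)

```python
# Sketch of a karma-threshold email digest.  The post feed, the addresses,
# and the local SMTP relay are hypothetical assumptions.
import smtplib
from email.message import EmailMessage

def send_karma_digest(posts, threshold, recipient):
    """Email only those recent posts at or above the reader's karma threshold."""
    picks = [p for p in posts if p["karma"] >= threshold]
    if not picks:
        return  # nothing worth interrupting the reader for
    body = "\n".join(f'{p["karma"]:>4}  {p["title"]}  {p["url"]}' for p in picks)
    msg = EmailMessage()
    msg["From"] = "digest@example.org"
    msg["To"] = recipient
    msg["Subject"] = f"LessWrong digest: {len(picks)} posts above {threshold} karma"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)
```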
2Viliam_Bur11y
Why not both? Speaking for myself, a lot of bad content would make me less likely to post good content. My instincts tell me -- if other people don't bother here with quality, why should I?
0Vaniver11y
I separate those because I think the second is a distraction. It seems to me that the primary, and perhaps only, benefit of reducing bad content is increasing the visibility of good content. It still seems like there are incentives (better posts will yield more karma), and I suspect it matters who the other people who don't bother are. Right now, we have spammers (particularly on the wiki) who don't bother at all with being helpful. Does that make you more likely to post commercial links on the wiki? If everyone you thought was more insightful than you stopped bothering to write posts and comments, then it seems likely that you would wonder what the benefit of putting more effort into LW was. More high-quality posts seem useful as an aspirational incentive.
4Viliam_Bur11y
I am annoyed by both. Not enough to consider leaving this weedy garden yet.
0pleeppleep11y
The possibility still warrants consideration, even if it isn't actively harming the site.
1FeepingCreature11y
I think the idea is that considering the possibility is actively harming the site. Which would be... problematic.
9pleeppleep11y
If we've gotten to the point where we refuse to think because we're afraid of where it'll lead, then this place really is dead.
1Luke_A_Somers11y
That granted, there are conversations it's not worth having.
0pleeppleep11y
Probably, but this isn't one of them.
0Luke_A_Somers11y
I suspect you are correct, and if it doesn't go very very badly indeed, I am very confident of it.
-2Kindly11y
Same here, but I have no clue how to address this problem. I suspect making discussion posts complaining about people complaining about cultural erosion would be the wrong approach.
0Epiphany11y
(nevermind)

I think this site is dying because there's nothing interesting to talk about anymore. Discussion is filled with META, MEETUP, SEQ RERUN, links to boring barely-relevant articles, and idea threads where the highest comment has more votes than the thread itself (i.e. a crappy idea). Main is not much better. Go to archive.org and compare (date chosen randomly, aside from being a while ago). I don't think eternal September is the whole explanation here -- you only need one good user to write a good article.

Discussion is filled with META, MEETUP, SEQ RERUN, links to boring barely-relevant articles

The website structure needs to be changed. "Main" and "Discussion" simply do not reflect the LW content today.

We should have a separate "Forum" (or some other name) category for all the non-article discussion threads like Open Thread, Media Thread, Group Rationality Thread, and stuff like this.

Then, the "Discussion" should be renamed to "Articles" (and possibly "Main" to "Main Articles") to make it obvious what belongs there.

Everything else should be downvoted; optionally with a comment: "This belongs in the Open Thread". (And if the author says they didn't know that the Open Thread exists, there is something seriously wrong... about the structure of the website.)

I feel like I have written this in LW discussions at least a dozen times...

there's nothing interesting to talk about anymore.

I think there are interesting things here. They are just drowned in too many less interesting things.

Let's look at the numbers: 6 articles so far on Dec 9th; 6 articles on Dec 8th; 4 articles on Dec 7th; 11 articles on Dec 6th; 8 arti... (read more)

One issue with the LW/CFAR approach is that the focus is on getting better/more efficient at pursuing your goals, but not on deciding whether you're applying your newfound superpowers to the right goals. (There's a bit of this with efficient altruism, but those giving opportunities are more about moving people up Maslow's hierarchy of needs than about figuring out what to want when you're not at subsistence level.)

Luke's recent post suggests that almost no one here has the prereqs to tackle metaphysics or normative ethics, but that has always seemed like the obvious next topic for rationality-minded people. I was glad when Luke was writing his Desirism sequences back at CSA, but it never got to the point where I had a decent enough model of what normative claims desirism made to be able to evaluate it.

Basically, I think these topics would let us set our sights a little higher than "Help me optimize my computer use" but I think one major hurdle is that it's hard to tackle these topics in individual posts, and people may feel intimidated about starting sequences.

3Eugine_Nier11y
The problem is that there is an unfortunate tendency here, going all the way up to EY, to dismiss philosophy and metaphysics.
1NancyLebovitz11y
It depends -- if the higher-voted comments are expanding on the original post, then I'd say the post was successful because it evoked good-quality thought, assuming that the voters have good judgement. If the higher-voted comments are refuting the original post, then it was probably a bad post.
1Epiphany11y
Do you have a theory as to why there aren't enough good users, or why they are not writing good articles?

One possibility is that the kind of content printing-spoon likes is easy to get wrong, and therefore easy to get voted down for, and therefore the system is set up with the wrong incentives for that kind of content. I'd guess that for most users, the possibility of getting voted down is much more salient than the possibility of getting voted up. Getting voted down represents a form of semi-public humiliation (it's not like reddit, where if you post something lame it gets downvoted and consequently becomes obscure).

The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn't the way things go.

You and Your Research

See this thread for more: http://lesswrong.com/lw/5pf/what_were_losing/

Overall, I suspect that LW could stand to rely less on downvoting in general as a means of influencing user behavior. It seems like meta threads of this type often go something like "there's content X I hate, content Y I hate, and practically no content at all, really!" Well if you want more content, don't disparage the people writing con... (read more)

1Nominull11y
Well, I tried to make a post once, got downvoted into oblivion, and decided not to put myself through that again. So yeah this happens for real, although perhaps in my case it is no big loss.
1[anonymous]11y
See my comment for one possibility.
-1printing-spoon11y
I'm not sure... I think the topics I find most interesting are simply used up (except for a few open questions on TDT or whatever). Also the recent focus on applied rationality / advice / CFAR stuff... this is a subject which seems to invite high numbers of low quality posts. In particular posts containing advice are generally stuffed with obvious generalizations and lack arguments or evidence beyond a simple anecdote. Also, maybe the regular presence of EY's sequences provided a standard for quality and topic that ensured other people's posts were decent (I don't think many people read seq reruns, especially not old users who are more likely to have good ideas).

The Popular Struggle Committee for Salvation of Less Wrong calls for the immediate implementation of the following measures:

1) Suspension of HPMOR posting until the site has been purged. All new users who join during the period of transition will be considered trolls until proven otherwise. Epiphany to be appointed Minister of Acculturation.

2) A comprehensive ban on meta-discussion. Articles and comments in violation of the ban will be flagged as "meta" by the moderators, and replying to them will incur a "meta toll" of -5 karma. A similar "lol toll" shall apply to jokes that aren't funny.

3) All meetups for the next six months to consist of sixty minutes of ideological self-criticism and thirty minutes of weapons training.

4gwern11y
I second these motions... with a vengeance. For is it not said: and and especially:

The destruction of LW culture has already happened. The trigger was EY leaving, and people without EY's philosophical insight stepping in to fill the void by chatting about their unconventional romantic lives, their lifehacks, and their rational approach to toothpaste. If anything, I see things having gotten somewhat better recently, with EY having semi-returned, and with the rise of the hypercontrarian archconservative clique, which might be wrong about everything but at least they want to talk about it and not toothpaste.

7John_Maxwell11y
Anna Salamon, 2009. So this "destruction" was at least semi-planned.
0Epiphany11y
I read that twice, and went to the post you linked to, and am still not seeing why it supports the idea: Maybe you are viewing optimization-related posts as a form of cultural collapse?
0John_Maxwell11y
Nominull seemed to be. I was patterning my use of "destruction" after theirs. I don't see it as destruction myself.
3[anonymous]11y
lulz. Why do I feel identity-feels for that phrase? I should watch out for that, but... That's what I thought a few months ago. Then everything turned inside out and I realized there is no god. What a feeling! Now I see people confidently rationalizing the cultural default, and realize how far we have to go WRT epistemic rationality.
1[anonymous]11y
If EY didn't intend for said "destruction" to happen, he should have chosen a website model more suitable to that end.
0metatroll11y
tl;dr: The following is a non-profit fan-based parody. Less Wrong, the Singularity Institute, and the Centre for Applied Rationality are owned by Hogwarts School, Chancellor Ray Kurzweil, and the Bayesian Conspiracy. Please support the official release. Troll Wrongosophers with Baumeister and Eddington, not Benedict and Evola. Wrongosophical trolling should be based on genuinely superior psychological insights ("Baumeister" for breakthroughs in social psychology such as those summarized in Vohs & Baumeister 2010) and on crackpot science that is nonetheless difficult to debunk ("Eddington" for the fundamental theory described in Durham 2006). Starting from reaction and religion, as many trolls still do, both (1) promotes unpleasant ideas like God and conservatism and (2) fails to connect with the pragmatic and progressive sensibility of 21st-century culture. Once young trollosophers are equipped with some of the best newthink and pseudoscience, then let them dominate the subversive paradigm. I'll bet they get farther than the other kind.
[anonymous]11y

Here's two things we desperately need:

  1. An authoritative textbook-style index/survey-article on everything in LW. We have been generating lots of really cool intellectual work, but without a prominently placed, complete, hierarchical, and well-updated overview of "here's the state of what we know", we aren't accumulating knowledge. This is a big project and I don't know how I could make it happen, besides pushing the idea, which is famously ineffective.

  2. LW needs a king. This idea is bound to be unpopular, but how awesome would it be to have someone whose paid job it was to make LW into an awesome and effective community. I imagine things like getting proper studies done of how site layout/design should work to make LW easy to use and sticky to the right kind of people (currently it sucks), contacting, coordinating, and encouraging meetup organizers individually (no one does this right now and lw-organizers has little activity), thinking seriously and strategically about problems like the OP's, and leading big projects like idea #1. Obviously this person would have CEO-level authority.

One problem is that our really high-power agent types who are super dedicated to the community (i... (read more)

LW needs a king.

LW as a place to test applied moldbuggery, right?

Communities and democratic methods suck at doing the kind of strategic, centralized, coherent decision making that we really need.

Kings also suck at it, on average. Of course, if we are lucky and find a good king... the only problem is that king selection is the kind of strategic decision humans suck at.

8[anonymous]11y
They should be self-selected, then we don't have to rely on the community at large. There's this wonderful idea called "Do-ocracy" where everyone understands that the people actually willing to do things get all the say as to what gets done. This is where benevolent dictators like Linus Torvalds get their power. Our democratic training has taught us to think this idea is a recipe for totalitarian disaster. The thing is, even if the democratic memplex were right in its injunction against authority, a country and an internet community are entirely different situations. In a country, if you had king-power, you would have military and law power as well, and could physically coerce people to do what you want. There is enough money and power at stake that most of the people who want the job are in it for the money and power, not the public good. Thus measures like heritable power (at least you're not selecting for power-hunger) and democracy (now we're theoretically selecting for public support). On the other hand, in a small artificial community like a meetup, a hackerspace, or LessWrong, there is no military to control, the banhammer is much less power than the noose or dungeon, and there is barely anything to gain by embezzling taxes (as a meetup organizer, I could embezzle about $30 a month...). At worst, a corrupt monarch could ban all the good people and destroy the community, but the incentive to do damage to the community is roughly "for the lulz", and lulz is much cheaper elsewhere. The amount of damage is highly limited by the fact that, in the absence of military power, the do-ocrat's power over people is derived from respect, which would rapidly fall off if they did dumb things. On the other hand, scope insensitivity makes the apparent do-gooder motivation just as high. So in a community like this, most of the people willing to do the job will be those motivated to do public good and those agenty enough to do it, so self-selection (do-ocracy) works an
6prase11y
I can't speak for your democratic training, but my democratic training has absolutely no problem with acknowledging merits, giving active people trust proportional to their achievements, and letting them decide what more should be done. It has become somewhat fashionable here, in the Moldbuggian vein, to blame community failures on democracy. But what particular democratic mechanisms have caused the lack of strategic decisions on LW? Which kind of decisions? I don't see much democracy here - I don't recall participating in an election, for example, or voting on a proposed policy, or seeing a heated political debate which prevented a beneficial resolution from being implemented. I recall the recent implementation of the karma penalty feature, which a lot of LWers were unhappy about but which was put in force nevertheless in a quite autocratic manner. So perhaps the lack of strategic decisions is caused by the fact that

* there just aren't people willing to even propose what should be done
* nobody has any reasonable idea what strategic decision should be made (it is one thing to say what kind of decisions should be made - e.g. "we should choose an efficient site design" - but a rather different thing to make the decision in detail - e.g. "the front page should have a huge violet picture of a pony on it")
* people aren't willing to work for free

None of those has much to do with democracy. I am pretty sure that if you volunteer to work on whichever of your suggestions (contacting meetup organisers, improving the site design...), nobody would seriously object and you would easily get some official status on LW (moderator style). To do anything from the examples you have mentioned you wouldn't need dictatorial powers.
4Nominull11y
The power of the banhammer is roughly proportional to the power of the dungeon. If it seems less threatening, it's only because an online community is generally less important to people's lives than society at large. A bad king can absolutely destroy an online community. Banning all the good people is actually one of the better things a bad king can do, because it can spark an organized exodus, which is just inconvenient. But by adding restrictions and terrorizing the community with the threat of bans, a bad king can make the good people self-deport. And then the community can't be revived elsewhere.
9[anonymous]11y
I admit, I have seen braindead moderators tear a community apart (/r/anarchism, for one). I have just as often seen lack of moderation prevent a community from becoming what it could (4chan, though I'm unsure whether 4chan is glorious or a cesspool). And I have seen strong moderation keep a community together. The thing is, death by incompetent dictator is much more salient to our imaginations than death by slow entropy and September effects. Incompetent dictators have a face, which makes us take them much more seriously than an unbiased assessment of the threats would warrant.
1Vaniver11y
There's a big difference between exile and prison, and the power of exile depends on the desirability of the place in question.

LW needs a king.

Why “king” rather than “monarch”? Couldn't a queen do that?

Yes, and a queen could move more than one space in a turn, too.

For obvious decision theoretic reasons, a king is necessary. However, the king does not have to be a man.

8Luke_A_Somers11y
Maybe "Princess" would be best, considering everything.
2[anonymous]11y
Hmmm... no. It definitely has to be a word that implies current authority, not future authority.
3Luke_A_Somers11y
There is a particular princess in the local memespace with nigh-absolute current authority. Edited to clarify: by 'local memespace' I mean the part of the global memespace that is in use locally, not that there's something we have going that isn't known more broadly.
2[anonymous]11y
I am getting this "whoosh" feeling but I still can't see it.
8Luke_A_Somers11y
If you image-search 'obey princess', you will get a hint. Note, the result is... an alicorn. But more seriously (still not all that seriously), there would be colossal PR and communication disadvantages to naming a king that would be mostly dodged by naming a princess. In particular, people would probably overinterpret king, but file princess under 'wacky'. This would not merely dodge, but could help against, the 'cold and calculating' vibe some people get.
2Kindly11y
Luke_A_Somers is referring to Princess Dumbledore, from Harry Potter and the Methods of Rationality, chapter 86.
0Luke_A_Somers11y
I'd love to read that chapter!
2Zack_M_Davis11y
(Almost certainly a reference to the animated series My Little Pony: Friendship Is Magic, in which Princess Celestia rules the land of Equestria.)
0A1987dM11y
Let's just say BDFL (Benevolent Dictator For Life)...
2Luke_A_Somers11y
Insufficiently wacky - would invite accusations of authoritarianism/absolutism from the clue-impaired.
4[anonymous]11y
"CEO" could work. I just like the word "king". a queen would do just as well.
3pleeppleep11y
Now you're just talking crazy.
0DanArmak11y
The queen's duty is to secure the royal succession!
9Eugine_Nier11y
The standard term is Benevolent Dictator for Life, and we already have one. What you're asking for strikes me as more of a governor-general.
[anonymous]11y

Our benevolent dictator isn't doing much dictatoring. If I understand correctly that it's EY, he has a lot more hats to wear, and doesn't have the time to do LW-managing full time.

Is he willing to improve LW, but not able? Then he is not a dictator.
Is he able, but not willing? Then he is not benevolent.
Is he both willing and able? Then whence cometh suck?
Is he neither willing nor able? Then why call him God?

As with god, if we observe a lack of leadership, it is irrelevant whether we nominally have a god-emperor or not. The solution is always the same: build a new one that will actually do the job we want done.

Okay, that? That was one of the most awesome predicates of which I've ever been a subject.

-3Epiphany11y
You're defending yourself against accusations of being a phyg leader over there, and over here you're enjoying a comment that implies that either the commenter, or the people the commenter is addressing, perceive you as a god? And not only that, but this might even imply that you endorse the solution that is "always the same" of "building a new one (god-emperor)". Have you forgotten Luke's efforts to fight the perceptions of SI's arrogance? That you appear to be encouraging a comment that uses the word god to refer to you in any way, directly or indirectly, is pretty disheartening.
5Eliezer Yudkowsky11y
I tend to see a fairly sharp distinction between negative aspects of phyg-leadership and the parts that seem like harmless fun, like having my own volcano island with a huge medieval castle, and sitting on a throne wearing a cape saying in dark tones, "IT IS NOT FOR YOU TO QUESTION MY FUN, MORTAL." Ceteris paribus, I'd prefer that working environment if offered.
0Epiphany11y
And how are people supposed to make the distinction between your fun and signs of pathological narcissism? You and I both know the world is full of irrationality, and that this place is public. You've endured the ravages of the hatchet job and Rationalwiki's annoying behaviors. This comment could easily be interpreted by them as evidence that you really do fancy yourself a false prophet. What's more is that I (as in someone who is not a heartless and self-interested reporter, who thinks you're brilliant, who appreciates you, who is not some completely confused person with no serious interest in rationality) am now thinking: How do I make the distinction between a guy who has an "arrogance problem" and has fun encouraging comments that imply that people think of him as a god vs. a guy with a serious issue?

Try working in system administration for a while. Some people will think you are a god; some people will think you are a naughty child who wants to be seen as a god; and some people will think you are a sweeper. Mostly you will feel like a sweeper ... except occasionally when you save the world from sin, death, and hell.

2Epiphany11y
I feel the same way as a web developer. One day I'm being told I'm a genius for suggesting that a technical problem might be solved by changing a port number. The next day, I'm writing a script to compensate for the incompetent failures of a certain vendor. When people ask me for help, they assume I can fix anything. When they give me a project, they assume they know better how to do it.
2ChristianKl11y
The only way to decide whether someone has a serious issue is to read a bunch from them and then see which patterns you find.
2wedrifid11y
I don't see this as a particular problem in this instance. The responses are, if anything, an indication that he isn't taking himself too seriously. The more pathologically narcissistic types tend to be more somber about their power and image. No, if there were a problem here it would be if the joke was in poor taste - in particular, if there were those who had been given the impression that Eliezer's power or narcissism really was corrupting his thinking. If he had begun to use his power arbitrarily on his own whim, or if his arrogance had left him incapable of receiving feedback or perceiving the consequences his actions have on others or even himself. Basically, jokes about how arrogant and narcissistic one is only work when people don't perceive you as actually having problems in that regard. If you really do have real arrogance problems, then joking that you have them while completely not acknowledging the problem makes you look grossly out of touch and socially awkward. For my part, however, I don't have any direct problem with Eliezer appreciating this kind of reasoning. It does strike me as a tad naive of him, and I do agree that it is the kind of thing that makes Luke's job harder. Just... as far as PR missteps made by Eliezer go, this seems so utterly trivial as to be barely worth mentioning. The way I make such distinctions is to basically ignore 'superficial arrogance'. I look at the real symptoms - the ones that matter and have potential direct consequences. I look at their ability to comprehend the words of others, particularly those others without the power to 'force' them to update. I look at how much care they take in exercising whatever power they do have. I look at how confident they are in their beliefs and compare that to how often those beliefs are correct.
0[anonymous]11y
srsly, brah. I think you misunderstood me. I was drawing an analogy to Epicurus on this issue because the structure of the situation is the same, not because anyone perceives (our glorious leader) EY as a god. I bet he does endorse it. His life's work is all about building a new god to replace the negligent or nonexistent one that let the world go to shit. I got the idea from him.
2Epiphany11y
My response was more about what interpretations are possible than what interpretation I took. Okay. There's a peculiar habit in this place where people say things that can easily be interpreted as something that will draw persecution. Then I point it out, and nobody cares. Okay. It probably seems kind of stupid that I failed to realize that. Is there a post that I should read?
2[anonymous]11y
This is concerning. My intuitions suggest that it's not a big deal. I infer that you think it's a big deal. Someone is miscalibrated. Do you have a history with persecution that makes you more attuned to it? I am blissfully ignorant. I don't know if there's an explicit post about it. I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I've forgotten.
1Epiphany11y
I really like this nice, clear, direct observation. Yes, but more relevantly, humanity has a history with persecution - lots of intelligent people and people who wanted to change the world, from Socrates to Gandhi, have been persecuted. Here Eliezer is in a world full of Christians who believe that the dreaded Satan is going to reincarnate soon, claim to be a God, promise to solve all the problems, and take over earth. Religious people have been known to become violent for religious reasons. Surely building an incarnation of Satan would, if that were their interpretation of it, qualify as more or less the ultimate reason to launch a religious war. These Christians outnumber Eliezer by a lot. And Eliezer, according to you, is talking about building WHAT? My take on the "build a God-like AI" idea is that it is pretty crazy. I might like this idea less than the Christians probably do, seeing as how I don't have any sense that Jesus is going to come back and reconstruct us after it does its optimization... I went out looking for myself and I just watched the bloggingheads video (6:42) where Robert Wright says to Eliezer "It sounds like what you're saying is we need to build a God" and Eliezer is like "Why don't we call it a very powerful optimizing agent?" and grins like he's just fooled someone, and Robert Wright thinks and he's like "Why don't we call that a euphemism for God?" which destroys Eliezer's grin. If Eliezer's intentions are to build a God, then he's far less risk-averse than the type of person who would simply try to avoid being burned at the stake. In that case the problem isn't that he makes himself look bad...
7wedrifid11y
Like he's just fooled someone? I see him talking like he's patiently humoring an ignorant child who is struggling to distinguish between "Any person who gives presents at Christmas time" and "The literal freaking Santa Claus, complete with magical flying reindeer". He isn't acting like he has 'fooled' anyone or acting in any way 'sneaky'. While I wouldn't have been grinning previously, whatever my expression had been, it would change in response to that question in the direction of irritation and impatience. The answer to "Why don't we call that a euphemism for God?" is "Because that'd be wrong and totally muddled thinking". When your mission is to create an actual very powerful optimization agent, and that---and not gods---is actually what you spend your time researching, then a very powerful optimization agent isn't a 'euphemism' for anything. It's the actual core goal. Maybe, at a stretch, "God" can be used as a euphemism for "very powerful optimizing agent", but never the reverse. I'm not commenting here on the question of whether there is a legitimate PR concern regarding people pattern-matching to religious themes having dire, hysterical and murderous reactions. Let's even assume that kind of PR concern is legitimate for the purpose of this comment. Even then there is a distinct difference between "failure to successfully fool people" and "failure to educate fools". It would be the latter task that Eliezer has failed at here, and the former charge would be invalid. (I felt the paragraph I quoted was unfair to Eliezer with respect to blurring that distinction.)
2Epiphany11y
I don't think that an AI that goes FOOM would be exactly the same as any of the "Gods" humanity has been envisioning, and it may not even resemble such a God (especially because, if it were a success, it would theoretically not behave in self-contradictory ways like making sinful people, knowing exactly what they're going to do, making them do just that, telling them not to act like what they are and then punishing them for behaving the way it designed them to). I don't see a reason to believe that it is possible for any intellect to be omniscient, omnipotent or perfect. That includes an AI. These, to me, would be the main differences. Robert Wright appears to be aware of this, as his specific wording was "It seems to me that in some sense what you're saying is that we need to build a God." If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said "in some sense", which means that his question is about something deeper than choosing a word. The wording he's using is asking something more like "Do you think we should build something similar to a God?" The way that I interpret this question is not "What do we call this thing?" but more "You think we should build a WHAT?" with the connotations of "What are you thinking?", because the salient thing is that building something even remotely similar to a God would be very, very dangerous. The reason I interpreted it this way is partly because, instead of interpreting everything I hear literally, I will often interpret wording based on what's salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked "Are you about to commit murder?" I would NOT accept "Why don't we call it knife relocation?" as an acceptable answer. Afterward, Robert Wright says that Eliezer is being euphemistic. This perception that Eliezer's answer was an attempt to substitute n
3wedrifid11y
If forced to use that term and answer the question as you ask it, with a "Yes" or "No", then the correct answer would be "No". He is not trying to create a God; he has done years of work working out what he is trying to create, and it is completely different to a God in nearly all features except "very powerful". If you insist on that vocabulary you're going to get "No, I don't" as an answer. That the artificial intelligence Eliezer would want to create seems to Wright (and perhaps yourself) like it should be described as, considered a euphemism for, or reasoned about as if it is God is a feature of Wright's lack of domain knowledge. There is no disingenuity here. Eliezer can honestly say "We should create a very powerful (and carefully designed) optimizing agent" but he cannot honestly say "We should create a God". (You may begin to understand some of the reasons why there is such a difference when you start considering questions like "Can it be controlled?". Or at least when you start considering the answers to the same.) So Eliezer gave Wright the chance to get the answer he wanted ("Hell yes, I want to make a very powerful optimising agent!") rather than the answer the question you suggest would have given him ("Hell no! Don't create a God! That entails making at least two of the fundamental and critical ethical and practical blunders in FAI design that you probably aren't able to comprehend yet!") I reject the analogy. Eliezer's answer isn't like the knife relocation answer. (If anything the connotations are the reverse: more transparency and candidness rather than less.) It could be that there really is an overwhelming difference in crystallized intelligence between Eliezer and Robert. The question---at least relative to Eliezer's standards---was moronic. Or at least had connotations of ignorance of salient features of the landscape. There may be a social skills related faux pas here---and it is one where it is usually socially appropriate to say wrong thin
2[anonymous]11y
Thank you. I will try to do more of that. Interesting. Religious people seem a lot less scary to me than this. My impression is that the teeth have been taken out of traditional Christianity. There are a few Christian terrorists left in North America, but they seem like holdouts raging bitterly against the death of their religion. They are still in the majority in some places, though, and can persecute people there. I don't think that the remains of theistic Christianity could reach an effective military/propaganda arm all the way to Berkeley even if they did somehow misinterpret FAI as an assault on God. Nontheistic Christianity, which is the ruling religion right now, could flex enough military might to shut down SI, but I can't think of any way to make them care. I live in Vancouver, where as far as I can tell, most people are either non-religious or very tolerant. This may affect my perceptions. This is a good reaction. It is good to take seriously the threat that an AI could pose. However, the point of Friendly AI is to prevent all that and make sure that if it happens, it is something we would want.
3Epiphany11y
:) You can be as direct as you want to with me. (Normal smilie to prevent the tiny sad moments.) Okay, good point. I agree that religion is losing ground. However, I've witnessed some pretty creepy stuff coming out of the churches. Some of them are saying the end is near and doing things like having events to educate about it. Now, that experience was one that I had in a particular location which happens to be very religious. I'm not sure that it was representative of what the churches are up to in general. I admit ignorance when it comes to what average churches are doing. But if there's enough end-times kindling being thrown into the pit here, people who were previously losing faith may flare up into zealous Christians with the right spark. Trying to build what might be interpreted as an Antichrist would be quite the spark. The imminent arrival of an Antichrist may be seen as a fulfillment of the end times prophecies and be seen as a sign that the Christian religion really is true after all. A lot is at stake here in the mind of the Christian. If it's not the end of the world, opposing a machine "God" is still going to look like a good idea - it's dangerous. If it is the end of the world, they'd better get their s--- in gear and become all super-religious and go to battle against Satan because judgment day is coming and if they don't, they're going to be condemned. Being grateful to God and following a bunch of rules is pretty hard, especially when you can't actually SEE the God in question. How people are responding to the mundane religious stuff shouldn't be seen as a sign of how they'll react when something exceptional happens. Being terrified out of your mind that someone is building a super-intelligent mind is easy. This takes no effort at all. Heck, at least half of LessWrong would probably be terrified in this case. Being extra terrified because of end times prophecies doesn't take any thought or effort. And fear will kill their minds, perhaps making rel
0A1987dM11y
BTW, where I am (i.e. among twentysomething university students in central Italy) atheists take the piss out of believers waaaaay more often than the other way round.
0Nornagest11y
I'm not sure I've heard any detailed analysis of the Friendly AI project specifically in those terms -- at least not any that I felt was worth my time to read -- but it's a common trope of commentary on Singularitarianism in general. No less mainstream a work than Deus Ex, for example, quotes Voltaire's famous "if God did not exist, it would be necessary to create him" in one of its endings -- which revolves around granting a friendly (but probably not Friendly) AI control over the world's computer networks.
0Jayson_Virissimo11y
ROT-13: Vagrerfgvatyl, va gur raqvat Abeantrfg ersref gb, Uryvbf (na NV) pubbfrf gb hfr W.P. Qragba (gur cebgntbavfg jub fgvyy unf zbfgyl-uhzna cersreraprf) nf vachg sbe n PRI-yvxr cebprff orsber sbbzvat naq znxvat vgfrys (gur zretrq NV naq anab-nhtzragrq uhzna) cuvybfbcure-xvat bs gur jbeyq va beqre gb orggre shysvyy vgf bevtvany checbfr.
0bogus11y
I have to agree with Eliezer here: this is a terrible standard for evaluating phygishness. Simply put, enjoying that kind of comment does not correlate at all with the actually harmful features of phygish organizations, social clubs, and the like. There are plenty of Internet projects that refer to their most prominent leaders with such titles as God-King, "benevolent dictator" and the like; it has no implication at all.
0Epiphany11y
You have more faith than I do that it will not be intentionally or unintentionally misinterpreted. Also, I am interpreting that comment within the context of other things: the "arrogance problem" thread, the b - - - - - - k, Eliezer's dating profile, etc. What's not clear is whether you or I are being more realistic about how people are likely to interpret it - not only in a superficial context (like some hatchet-jobbing reporter who knows only some LW gossip), but with no context, or within the context of other things with a similar theme.
2mrglwrf11y
Why would you believe that something is always the solution when you already have evidence that it doesn't always work?
2[anonymous]11y
Let's go to the object level: in the case of God, the fact that god is doing nothing is not evidence that Friendly AI won't work. In the case of EY, the supposed benevolent dictator, the fact that he is not doing any benevolent dictatoring is explained by the fact that he has many other things that are more important. That prevents us from learning anything about the general effectiveness of benevolent dictators, and we have to rely on the prior belief that it works quite well.
2mrglwrf11y
There are alternatives to monarchy, and an example of a disappointing monarch should suggest that alternatives might be worth considering, or at the very least that appointing a monarch isn't invariably the answer. That was my only point.
-4Epiphany11y
I don't think a CEO-level monarch is necessary, though I don't know what job title a community "gardener" would map to. Do you think a female web developer who obviously cares a lot about LW and can implement solutions would be a good choice? This doesn't look like it's very likely to happen though, considering that they're changing focus: "For 12 years we've largely focused on movement-building through the Singularity Summit, Less Wrong, and other programs... But in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research." Then again, maybe CFAR will want to do something.
1Curiouskid11y
I think you meant to use a different hyperlink?
2Epiphany11y
It has been fixed. Thanks, Curiouskid!
0[anonymous]11y
In general, the kinds of people that (strongly) hint that they should have power should...not...ever....have...power.
0[anonymous]11y
Female doesn't matter, web development is good for being able to actually write what needs to be written. Caring is really good. The most important factor though is willingness to be audacious, grab power, and make things happen for the better. Whether or not we need someone with CEO-power is uninteresting. I think such a person having more power is good. If you're talking about yourself, go for it. Get a foot in the code, make the front page better, be audacious. Make this place awesome. I've said before in the generic, but in this case we can be specific: If you declare yourself king, I'll kneel. (good luck)

If you're talking about yourself, go for it. Get a foot in the code, make the front page better, be audacious. Make this place awesome.

I'm opposed to appointing her as any sort of actual-power-having-person. Epiphany is a relative newcomer who makes a lot of missteps.

4[anonymous]11y
I agree that appointing her would be a bad idea. I see no problem with encouraging people (in this case, her) to become the kind of person we should appoint.
1wedrifid11y
The personal antipathy there has been distinctly evident to any onlookers who are mildly curious about how status and power tends to influence human behavior and thought.
2Alicorn11y
I think anyone with any noticeable antipathy between them and any regular user should not have unilateral policymaking power, except Eliezer if applicable because he was here first. (This rules me out too. I have mod power, but not mod initiative - I cannot make policy.)
0wedrifid11y
I agree and note that it is even more important that people with personal conflicts don't have the power (or, preferably, voluntarily waive the power) to actively take specific actions against their personal enemies. (Mind you, the parent also seems somewhat out of place in the context and very nearly comical given the actual history of power abuses on this site.)
0Epiphany11y
Well I do have the audacity. I would love to do that, but I've just gotten a volunteer offer for a much larger project I had an idea for. I had been hoping to do a few smaller projects on LW in the meantime, while I was putting some things together to launch my larger projects, and the timing seems to have worked out such that I will be doing the small projects while doing the big projects. In other words, my free time is projected to become super scarce. However, if a job offer were presented to me from LessWrong / CFAR I would seriously consider it. I don't believe in this. I am with Eliezer on sentiments like the following: In Two More Things to Unlearn from School he warns his readers that "It may be dangerous to present people with a giant mass of authoritative knowledge, especially if it is actually true. It may damage their skepticism." In Cached Thoughts he tells you to question what HE says. "Now that you've read this blog post, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you'll think, "Cached thoughts." My belief is now there in your mind, waiting to complete the pattern. But is it true? Don't let your mind complete the pattern! Think!" But thank you. (:
6[anonymous]11y
grumble grumble. Like I said, everyone who could is doing something else. Me too. I don't think they'll take the initiative on this. Maybe you approach them? I don't see how those relate. Thank you for giving a shit about LW, and trying to do something good. I see that you're actively engaging in the discussions in this thread and that's good. So thanks.
3Epiphany11y
Yeah. Well maybe a few of us will throw a few things at it and that'll keep it going... I mentioned a couple times that I'm dying to have online rationality training materials and that I want them badly enough I am half ready to run off and make them myself. I said something like "I'd consider doing this for free or giving you a good deal on freelance depending on project size". Nobody responded. Simply put: I'm not the type that wants obedience. I'm the type that wants people to think for themselves. Aww. I think that's the first time I've felt appreciated for addressing endless September. (: feels warm and fuzzy
0[anonymous]11y
Please allow me to change your mind. I am not the type who likes obedience either. I agree that thinking for ourselves is good, and that we should encourage as much of it as possible. However, this does not negate the usefulness of authority.

Argument 1: Life is big. Bigger than the human mind can reasonably handle. I only have so much attention to distribute around. Say I'm a meetup participant. I could devote some attention to monitoring LW, the mailing list, etc. until a meetup was posted, then overcome the activation energy to actually go. Or, the meetup organizer could mail me and say "Hi Nyan, come to Xday's meetup", and then I just have to go. I don't have to spend as much attention in the second case, so I have more to spend on thinking-for-myself that matters, like figuring out whether the mainstream assumptions about glass are correct. So in that way, having someone to tell me what to think and do reduces the effort I have to spend on those things, and makes me more effective at the stuff I really care about. So I actually prefer it.

Argument 2: Even if I had infinite capacity for thinking for myself and going my own way, sometimes it just isn't the right tool for the job. Thinking for myself doesn't let me coordinate with other people, or fit into larger projects, or affect how LW works, or many other things. If I instead listen to some central coordinator, those things become easy. So even if I'm a big fan of self-sufficiency and skepticism, I appreciate authority where available.

Does this make sense? Perhaps we should continue this conversation somewhere more private... /sleaze PM me if you want to continue this thread.
0Epiphany11y
Well that is interesting and unexpected. This seems to be more of a matter of notification strategies - one where you have to check a "calendar" and one where the "calendar" comes to you. I am pattern-matching the concept "reminder" here. It seems to me that reminders, although important and possibly completely necessary for running a functional group, would be more along the lines of a behavioral detail as opposed to a fundamental leadership quality. I don't know why you're likening this to obedience. We do not have infinite capacity for critical thinking. True. I don't call trusting other people's opinions obedience. I call it trust. That is rare for me. Very rare for anything important. Next door to trust is what I do when I'm short on time or don't have the energy: I half-ass it. I grab someone's opinion, go "Meh, 70% chance they're right?" and slap it in. I don't call that obedience, either. I call it being overwhelmingly busy. Organizing trivial details is something I call organizing. I don't call it obedience. When I think of obedience I think of that damned nuisance demand that punishes me for being right. This is not because I am constantly right - I'm wrong often enough. I have observed, though, that some people are more interested in power than in wielding it meaningfully. They don't listen, and they use power as a way to avoid updating (leading them to be wrong frequently). They demand this thing "obedience", and that seems to be a warning that they are about to act as if might makes right. My idea of leadership looks like this:

* If you want something new to happen, do it first. When everyone else sees that you haven't been reduced to a pile of human rubble by the new experience, they'll decide the "guinea pig" has tested it well enough that they're willing to try it, too.
* If you really want something to get done, do it your damn self. Don't wait around for someone else to do it, nag others, etc.
* If you want others to behave, behave well first. A
2[anonymous]11y
We seem to have different connotations for "obedience", and might be talking about slightly different concepts. Your observations about how most people use power, and about the bad kind of obedience, are spot-on. The topic came up because of the "I'd kneel to anyone who declared themselves king" thing. I don't think such a behaviour pattern has to lead to bad power-abusing obedience and submission. I think it's just a really strategically useful thing to support someone who is going to act as the group-agency. You seem to agree on the important stuff and we're just using different words. Case closed? lol what? Either you or I have utterly misunderstood something, because I'm utterly confused. I made a mock-sleazy joke about the goddam troll toll, and suggested that we wouldn't have to pay it if we discussed by PM instead. And then suddenly this romantic thing. OhgodwhathaveIdone. That's good. :)
2Epiphany11y
Yeah, I think the main difference may be that I am very wary of power abuse, so I avoid using terms like "obedience" and "kneeling" and "king" and choose other terms that imply a situation where power is balanced. Sorry, I think I must have misread that. I've been having problems sleeping lately. If you want to talk in PM to avoid the troll toll, go ahead. Well, not anymore. laughs at self

I'm disappointed in some of you. Am I the only person who prefers feeling elitist and hipstery to spreading rationality?

In all seriousness, though, I don't see why this is getting downvoted. Eternal September probably isn't our biggest issue, but the massive increase in users is likely to cause problems, and those problems should be addressed. I personally don't like the idea of answering the horde of newbies with restrictions based on seniority or karma. That's not really fair and can select for posters who have used up their best ideas while shutting out new viewpoints. I much prefer the calls for restrictions based on merit and understanding, like the rationality quiz proposed below, or attempts to enlighten new users or even older users who have forgotten some of the better memes here. I also like the idea of a moderator of some kind, but my anti-authoritarian tendencies make me wary of allotting that person too much power, as they are assuredly biased and will have a severely limited ability to control all the content here, which will generate unfairness and turn some people off.

I doubt that endless September is the main problem here, but I think it's pretty clear that this sit... (read more)

-3Epiphany11y
What would that be in your opinion? Thank you pleeppleep for bringing this up. I am especially curious about why this thread has been hovering between -3 and 1 karma when the majority of people are concerned about this, and have chosen a solution for at least one problem. If you get any theories, please let me know. People have theorized that the users who might post these discussions are too intimidated to post. Do you have additional theories, or do you think this is the problem, too?

It is one of the best things that's happened to me, too. I feel strongly about protecting it. How would you describe the problem? What would you suggest for ways to assess it?

What do you mean by that exactly? (No, I will not bite your head off about elitism. I have strong feelings about the specific type of elitism that means abusing others with the excuse that one is "better than" them, but I am curious to hear about any and all other varieties.)

Wow. That's inspirational. Okay, I think I know what "elitist optimism" means now. I don't agree with the goal of building gods (an awesome idea but super dangerous), but I want to quote this in places. I will need to find places to quote it in. Upvote. (:
7pleeppleep11y
I'd say our biggest issue lately is lack of direction. The classic topics are getting kinda old now, and we don't really seem to be able to commit to any replacements. Anything in the sequences is pretty firmly established so nobody talks much about them anymore, and without them we kinda drift to things like the "rational" way to brush your teeth. If the site starts to feel watered down, I don't think it's because of new users, but because of shallow topics. Endless September is probably the biggest issue drawing us towards the mainstream. I'm not really sure what the cause for this is, but I'd say that the above theory or general apathy on the part of some of the better contributors are the most likely. Like I said before, the site's starting to feel watered down. It seems like the fire that drew us here is beginning to die down. It's probably just an effect of time letting the ideas settle in, but I still think we should be able to counter the problem if we're all we're cracked up to be. I think it's really good that Eliezer is writing a new sequence, but I don't think he can support the community's ambition all on his own anymore. We need something new. Something that gets us at least as excited as the old sequences. Something that gets us back in the mood to take on the universe, blind idiot god and all. I think that a lot of us just sort of settled back into our mundane lives once the high from thinking about conquering the stars wore off. I think we should find a way to feel as strong as we did once we realized how much of man's mind is malfunctioning and how powerful we would become if we could get past that. I really don't know if we can recapture that spirit, but if it's possible, then it shouldn't be harder than figuring out FAI.
  1. Raise the karma threshold for various actions.
  2. Split up the various SEQ RERUN, META, MEETUP into "sub-LWs" so that those who are not interested do not need to see it.
  3. Likewise, split up topics: applied rationality, FAI, and perhaps a few others. There can still be an overview page for those who want to see everything.
  4. Perhaps this is offtopic, but add an email-notification mechanism for the inbox. This would reduce the need to keep coming back to look for responses, and so reduce the annoyance level.
9devas11y
I agree strongly with #2, 3 and 4. Particularly 2, since the absence of category divisions makes all discussion harder to browse... at least for me.

As was noted previously: the community is probably doomed. I suspect all communities are - they have a life cycle.

Even if a community has the same content, the people within it change with time.

The essential work on the subject is Clay Shirky's A Group Is Its Own Worst Enemy. It says that at some point it's time for a wizard smackdown, but it's not clear this helps - the group also goes from "let's see what happens" development to a retconned fundamentalism, where the understood implicit constitution is enforced. This can lead to problems if people have different ideas about what the understood implicit constitution actually was.

I also think this stuff is constant because Mark Dery's Flame Wars discussed the social structure of online groups (Usenet, BBSes) in detail in 1994, and described the Internet pretty much as it is now and has been since the 1980s.

tl;dr people are a problem.

"LessWrong has lost 52% of it's giftedness since March of 2009" is an incredibly sensationalist way of describing a mere 7-point average IQ drop. Especially if the average is dropping due to new users, because then the "giftedness" isn't actually being lost.

8gwern11y
Some absolute figures:

    R> lw2009 <- read.csv("2009.csv"); lw2011 <- read.csv("2011.csv"); lw2012 <- read.csv("2012.csv")
    R> sum(as.integer(as.character(lw2009$IQ)) > 140, na.rm=TRUE)
    [1] 31
    R> sum(as.integer(as.character(lw2011$IQ)) > 140, na.rm=TRUE)
    [1] 131
    R> sum(as.integer(as.character(lw2012$IQ)) > 140, na.rm=TRUE)
    [1] 120
    R> sum(as.integer(as.character(lw2009$IQ)) > 150, na.rm=TRUE)
    [1] 20
    R> sum(as.integer(as.character(lw2011$IQ)) > 150, na.rm=TRUE)
    [1] 53
    R> sum(as.integer(as.character(lw2012$IQ)) > 150, na.rm=TRUE)
    [1] 42
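Since the surveys had different numbers of respondents, one might also want these counts as proportions. A quick sketch of that (assuming the same lw2009/lw2011/lw2012 data frames loaded above; the helper name is made up):

    # Sketch only: fraction of IQ respondents above a cutoff, per survey.
    prop_over <- function(d, cutoff) {
      iq <- as.integer(as.character(d$IQ))   # same coercion as above
      mean(iq > cutoff, na.rm = TRUE)
    }
    sapply(list(lw2009 = lw2009, lw2011 = lw2011, lw2012 = lw2012),
           prop_over, cutoff = 140)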
7Nominull11y
Well, I agree, but "mere" probably isn't a sensationalist enough way to describe a 7 point drop in IQ.
4Kindly11y
Okay, I agree, maybe that was pushing it a little.
2A1987dM11y
Honestly, so long as the drop is due to lower-IQ people arriving rather than higher-IQ people leaving, I can't see why it's such a big deal -- especially if the “new” people mostly just lurk. Now, if the average IQ of only the people with > 100 karma in the last 30 days was also dropping with time...
-8Epiphany11y
[-][anonymous]11y70

I think we could use more intellectual productivity. I think we already have the capacity for a lot more. I think that would do a lot against any problems we might have. Obviously I am aware of the futility of the vague "we" in this paragraph, so I'll talk about what I could do but don't.

I have a lot of ideas to write up. I want to write something on "The improper use of empathy", something about "leading and following", something about social awkwardness from the inside. I wrote an article about fermi estimation that I've nev... (read more)

7Epiphany11y
I don't feel inadequate but I do feel likely to get jumped all over for mistakes. I've realized that you really need to go over things with a fine-toothed comb, and that there are countless cultural peculiarities that are, for me, unexpected. I've decided that the way I will feel comfortable posting here is to carefully word my point, make sure that point is obvious to the reader, identify and mentally outline any other claims in the piece, make sure every part is supported, and then (until I get to know the culture better) ask someone to check it out for spots that will be misunderstood.

That has resulted in me doing a lot of research. So now my main bottleneck is that I feel like posting something requires doing a lot of research. This is well and good IMO, but it means I won't post anywhere near as much simply because it takes a lot of time.

I've wondered if it would do us good to form a writer's group within LW where people can find out what topics everyone else is interested in writing about (which would allow them to co-author, cutting the work in half), see whether there are volunteers to do research for posts, and get a "second pair of eyes" to detect any karma-destroying mistakes in the writings before they're posted. A group like this would probably result in more writing.
4[anonymous]11y
That's a really good idea. Let me know when you've organized something.
0Epiphany11y
(: I do not have time to organize this currently. I'm not even sure I will have time to post on LessWrong. I have a lot of irons in the fire. :/ I would sure love to run a LW writer's group though, that would be awesome. Inevitably, it would be pointed out that I am not an expert on LW culture. If things slow down, and I do not see anyone else doing this, I may go for it anyway.
[-][anonymous]11y110

(:

I can no longer hold my tongue. Your smileys are upside-down, and the tiny moments of empathetic sadness when my eyes haven't sorted out which side of the parens the colon is on are really starting to add up. :)

1Epiphany11y
Rofl. I am not sure if this is supposed to get me to stop, or get me to laugh.
1[anonymous]11y
Even in the same comment, you don't orient your smileys the same way. Just saying...
1Armok_GoB11y
I have like 10 different articles I'd like to submit to this, many of which have been on ice for literally years!
4Epiphany11y
What are your reasons for postponing? More interestingly, what would get you to post them? Would the writer's group as described above do it, or this other suggestion here? Would something else help?
2Armok_GoB11y
Being absolutely, utterly terrible at writing. Being utterly incapable of clear communication. Being a sloppy thinker incapable of formalizing and testing all the awesome theories I come up with. Being rather shy and caring very very much about the opinions of this community, and very insecure in my own abilities, fearing ridicule and downvotes. Other than that I am extremely motivated to share all these insights I think might be extremely valuable to the world, somehow. The suggestion mentioned wouldn't help at all. Really, anything radical enough will look less like fixing something I've written, and more like me explaining the idea and someone else writing an article about it with me pointing out miscommunications.
7John_Maxwell11y
Yep, that's my experience as well. Recently, I decided "screw what LW thinks" and started posting more thoughts of mine, and they're all getting upvoted. My vague intuitions about how many upvotes my posts will get don't seem to correlate very well with how many upvotes they actually get either. This is probably true for other people as well. The only potential problem with this, IMO, is if people think I'm more of an authoritative source than I actually am. I'm just sharing random thoughts I have; I don't do scholarly work like gwern.
1Epiphany11y
It seems to me that there are lots and lots of people who want to write posts but they're concerned about whether those posts will be received well. I've read, also, that more people put "public speaking" as their worst fear than "death" when surveyed. If we made a karma prediction tool, maybe that would help get people posting here. Here's what I'm thinking:

First, we could create a checklist of the traits that we think will get a LessWrong post upvoted. For instance:

  • Is there an obvious main point or constructive goal?
  • Is the main point supported / is there a reasonable plan for the constructive goal? (Or are they otherwise framed in the correct context: "This is hypothetical" or whatever.)
  • What type of support is included (math, citations, graphics, etc.)?
  • Was the topic already covered?
  • Is it a topic of interest to LessWrong?
  • Is it uplifting or unhappy?
  • (Or do a separate survey that asks people's reasons for upvoting / downvoting and populate the checklist with those.)

Then we could post the checklist as a poll in each new post and article for a while. Then we could correlate the karma data with the checklist poll data and test it to see how accurately it predicts a post's karma.

If you had a karma prediction tool, would it help you post more? [pollid:413]
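A minimal sketch of the correlation step being described (the file and column names here are hypothetical, and a plain linear model stands in for whatever analysis would actually be chosen):

    # Sketch: does the checklist predict karma?
    checklist <- read.csv("checklist_poll.csv")   # hypothetical export: karma plus one 0/1 column per item
    fit <- lm(karma ~ has_main_point + is_supported + already_covered + on_topic,
              data = checklist)                    # hypothetical column names
    summary(fit)                                   # which checklist items predict karma, and how strongly
    cor(predict(fit), checklist$karma)             # rough measure of how accurate the predictions are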
0satt11y
Posting that checklist as a poll in each new post would likely end up irritating people. A simpler approach, with the twin advantages of being simpler and being something one can do unilaterally, would be to just count the proportion of recent, non-meetup-related Discussion posts with positive karma. Then you could give potential post authors an encouraging reference class forecast like "85% of non-meetup Discussion posts get positive karma".
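The reference class forecast itself is nearly a one-liner (a sketch, assuming a hypothetical export of recent Discussion posts with karma and is_meetup columns):

    # Sketch of satt's suggestion: what fraction of non-meetup Discussion posts get positive karma?
    posts <- read.csv("discussion_posts.csv")    # hypothetical export
    mean(subset(posts, !is_meetup)$karma > 0)    # e.g. 0.85 -> "85% get positive karma"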
2Epiphany11y
You know what? That is simple and elegant. I like that about it... but in the worst case scenario, that will encourage people to post stuff without thinking about it because they'll make the hasty generalization that "All non-meetup posts have an 85% chance of getting some karma" and even in the best case scenario, a lot of people will probably be thinking something along the lines of "Just because Yvain and Gwern and people who are really good at this get positive karma doesn't mean that I will." Unfortunately, I think it would be ineffective.
2satt11y
Fair points.

It has occurred to me that LessWrong is divided against itself with two conflicting directives:

  1. Spread rationality.
  2. Be a well-kept garden.

Spreading rationality implies helping as many new people as possible develop improved rational thinking abilities but being a well-kept garden specifically demands censorship and/or bans of "fools" and people who are not "fun".

"A house divided against itself cannot stand." (Lincoln)

I think this fundamental conflict must be solved in some way. If not, then the risk is that LessWrong's di... (read more)

4Richard_Kennaway11y
Every school has this problem: how to welcome people who as yet know little and raise them up to the standard we want them to reach, while allowing those already there to develop further. Universities solve this with a caste distinction between the former (students) and the latter (faculty), plus a few bridging roles (grad student, intern, etc.). On a much smaller scale, the taiko group I play with has found the same problem of dividing beginners from the performing team. It doesn't work to have one class that combines introductory practice with performance rehearsal. And there can be social problems of people who simply aren't going to improve getting disgruntled at never being invited to join the performing team. In another comment I suggested that this division already exists: LessWrong and CFAR. So the question is, does LessWrong itself need a further splitting between welcoming beginners and a "serious" inner circle? Who would be the advanced people who would tend the beginners garden? How would membership in the inner circle be decided?
1[anonymous]11y
Missions, perhaps? A few ideas: "We are rationalists, ask us anything" as an occasional post on reddit. Drop links and insightful comments around the internet where interesting people hang out. Effect #1 is to raise the profile of rationality in the internet community in general, so that more people become interested. Effect #2 is that smart people click on our links and come to LW. I myself was linked to LW at first by a random link dropped in r/transhumanism or something. I immediately recognized the awesomeness of LW, and ate the sequences.

On the home front, I think we should go whole hog on being a well kept garden. Here's why:

  1. There's no such thing as a crowd of philosophers. A movement should stay small and high quality as long as possible. The only way to maintain quality is to select for quality.
  2. There are a lot of people out there, such that we could select for any combination of traits we liked and be unlikely to run out of noobs. We will have a much easier time at integration and community maintenance if we focus on only attracting the right folks.

I don't think we have to worry about creating rationalists from normals. There are enough smart proto-rationalists out there just itching to find something like LW, that all we have to do is find them, demonstrate our powers, and point them here. We should focus on collecting rationalists, not creating them. (Is there anyone for whom this wouldn't have worked? Worse, is there any major subset of good possible LWers that this turns off?) As for integrating new people, I think the right people will find a way and it's ok if everyone else gets turned off. This might be pure wishful thinking. What are other people's thoughts on this?

Overall, have the low level missionary work happen out there where it belongs. Not in these hallowed halls. As for what to do with these hallowed halls, here are my recommendations:

  1. Elect or otherwise create an Official Community Organizer whose job it is to integ

Without some measure of who the respondents are, this survey can't mean much. If the recent arrivals vote en masse that there is no problem, the poll will suggest there isn't any, even though Eternal September is the very mechanism that causes the poll outcome! For the same reason that sufficiently large immigration becomes politically impossible to reverse, Eternal September cannot be combated democratically.

To get a more accurate response, we'd have to restrict it to people who had more than 100 karma 12 months ago or something.

-2Epiphany11y
Well that's not what's happened. Most of the votes are in, and the majority has voted that they're very or somewhat concerned. Any other concerns?
0Larks11y
It might still be more of a problem than the poll suggests. Maybe all the old-timers voted very concerned, and are being diluted by the newcomers. (To clarify, I appreciate that you've done this. I just think it's important to bear in mind that things are probably even worse than they look.)
0Epiphany11y
Do you mean because of normalcy bias / optimism bias? I am concerned about that, too. But in reality, I don't think there's an accurate way to measure the endless September threat. I doubt anyone has done the sort of research that would produce reliable indicators (like following numerous forums, watching for certain signs, determining which traits do have predictive power, testing ideas, etc.).

My POV is basically that if a group becomes popular, it will eventually trend toward the mainstream far enough for me personally to be unhappy about it. (I have always been very different. Were I a mainstream person, I'd probably be cheering for endless September, and were I less different, I would be less concerned about it, because my threshold for how much trending I would deem a problem would be higher. So it does seem relevant to acknowledge that my perspective on what would constitute ES is relative to me, as I am easy to alienate and so have a low tolerance for inundation.)

If you are hoping to make me 'very' concerned, you're preaching to the converted, though perhaps you were more interested in convincing LW.

So, what's needed is a division into an introductory place for anyone to join and learn, and a "graduate-level" place for people with serious ability and commitment to making stuff happen. The latter wouldn't be a public forum, in fact it wouldn't even be a forum at all, even if as part of its activities it has one. It would be an organisation founded for the purpose of promulgating rationality and improving that of its members, not merely talking about it. It would present itself to the world as such, with participation by invitation or by application rather than just by signing in on a web site.

In other words, LessWrong and CFAR.

We might want to consider methods of raising standards for community members via barriers to entry employed elsewhere (either for posting, getting at some or all of the content, or even hearing about the site's existence):

  • An application process for entry (Workplaces (e.g. Valve), MUD sites)
  • Regulating influx using a member cap (Torrent sites, betas of web products)
  • An activity standard - You have to be at least this active to maintain membership (Torrent sites, task groups in organizations sometimes)
  • A membership fee - Maybe in conjunction with an activity stan
... (read more)
8Eugine_Nier11y
On the other had, lurking for a while before posting is very much what we want new users to do.

This post reminds me of Eliezer's own complaints against Objectivism; that Ayn Rand's ingroup became increasingly selective as time went on, developing a self-reinforcing fundamentalism.

As I wrote in one of my blogs a while back, discussing another community that rejects newcomers:

"This is a part of every community. A community which cannot or will not do this is crippled and doomed, which is to say, it -is- their jobs to [teach new members their mores]. This is part of humanity; we keep dying and getting replaced, and training our replacements is a... (read more)

9Nick_Tarleton11y
An elite intellectual community can^H^H^H has to mostly reject newcomers, but those it does accept it has to invest in very effectively (while avoiding the Objectivist failure mode). I think part of the problem is that LW has elements of both a ground for elite intellectual discussion and a ground for a movement, and these goals seem hard or impossible to serve with the same forum. I agree that laziness and expecting people to "just know" is also part of the problem. Upvoted for the quote.
1katydee11y
I'm not entirely sure that expecting people to "just know" is a huge problem here, as on the Internet appropriate behavior can be inferred relatively easily by reading past posts and comments-- hence the common instruction to "lurk more." One could construe this as a filter, but if so, who is it excluding? People with low situational awareness?
-2MugaSofer11y
The obvious analogy of childhood here would be lurking, which does not require any special effort to "teach them that, whether they're twelve, twenty two, or eighty two".
[-][anonymous]11y20

I really don't see why Epiphany is so obsessed with IQ. Based on anecdotal evidence, there is not much of a correlation between IQ and intellect beyond the first two standard deviations above the mean anyway. I have come across more than a handful of people who don't excel in traditional IQ tests, but who are nevertheless very capable of presenting coherent, well-argued insights. Does it matter to me that their IQ is 132 instead of 139? No. Who cares about the average IQ among members of the LW community as long as we continue demonstrating the ability to ... (read more)

0Epiphany11y
Another possibility here is that your perceptions of intelligence levels are really off. This isn't too unlikely as I see it: I've heard reports that people with super high IQs have trouble making distinctions between normal and bright, or even between moderately gifted and mentally challenged. I frequently observe that the gifted people I've met experience their own intelligence level as normal, and accidentally mistake normal people for stupid ones, or mistakenly interpret malice when only ignorance is present (because they're assuming the other person is as smart as they are and would therefore never make such an ignorant mistake). If the intelligence difference you experience every day is 70 points wide, your perceptions are probably more geared to find some way to make sense of conflicting information, not geared to be sensitive to ten point differences.

As a person who has spent a lot of time learning about intelligence differences, I'd say it's fairly hard to perceive intelligence differences smaller than 15 points anyway. The 30 point differences are fairly easy to spot. A large part of this may be because of the wide gaps in abilities that gifted people tend to have between their different areas of intelligence. So, you've got to figure that IQ 130 might be an average of four abilities that are quite different from each other, and so the person's abilities will likely overlap with some of the abilities of a person with IQ 120 or IQ 140. However, a person with an IQ of 160 will most likely have their abilities spread out across a higher range of ability levels, so they're more likely to seem to have completely different abilities from people who have IQs around 130.

The reason why a few points of difference is important in this context is because the loss appears to be continuing. If we lose a few points each year, then over time, LessWrong would trend toward the mean and the culture here may die as a result.
1A1987dM11y
http://xkcd.com/605/ http://xkcd.com/1007/ (SCNR.)
-2Epiphany11y
Ok, FYI, if you see the words "appears to be" and "if" in my sentences, it means I am acknowledging the ambiguity. If you do not want to annoy me, please wait until I'm using words like "definitely" and "when" or direct your "could not resist" comments at someone else. If you want to discuss how we may determine the probability of a consistent and continuing downward trend, that would be constructive and I'd be very interested. Please do not waste my time by pointing out the obvious.
0A1987dM11y
(First of all, as I might have already mentioned, I don't think that the average of (IQ - 132) over all readers is a terribly interesting metric; the total number of active contributors with IQ above 132 or something like that might be better.) I'd guess that the decline in average IQ is mostly due to lower-IQ people arriving rather than to higher-IQ people leaving (EDIT: applying the intraocular trauma test to this graph appears to confirm that), and the population growth appears to have tapered off (there were fewer respondents in the 2012 survey than in the 2011 one, even though the 2011 one was open for longer). I'd guess the average IQ of readers is decreasing with time as a reversed logistic function, but we'd have to fit a four-parameter curve to three data points to test that.
0Epiphany11y
Actually, a similar concern was brought up in response to my IQ Accuracy comment, and Vaniver discovered that the average IQs of the active members and lurkers were almost exactly the same:

We could separate the lurkers from the active members and do the analysis again, but I'm not sure it would be worth the effort as it looks to me like active members and lurkers are giving similar answers. If you'd like to do that, I'd certainly be interested in any surprises you uncover, but I don't expect it to be worthwhile enough to do it myself.

The sample set for the highest IQ groups is, of course, rather small, but what's been happening with the highest IQ groups is not encouraging. The specific graph in question (although I very much doubt that Gwern would intend to make that graph misleading in any way) is just not designed to clearly illustrate that particular aspect of the results visually. Here are a few things you wouldn't guess without looking at the numbers: Exceptionally gifted people used to be 18% of the IQ respondents. Now they are 6%. The total number of highly and exceptionally gifted respondents decreased in 2012, while normal and moderately gifted respondents increased. I did some analysis here.
1A1987dM11y
I'm under the impression that a substantial part of Hanson's Homo hypocritus observations falls prey to this failure mode.
2Epiphany11y
Is there a name for this failure mode? For clarity: The one where people use themselves as a map of other people and are frequently incorrect. That would be good to have.
5Vladimir_Nesov11y
http://wiki.lesswrong.com/wiki/Typical_mind_fallacy
0[anonymous]11y
Sorry about my tardiness when responding to comments. I don't visit LessWrong very often. Maybe in future I should refrain from posting comments unless I am sure that I have the time and diligence to participate satisfactorily in any discussion that my comments might generate, since I wouldn't want to come across as rude.

After reading and thinking a bit about this comment, I think you might be right, especially regarding the point that gifted people might often misread others in this way. I am rather bad at reading other people. I am not diagnosed with any degree of autism, but I am rather socially stunted nevertheless. As I mentioned in an earlier comment, I can be socially inept. This self-assessment was the conclusion of many instances where I was informed that I had grossly misunderstood certain social situations or inadvertently committed some kind of faux pas. It is also generally difficult for me to gauge whether specific comments of mine might be construed as passive-aggressive/condescending. When you asked if my intention was to insult you, my response was "No, but I am sorry that you feel that way". In the past, when I did not know any better, I would have said, "No, and don't be so sensitive." As you can imagine, that response usually escalated things instead of calming people down. It is a long and ongoing learning process for me to understand how to react appropriately in social contexts in order to avoid hurt feelings.

In short, it seems like I commit the mind projection fallacy a lot when interacting with other people: If I wouldn't feel offended by certain ways of phrasing things, I assume that other people wouldn't either. If I wouldn't make such an ignorant mistake, I assume that other people wouldn't either. When you put it like this, I can understand your concern.
0Epiphany11y
Try reading this response to Slade's suicidal post and you will begin to understand why giftedness is relevant, in a general sense. Gifted people, especially highly gifted people, are very different from most. If you haven't seen that for yourself, then perhaps:

A. You haven't met someone with an IQ like 160 or 180. Those people tend to be very, very different, so maybe you are only comparing people with much smaller IQ differences with each other.

B. The people you've met with super high IQs behave in a way that blends in when they're with you and minimize social contact so that you don't notice the differences. The ones that I know tend to do that. They don't just barge into a room and solve unsolvable science problems for all to see. They tend to be quiet, or away hiding in their caves.

C. You never asked the IQs of the smartest people you know and therefore haven't seen the difference.

D. You feel strongly that we should express egalitarianism by treating everyone as if they are all intellectually exactly the same. There's a movement of people who want to believe everyone is gifted, that giftedness does not exist, that it goes away, or that gifted people have some horrible flaw that "balances" them out, that they should be stifled in schooling environments in order to destroy their giftedness so that they're intellectually equal to everybody else, and all kinds of other things. Many people hate inequality and cannot deal with the scientifically proven fact that intellectual inequalities do exist. Wanting to solve inequalities is great, but it's important that we don't deny that intellectual inequalities exist, and it's absolutely, undeniably wrong to stifle a person, especially a child, in the name of "equality".

I care a lot about this cause. I hope you read this PDF by developmental psychologist Linda Silverman (I want everyone to read it): Myths about the Gifted

One in six gifted people has a learning disorder. About one in three are creative. Some of th
5[anonymous]11y
To the extent that IQ tests are reliable, my IQ is actually measured to be 170 (no re-takes or prior training; assessed by a psychometrician). (Just supplying information here; please don't construe this as an act of defensiveness or showing off, because that is not my intention.) I was also not only comparing people with smaller IQ differences -- I have encountered people with 10+ points of IQ difference and yet who are not significantly different in terms of their abilities to contribute meaningfully to dialogues. But, of course, my sample size is not huge. No, but I am sorry that you feel that way. I can be socially inept.
0Epiphany11y
Well that was unexpected. I'm open-minded enough to consider that this is possibly the case. FYI: Claims like this are likely to trigger a fit of "overconfident pessimism" (referring to Luke's article) in some of the members. IQ appears to be a consistent pessimism trigger.

Admitting that is big of you. Thanks for that. My subjective faith in humanity indicator has been incremented a tick in the upward direction. I see you're new, so I'll inform you: There are a lot of people like us here, meaning, people who know better than to game an IQ test and then delude themselves with the "results". I won't say there are no status games, but you will find a lot of people who frown on them as much as you appear to in your last comment. I don't even believe in status.

It's really hard to leave the outside world outside. I keep perceiving irrational B.S. everywhere, even though I've been participating here since August. I'm not going to say that there's no irrational B.S. here, or that I haven't adjusted at all, but my perceptions still haven't entirely adjusted. It appears that you may have a similar issue of perceiving B.S. in comments where no such B.S. exists. It's best to be aware of such a tendency if you have it, as this kind of response is, for obvious reasons, kind of alienating to others. Not blaming you for it (I have the same problem). Just trying to help.

Now that we've established that there was a misunderstanding here, would you like to start over by choosing and clarifying a point you want to make, or telling me that you've reinterpreted things? That would tie up this loose end of a conversation. Out of curiosity, do you feel significantly different from those in the IQ 130 range?
1Vladimir_Nesov11y
This sounds like identity-driven reasoning. (Antipattern: "Do I accept the claim X? I'm open-minded. Open-minded people would accept X. Therefore I accept X.") The conclusions you draw about something should be given by your understanding of that thing, not by your identity.
3A1987dM11y
Isn't creativity a continuum? Such a sentence sounds as weird as “about one in three is tall” to me.
-3Epiphany11y
You have written me several comments today. One that was fairly constructive, one that was admittedly a "sorry could not resist" and now this. This comment makes me feel nit-picked at.
2A1987dM11y
I started implementing this policy, and while I'm there I sometimes also glance at aunts/cousins of the comment I'm considering replying to.

How many believe that the current culture of LW should deviate exactly as much or more than it currently does from the culture of people who are likely to join (and therefore influence)?

I think that LW would be better with more good content. (Shocking!) I like plans that improve the amount of visible good content on LW, and am generally ambivalent towards plans that don't have that as an explicit goal.

My preferred explanation for why LW is less fun than it was before: Curiosity seeks to annihilate itself, and the low-hanging fruits have been picked. At one point, I was curious about what diet I should follow; I discovered intermittent fasting, tried it out, and it worked well for me. I am now far less curious about what diet I should foll... (read more)

3[anonymous]11y
Good points, but a bucket of picked fruit does not make a pie. We've generated a lot of really valuable insight on this site, but right now it has no structure to it. Maybe it's time to move from an article-writing phase to a knowledge-organization phase.
2Epiphany11y
I had been thinking that, too. Some people had mentioned argument mapping software, however I have heard some really harsh criticisms of those. Not sure if that's the right way. Maybe a karma-infused wiki (alluding to Luke's recent post).
2[anonymous]11y
A textbook style overview/survey of LW rationality would be pretty awesome.

The bad news is that LessWrong's IQ average has decreased on each survey. It can be argued that it's not decreasing by a lot or we don't have enough data, but if the data is good, LessWrong has lost 52% of it's giftedness since March of 2009.

What? The inflated self-estimates have dramatically declined towards more likely numbers. Shouldn't we be celebrating a decrease in bias?

Edit: My analysis of the public survey data; in particular, the number of responders is a huge part of the estimate. If you assume every non-responder has, on average, an IQ of 1... (read more)

2Epiphany11y
This might be virtuous doubt. Have you considered the opposite? See Also: Luke's article on overconfident pessimism. My IQ related links in the OP.
2Vaniver11y
Yes, and I'm familiar with your IQ-related links in the OP*. But what's the opposite here? Let me make sure my position is clear: I agree that the people who post on LW are noticeably cleverer than the people that post elsewhere on the internet. The narrow claim that I'm making is that the average self-reported IQ is almost definitely an overestimate of the real average IQ of people who post on LW, and a large change towards the likely true value in an unreliable number should not be cause for alarm. The primary three pieces of evidence I submit are:

  1. On this survey, around a third of people self-reported their IQ, and it's reasonable to expect that there is a systematic bias, such that people with higher perceived IQs are more likely to share them. I haven't checked how many people self-reported on previous surveys, but it's probably similarly low.

  2. When you use modern conversion numbers for average SAT scores, you get a reasonable 97th percentile for the average LWer. Yvain's estimate used a conversion chart from two decades ago; in case you aren't familiar with the history of psychometric testing, that's when the SAT had its right tail chopped off to make the racial gap in scores less obvious.

  3. The correlation between the Raven's test and the self-reported IQ scores is dismal, especially the negative correlation for people without positive LW karma. The Raven's test is not designed to differentiate well between people who are more than 99th percentile (IQ 135), but the mean score of 127 (for users with positive karma) was 96th percentile, so I don't think that's as serious a concern.

* I rechecked the comment you linked to in the OP, and I think it was expanded since I read it first. I agree that more than half of people provided at least one IQ estimate, but I think that they should not be weighted uniformly; for example, using the self-reported IQ to validate the self-reported IQ seems like a bad idea! It might be interesting to see how SAT scores
2Epiphany11y
What IQ would you correlate to the SAT numbers, considering? As for the Raven's numbers, I am not sure where you're getting them from. I don't see a column when searching for "raven" in the 2012 spreadsheet, nor do I see "raven" on the survey result threads.
2Vaniver11y
SAT scores, combined with the year the test was taken in, give you a percentile measure of that person (compared to test-takers, which is different from the general population, but in a fairly predictable way), which you can then turn into an IQ-equivalent. I say equivalent because there are a number of issues.

First, intelligence testing has a perennial problem that absolute intelligence and relative intelligence are different things. Someone who is 95th percentile compared to high school students in 1962 is not the same as someone who is 95th percentile compared to high school students in 2012. It might also be more meaningful to say something like "the median LWer can store 10 numbers in working memory, compared to the general population's median of 7" instead of "the median LWer has a working memory that's 95th percentile." (I also haven't looked up recently how g-loaded the SAT is, and that could vary significantly over time.)

Second, one of the main benefits may not be that the median LWer is able to get into MENSA, but that the smartest LWers are cleverer than most people have had the chance to meet during their lives. This is something that IQ tests are not very good at measuring, especially if you try to maintain the normal distribution. Reliably telling the difference between someone who is 1 out of 1,000 (146) and someone who is 1 out of 10,000 (155) is too difficult for most current tests; how many people would you have to base your test off of to reliably tell that someone is one out of a million (171) from their raw score?

iqtest.dk is based on Raven's Progressive Matrices; the corresponding column, CV in the public .xls, is called IQTest. I referred to the scores that way because Raven's measures a particular variety of intelligence. It's seen widespread adoption because it's highly g-loaded and culture fair, but a score on Raven's is subtly different from a total score on WAIS, for example.
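The basic percentile-to-IQ-equivalent step is simple if one is willing to pretend test-takers match the general population (a rough sketch only; a real conversion would first adjust for who takes the SAT in a given year, as described above):

    # Sketch: percentile (as a fraction) -> IQ-equivalent, assuming IQ ~ Normal(100, 15)
    percentile_to_iq <- function(p) qnorm(p, mean = 100, sd = 15)
    percentile_to_iq(0.97)   # ~128, roughly the "97th percentile" figure mentioned above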
0Kindly11y
It's the "IQTest" column, corresponding to scores from iqtest.dk.

I'm curious, how do you propose spreading ideas or raising the sanity waterline without bringing in new people?

If you want to spread ideas, don't bring outsiders in, send missionaries out.

3[anonymous]11y
Missionaries will nearly inevitably mention LessWrong, which will still attract people to the site (even if they don't stay around for long.)
0Larks11y
People can have read-only access.
3Eugine_Nier11y
On the other hand if the new people dilute or overwhelm LW culture and lower the sanity waterline on LW, it won't be able to raise the sanity waterline in the rest of the world. It's a balancing act.
1Epiphany11y
Firstly, it is not my view that we should not bring in new people. My view is that if we bring in too many new people at once, it will be intolerable for the old users and they will leave. That won't raise the sanity waterline as effectively as growing the site at a stable pace. Secondly, the poll has an option "Send beginners to the Center for Applied Rationality" (spelled "Modern" not "Applied" in the poll because I was unaware that CFMR changed its name to CFAR).

It's almost like as we grow in number we regress towards the mean. Shocking.

6jmmcd11y
If trending towards the mean wasn't explicitly mentioned in the poll this would be a useful contribution. As it stands, you should pay a lol toll.
0RobertLumley11y
My point is that LW was founded largely as a way of drumming up support for SIAI. The fact that they want to and are making efforts to grow LW in number should make it utterly unsurprising that we are regressing to the mean.
-2Epiphany11y
Then what do you make of this: http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/

Yes, when Eliezer made this place, he was frustrated that people didn't seem, to him, to understand the deal with AI / existential risk, but he also really cares about this place as a garden of its own, and I don't think he wants a large quantity of users as much as he wants quality thinking going on. Nor does he want "support for SIAI" - he does not want undiscriminating skeptics or followers like Ayn Rand had, who did what she did without thinking for themselves about it. Considering those two constraints, I figure LW is not largely about getting support for SIAI, but largely about raising the sanity waterline. He believes that if the sanity waterline rises, people will be able to see the value of his ideas. He acknowledges, also, that you can't capture all the value you create, and I believe he said he expected most of the benefits from this site to be applied elsewhere, not necessarily benefiting SIAI.
4RobertLumley11y
Official publications from the SI have said that LW was specifically about community building. This is very well established.
1Epiphany11y
Not sure why you're equating "community building" with "supporting SIAI". I'm sure they would not have started LW if they didn't think it would help with SIAI goals (they're probably too busy / too focused for that) but to me "I would not have started this without it needing to serve x purpose" does not mean "this is going to serve several purposes" has no value. It may be that they would not have started it without it serving y and z purposes as well (where y and z may be raising the sanity waterline, encouraging effective altruism and things like that.)
0[anonymous]11y
[-][anonymous]11y10

Option 1: Close the borders. It's unfortunate that the best sort might be kept out, while it's guaranteed the rest will be kept out. The best can found / join other sites, and LW can establish immigration policies after a while.

Option 2. Builds. Freeze LW at a stage of development, then have a new build later. Call this one LW 2012, and nobody can join for six months, and we're intent on topics X Y and Z. Then for build 2013 there are some vacancies (based on karma?) for a period of time, and we're intent on topics X Q and R.

Option 3: Expiration date. No... (read more)

In my experience, which admittedly comes from sites quite different from LW, an Internet project running on volunteer contributions that's decided to keep new members from productive roles has a useful lifetime of no more than one to two years. That's about how long it takes for everybody to voice their pet issues, settle their feuds, and move on with their lives; people may linger for years afterwards, but at that point the vital phase is over. This can be stretched somewhat if there's deep factional divisions within the founding population -- spite is a pretty good motivator -- but only at the cost of making the user experience a lot more political.

It's also worth bearing in mind that demand for membership in a forum like this one is continuous and fairly short-term. Accounts offer few easily quantifiable benefits to begin with, very few if the site's readable to non-members, so we can't rely on lasting ambitions to participate; people sign up because they view this forum as an attractive place to contribute, but the Internet offers no shortage of equivalent niches.

2NancyLebovitz11y
Added for completeness (I'm not sure immigration restrictions are a good idea): Have an invitation system.
2dbaupp11y
This isn't so ridiculous in short bursts. I know that Hacker News disables registration if/when they get large media attention, to avoid a swathe of new only-mildly-interested users. A similar thing could happen here. (It might be enough to have an admin switch that just puts a display: none into the CSS for the "register" button; trivial inconveniences and all.)
2DanArmak11y
Since this started out with people complaining about others already here, we might be called upon to create an immigration police. You sir, show me your +500 karma badge!

It has occurred to me to wonder whether the poll might be biased. I wanted to add a summary of things that protect LessWrong against endless September when I wrote this post. However, I couldn't think of even one. I figured my thread to debate whether we should have better protection would have turned up any compelling reasons to think LessWrong is protected but it didn't.

I became curious about this just now wondering whether there really isn't a single reason to think that LessWrong is protected, and I re-read all of the comments (though not the replie... (read more)

I think we just need people to downvote more. Perhaps we could insist that you downvote one thing for every three things that you upvote?

9Decius11y
Weight each member's upvotes in a manner determined by the proportion of their votes in each direction and their total karma.
2Luke_A_Somers11y
This takes output as input. Would you go with a self-consistent result, make it time-inconsistent, or cut it off at one tier? A better solution, I think, would be to weight the karma change by the 'information' that this new vote provided if the order of votes was irrelevant - i.e. multiply it by min(1, -log2(P)), with P being the fraction of the voter's votes that are of that vote type. So if Bob likes Alice's post when Bob likes everyone's posts, Alice doesn't get much from it. If Bob likes Alice's post when Bob likes half or fewer of all posts he votes on, Alice gets 1 full karma from it.
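A tiny sketch of that weighting rule (a hypothetical helper, not anything LW's karma system actually implements):

    # Sketch: weight of one vote under the proposed rule, min(1, -log2(P)),
    # where P is the fraction of the voter's votes that go in this direction.
    vote_weight <- function(n_same_direction, n_total_votes) {
      p <- n_same_direction / n_total_votes
      min(1, -log2(p))
    }
    vote_weight(100, 100)   # Bob upvotes everything he votes on: weight 0
    vote_weight(50, 100)    # Bob upvotes half the time: weight 1
    vote_weight(10, 100)    # Bob rarely upvotes: still capped at 1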
0Decius11y
Time-inconsistent. Nothing you do after you upvote should change the results. This risks having karma determined almost exclusively by how many posts are upvoted by the top elite. Perhaps a better solution could be found if we could establish all of the goals of the karma system. Why do we track users' total karma?
  • Eliezer documented the arrival of poseurs (people who superficially copycat cultural behaviors - they are reported to over-run subcultures) which he termed "Undiscriminating Skeptics".

If I understand Eliezer right, when he says "Undiscriminating Skeptics" he means the people who always favor academic status quo ideas and thus reject ideas like cryonics and the many-worlds hypothesis.

When I read LessWrong, I seldom see comments that criticize others for being undiscriminating skeptics. If LessWrong participants want to keep out undiscriminating skeptics, then they should speak up against the practice a lot more than they currently do.


Endless September Poll:

I condensed the feedback I got in the last few threads into a summary of pros and cons of each solution idea, if you would like something for reference.


How concerned should we be about LessWrong's culture being impacted by:

...overwhelming user influx? [pollid:366]

...trending toward the mean? [pollid:367]

...some other cause? [pollid:368]

(Please explain the other causes in the comments.)


Which is the best solution for:

...overwhelming user influx?

(Assuming user is of right type/attitude, too many users for acculturation capacit... (read more)

You're focusing on negative reinforcement for bad comments. What we need is positive reinforcement for good comments. Because there are so many ways for a comment to be bad, discouraging any given type of bad comment will do effectively nothing to encourage good comments.

"Don't write bad posts/comments" is not what we want. "Write good posts/comments" is what we want, and confusing the two means nothing will get done.

6Viliam_Bur11y
We need to discourage comments that are not-good. Not just plainly bad. Only... not adding value, but still taking time to read. The time lost per one comment is trivial, but the time lost by reading a thousand comments isn't. How long does it take LW to produce a thousand comments? A few days at most. This article alone has about 100 comments. Did you get 100 insights from reading them?
0Epiphany11y
That's a good observation but for the record, the solution ideas were created by the group, not just me. If you want to see more positive reinforcement suggestions being considered, why not share a few of yours?

Why do the first three questions have four variations on the theme of "new users are likely to erode the culture" and nothing intermediate between that and "there is definitely no problem at all"?

Why ask for the "best solution" rather than asking "which of these do you think are good ideas"?

8FiftyTwo11y
Also, why is there no option for "new users are a good thing?" Maybe a diversity of viewpoints might be a good thing? How can you raise the sanity waterline by only talking to yourself?
7Epiphany11y
The question is asking you: "Assuming user is of right type/attitude, too many users for acculturation capacity." Imagine this: There are currently 13,000 LessWrong users (well, more, since that figure was from a few months ago and there's been a Summit since then) and about 1,000 are active. Imagine LessWrong gets Slashdotted - some big publication does an article on us, and instead of portraying LessWrong as "Cold and Calculating", or something similar to Wired's wording describing the futurology Reddit where SingInst had posted about AI ("A sub-reddit dedicated to preventing Skynet"), they actually say something good like "LessWrong solves X Problem". Not infeasible, since some of us do a lot of research and test our ideas.

Say so many new users join in the space of a month that there are now twice as many new active users as older active users. This means 2/3 of LessWrong is clueless, posting annoying threads, and acting like newbies. Suddenly, it's not possible to have intelligent conversation about the topics you enjoy on LessWrong anymore without two people throwing strawman arguments at you and a third saying things that show obvious ignorance of the subject. You're getting downvoted for saying things that make sense, because new users don't get it, and the old users can't compensate for that with upvotes because there aren't enough of them.

THAT is the type of scenario the question is asking about. I worded it as "too many new users for acculturation capacity" because I don't think new users are a bad thing. What I think is bad is when there are an overwhelming number of them such that the old users become alienated or find it impossible to have normal discussions on the forum. Please do not confuse "too many new users for acculturation capacity" with "new users are a bad thing".
1Epiphany11y
Why do you not see the "eroded the culture" options as intermediate options? The way I see it, there are three sections of answers that suggest a different level of concern:

  1. There's a problem.
  2. There's some cultural erosion but it's not a problem. (Otherwise you'd pick #1.)
  3. There's not a problem.

What intermediate options would you suggest?

A. Because the poll code does not make check boxes where you select more than one. It makes radio buttons where you can select only one.

B. I don't have infinite time to code every single idea. If more solutions are needed, we can do another vote and add the best one from that (assuming I have time). One thing at a time.
0Nornagest11y
The option I wanted to see but didn't was something along the lines of "somewhat, but not because of cultural erosion".
0Epiphany11y
Well, I did not imagine all the possibilities for what concerns you guys would have in order to choose verbiage vague enough that those options would work as perfect catch-alls, but I did ask for "other causes" in the comments, and I'm interested to see the concerns that people are adding, like "EY stopped posting" and "We don't have enough good posters", which aren't about cultural erosion but about a lapse in the stream of good content. If you have concerns about the future of LessWrong not addressed so far in this discussion, please feel free to add them to the comments, however unrelated they are to the words used in my poll.
0gjm11y
I have no particular opinion on what exactly should be in the poll (and it's probably too late now to change it without making the results less meaningful than they'd be without the change). But the sort of thing that's conspicuously missing might be expressed thus: "It's possible that a huge influx of new users might make things worse in these ways, or that it's already doing so, and I'm certainly not prepared to state flatly that neither is the case, but I also don't see any grounds for calling it likely or for getting very worried about it at this point." The poll doesn't have any answers that fit into your category 2. There's "very concerned" and "somewhat concerned", both of which I'd put into category 1, and then there's "not at all".

Check boxes: Oh, OK. I'd thought there was a workaround by making a series of single-option multiple-choice polls, but it turns out that when you try to do that you get told "Polls must have at least two choices". If anyone with the power to change the code is reading this, I'd like to suggest that removing this check would both simplify the code and make the system more useful. An obvious alternative would be to add checkbox polls, but that seems like it would be more work.

[EDITED to add: Epiphany, I see you got downvoted. For the avoidance of doubt, it wasn't by me.]

[EDITED again to add: I see I got downvoted too. I'd be grateful if someone who thinks this comment is unhelpful could explain why; even after rereading it, it still looks OK to me.]
0Epiphany11y
Yes. I asked because my mind drew a blank on intermediate options between some problem and none. I interpreted some problem as being intermediate between problem and no problem. Ok, so your suggested option would be (to make sure I understand) something like "I'm not convinced either way that there's a problem or that there's no problem". Maybe what you wanted was more of a "What probability of a problem is there?" not "Is there a problem or not, is it severe or mild?" Don't know how I would have combined probability, severity and urgency into the same question, but that would have been cool. I considered that (before knowing about the two options requirement) but (in addition to the other two concerns) that would make the poll really long and full of repetition, and I was trying to be as concise as possible because my instinct is to be verbose but I realize I'm doing a meta thread and that's not really appreciated on meta threads. Oh, thank you. (:
0Nornagest11y
It sounds like you could still work around it by making several yes/no agreement polls, although this would be clunky enough that I'd only recommend it for small question sets.
6Alicorn11y
It's the Center for Applied Rationality, not Modern Rationality.
-4Epiphany11y
No, actually, there is a "Center for Modern Rationality" which Eliezer started this year: http://lesswrong.com/lw/bpi/center_for_modern_rationality_currently_hiring/ Here is where they selected the name: http://lesswrong.com/lw/9lx/help_name_suggestions_needed_for_rationalityinst/5wb8 The reason I selected it for the poll is that they are talking about creating online training materials. It would be more effective to send someone from a website to something online than to send them from a website to somewhere IRL, as only half of us are in the same country.

No. You're wrong. They changed it, which you would know if you clicked my link.

4Luke_A_Somers11y
I don't see how clicking the link you posted would have actually demonstrated her wrong.
5Alicorn11y
Just as it didn't occur to her that the organization could have changed its name, it didn't occur to me that she could seriously think there were two of them.
-1Epiphany11y
We have both acknowledged our oversights now. Thank you.
0Epiphany11y
I thought there were two centers for rationality, one being the "Center for Modern Rationality" and the other being the "Center for Applied Rationality". Adding a link to one of them didn't rule out the possibility of there being a second one.

So, you assigned a higher probability to there being two organizations from the same people on the same subject at around the same time with extremely similar names and my correction being mistaken in spite of my immersion in the community in real life... than to you having out-of-date information about the organization's name?

-6Epiphany11y
1Armok_GoB11y
Proposed solution: add lots of subdivisions with different requirements.
1Epiphany11y
I had a couple of ideas like this myself and I chose to cull them before doing this poll, for these reasons:

The problem with splitting the discussions is that we'd end up with people having the same discussions in multiple different places. The different posts would not have all the information, so you'd have to read several times as much if you wanted to get it all. That would reduce the efficiency of LessWrong discussions to a point where most would probably find it maddening and unacceptable.

We could demand that users stick to a limited number of subjects within their subdivision, but then discussion would be so limited that the user experience would not resemble participation in a subculture. Or, more likely, it just wouldn't be enforced thoroughly enough to stop people from talking about what they want, and the dreaded plethora of duplicated discussions would still result.

The best alternative to this, as far as I'm aware, is to send the users who are disruptively bad at rational thinking to CFAR training.
1wedrifid11y
That seems like an inefficient use of CFAR training (and so an inefficient use of whatever resources would have to be used to pay CFAR for such training). I'd prefer to just cull those disruptively bad at rational thinking entirely. Some people just cannot be saved (in a way that gives an acceptable cost/benefit ratio). I'd prefer to save whatever attention or resources I was willing to allocate to people-improvement for those that already show clear signs of having thinking potential.
4Armok_GoB11y
I am among those absolutely hardest to save, having an actual mental illness. Yet this place is the only thing saving me from utter oblivion and madness. Here is where I have met my only real friends ever. Here is the only thing that gives me any sense of meaning, reason to survive, or glimmer of hope. I care fanatically about it. If many of the rules that have been proposed, or for that matter even the amount of degradation that has ALREADY occurred, had been the case a few years ago, I wouldn't exist: this body would either be rotting in the ground or literally occupied by an inhuman monster bent on the destruction of all living things.
8Epiphany11y
I'm fascinated. (I'm a psychology enthusiast who refuses to get a psychology degree because I find many of the flaws in the psychology industry unacceptable.) I am very interested in knowing how LessWrong has been saving you from utter oblivion and madness. Would you mind explaining it? Would it be alright with you if I ask which mental illness? Would you please also describe the degradation that has occurred at LW?
1Armok_GoB11y
I'd rather not talk about it in detail, but it boils down to LW promoting sanity in general and connecting smart people. That extra sanity can be used to cancel out insanity, not just to create super-sanes. Degradation: lowered frequency of insightful and useful content, increased frequency of low-quality content.
0Epiphany11y
I have to admit I am not sure whether to be more persuaded by you or by Armok. I suppose it would come down to a cost/benefit calculation that takes into account the amount of destruction prevented by saving the worst as well as the amount of benefit produced by the best. Brilliant people can have quite an impact indeed, but they are rare, and it is easier to destroy than to create, so it is not readily apparent to me which group it would be more beneficial to focus on, or, if both, in what proportion.

Practically speaking, though, CFAR has stated that they have plans to make web apps to help with rationality training and training materials for high schoolers. It seems to me that they have an interest in targeting the mainstream, not just the best thinkers. I'm glad that someone is doing this, but I also have to wonder if that will mean more forum referrals to LW from the mainstream...
0Armok_GoB11y
Ctrl+C, Ctrl+V, problem solved.
1Epiphany11y
If you're suggesting that duplicated discussions can be solved with paste, then you are also suggesting that we not make separate areas. Think about it. I suppose you might be suggesting that we copy the OP and not the comments. Often the comments have more content than the OP, and often that content is useful, informative and relevant. So in the comments we'd then have duplicated information that varied between the two OP copies. We could copy the comments over to the other area... but then they're not separate... I'm not seeing how this is a solution. If you have some different clever way to apply Ctrl+C, Ctrl+V, then please let me know.
0Armok_GoB11y
No, because only the top content in each area would be shared to the others.
0Eugine_Nier11y
This creates a trivial inconvenience.
0Armok_GoB11y
So add a "promote" button that basically does the same automatically.
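A rough sketch of what such a button might trigger, assuming hypothetical names (promote, Post, PROMOTION_THRESHOLD) rather than any existing LessWrong feature: posts above a karma threshold get copied into the other subdivisions automatically, so top content is shared without manual copy-and-paste.

```python
# Hypothetical sketch of an automatic "promote" action. The threshold and
# data model are illustrative assumptions, not an actual LW implementation.

from dataclasses import dataclass

PROMOTION_THRESHOLD = 20  # example karma score required for promotion


@dataclass
class Post:
    title: str
    body: str
    score: int
    section: str


def promote(post, all_sections):
    """Cross-post `post` into every other section if it scores high enough."""
    if post.score < PROMOTION_THRESHOLD:
        return []
    return [Post(post.title, post.body, post.score, s)
            for s in all_sections if s != post.section]
```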
1beoShaffer11y
I assign non-negligible probability to some cause that I am not specifically aware of (sorta, but not exactly, an outside context problem) having a negative impact on LW's culture.