Epiphany comments on Poll - Is endless September a threat to LW and what should be done? - Less Wrong Discussion
You're focusing on negative reinforcement for bad comments. What we need is positive reinforcement for good comments. Because there are so many ways for a comment to be bad, discouraging any given type of bad comment will do effectively nothing to encourage good comments.
"Don't write bad posts/comments" is not what we want. "Write good posts/comments" is what we want, and confusing the two means nothing will get done.
We need to discourage comments that are not-good, not just plainly bad: comments that don't add value, but still take time to read.
The time lost per comment is trivial, but the time lost reading a thousand comments isn't. How long does it take LW to produce a thousand comments? A few days at most.
This article alone has about 100 comments. Did you get 100 insights from reading them?
That's a good observation but for the record, the solution ideas were created by the group, not just me.
If you want to see more positive reinforcement suggestions being considered, why not share a few of yours?
Why do the first three questions have four variations on the theme of "new users are likely to erode the culture" and nothing intermediate between that and "there is definitely no problem at all"?
Why ask for the "best solution" rather than asking "which of these do you think are good ideas"?
Also, why is there no option for "new users are a good thing?"
Maybe a diversity of viewpoints might be a good thing? How can you raise the sanity waterline by only talking to yourself?
The question is asking you:
"Assuming user is of right type/attitude, too many users for acculturation capacity."
Imagine this: there are currently 13,000 LessWrong users (well, more, since that figure is from a few months ago and there's been a Summit since then) and about 1,000 are active. Imagine LessWrong gets Slashdotted: some big publication does an article on us, and instead of portraying LessWrong as "Cold and Calculating" or something similar to Wired's description of the futurology subreddit where SingInst had posted about AI ("A sub-reddit dedicated to preventing Skynet"), they actually say something good like "LessWrong solves X Problem". Not infeasible, since some of us do a lot of research and test our ideas.
Say so many new users join in the space of a month and there are now twice as many new active users as older active users.
This means 2/3 of LessWrong is clueless, posting annoying threads, and acting like newbies. Suddenly, it's not possible to have intelligent conversation about the topics you enjoy on LessWrong anymore without two people throwing strawman arguments at you and a third saying things that show obvious ignorance of the subject. You're getting downvoted for saying things that make sense, because new users don't get it, and the old users can't compensate for that with upvotes because there aren't enough of them.
THAT is the type of scenario the question is asking about.
I worded it as "too many new users for acculturation capacity" because I don't think new users are a bad thing. What I think is bad is when there are an overwhelming number of them such that the old users become alienated or find it impossible to have normal discussions on the forum.
Please do not confuse "too many new users for acculturation capacity" with "new users are a bad thing".
Why do you not see the "eroded the culture" options as intermediate options? The way I see it is there are three sections of answers that suggest a different level of concern:
What intermediate options would you suggest?
A. Because the poll code does not make check boxes where you select more than one. It makes radio buttons where you can select only one.
B. I don't have infinite time to code every single idea.
If more solutions are needed, we can do another vote and add the best one from that (assuming I have time). One thing at a time.
The option I wanted to see but didn't was something along the lines of "somewhat, but not because of cultural erosion".
Well, I did not imagine all the possibilities for what concerns you guys would have in order to choose verbiage sufficiently vague that those options would work as perfect catch-alls, but I did ask for "other causes" in the comments, and I'm interested to see the concerns that people are adding, like "EY stopped posting" and "We don't have enough good posters", which aren't about cultural erosion but about a lapse in the stream of good content.
If you have concerns about the future of LessWrong not addressed so far in this discussion, please feel free to add them to the comments, however unrelated they are to the words used in my poll.
I have no particular opinion on what exactly should be in the poll (and it's probably too late now to change it without making the results less meaningful than they'd be without the change). But the sort of thing that's conspicuously missing might be expressed thus: "It's possible that a huge influx of new users might make things worse in these ways, or that it's already doing so, and I'm certainly not prepared to state flatly that neither is the case, but I also don't see any grounds for calling it likely or for getting very worried about it at this point."
The poll doesn't have any answers that fit into your category 2. There's "very concerned" and "somewhat concerned", both of which I'd put into category 1, and then there's "not at all".
Check boxes: Oh, OK. I'd thought there was a workaround by making a series of single-option multiple-choice polls, but it turns out that when you try to do that you get told "Polls must have at least two choices". If anyone with the power to change the code is reading this, I'd like to suggest that removing this check would both simplify the code and make the system more useful. An obvious alternative would be to add checkbox polls, but that seems like it would be more work.
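The "Polls must have at least two choices" check being discussed can be pictured as a one-constant validation. This is a hypothetical sketch, not the actual LessWrong code (which isn't quoted in the thread); the function and constant names are illustrative assumptions. It shows why removing the check, as suggested above, would be a tiny change:

```python
# Hypothetical sketch of the poll-creation validation discussed above.
# MIN_CHOICES and validate_poll are assumed names, not the real codebase.

MIN_CHOICES = 2  # lowering this to 1 would allow the single-option workaround

def validate_poll(choices):
    """Reject polls with fewer than MIN_CHOICES options."""
    if len(choices) < MIN_CHOICES:
        raise ValueError("Polls must have at least %d choices" % MIN_CHOICES)
    return choices
```

Under this sketch, a series of single-option agreement polls would work simply by relaxing the constant, at the cost of whatever the original two-choice rule was meant to prevent.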
[EDITED to add: Epiphany, I see you got downvoted. For the avoidance of doubt, it wasn't by me.]
[EDITED again to add: I see I got downvoted too. I'd be grateful if someone who thinks this comment is unhelpful could explain why; even after rereading it, it still looks OK to me.]
Yes. I asked because my mind drew a blank on intermediate options between some problem and none. I interpreted "some problem" as being intermediate between "problem" and "no problem".
Ok, so your suggested option would be (to make sure I understand) something like "I'm not convinced either way that there's a problem or that there's no problem".
Maybe what you wanted was more of a "What probability of a problem is there?" not "Is there a problem or not, is it severe or mild?"
Don't know how I would have combined probability, severity and urgency into the same question, but that would have been cool.
I considered that (before knowing about the two-options requirement), but (in addition to the other two concerns) that would make the poll really long and full of repetition. I was trying to be as concise as possible: my instinct is to be verbose, but I realize I'm doing a meta thread and that's not really appreciated on meta threads.
Oh, thank you. (:
It sounds like you could still work around it by making several yes/no agreement polls, although this would be clunky enough that I'd only recommend it for small question sets.
It's the Center for Applied Rationality, not Modern Rationality.
No, actually, there is a "Center for Modern Rationality" which Eliezer started this year:
http://lesswrong.com/lw/bpi/center_for_modern_rationality_currently_hiring/
Here is where they selected the name:
http://lesswrong.com/lw/9lx/help_name_suggestions_needed_for_rationalityinst/5wb8
The reason I selected it for the poll is that they are talking about creating online training materials. It's more effective to send someone from a website to something online than to send them somewhere IRL, since only half of us are in the same country.
No. You're wrong. They changed it, which you would know if you clicked my link.
I don't see how clicking the link you posted would have actually shown her to be wrong.
Just as it didn't occur to her that the organization could have changed its name, it didn't occur to me that she could seriously think there were two of them.
We have both acknowledged our oversights now. Thank you.
I thought there were two centers for rationality, one being the "Center for Modern Rationality" and the other being the "Center for Applied Rationality". Adding a link to one of them didn't rule out the possibility of there being a second one.
So, you assigned a higher probability to there being two organizations from the same people on the same subject at around the same time with extremely similar names and my correction being mistaken in spite of my immersion in the community in real life... than to you having out-of-date information about the organization's name?
The possibility that the organization had changed its name did not occur to me. I wish you would have just said "It changed its name."
As for why I did not assume you knew better than me: The fact that the article was right there talking about the "Center for Modern Rationality" contradicted your information.
I have never met an infallible person, so in the event that I have information that contradicts yours, I will probably think that you're wrong.
It's nice when all the possibilities for why my information contradicts others' occur to me, so that I can do something like go search for whether the name of an organization was changed, but that doesn't always happen.
If you knew that it used to be called "Center for Modern Rationality" and changed its name to "Center for Applied Rationality", why did you not say "It changed its name"?
I've noticed a pattern with you: Your responses are often missing some contextual information such that I respond in a way that contradicts you. I think you would find me less frustrating if you provided more context.
I think LessWrong as a whole would find you less frustrating if you assumed most comments from established users on domain-specific concepts or facts were more likely to be correct than your own thoughts and updated accordingly.
Established users can be wrong about many things, including domain-specific concepts or facts.
A more general heuristic that I do endorse, from Cromwell: "I beseech you, in the bowels of Christ, think it possible that you may be mistaken."
Agreed. That's easier. However, sometimes the easier way is not the correct way.
In a world where authoritative "facts" can be wrong more often than they're right, where scientists often take a roughly superstitious approach to science, and where the educational system isn't even optimized for the purpose of educating, what reason do I have to believe that any authority figure, expert, or established user is more likely to be correct?
I wish I could trust others' information. I have wished that my entire life. It is frequently exhausting and damn hard to question this much of what people say. But I want to be correct, not merely pleasant, and that's life.
Eliezer intended for us to question authority. I'd have done it anyway because I started doing that ages ago. But he said in no uncertain terms that this is what he wants:
In Two More Things to Unlearn from School he warns his readers that "It may be dangerous to present people with a giant mass of authoritative knowledge, especially if it is actually true. It may damage their skepticism."
In Cached Thoughts he tells you to question what HE says. "Now that you've read this blog post, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you'll think, "Cached thoughts." My belief is now there in your mind, waiting to complete the pattern. But is it true? Don't let your mind complete the pattern! Think!"
Perhaps there is a way to be more pleasant while still questioning everything. If you can think of something, I will consider it.
I'm not saying that a hypothetical vague "you" shouldn't question things. I'm saying that you specifically, User: Epiphany, seem to not be very well-calibrated in this respect and should update towards questioning things less until you have a better feel for LessWrong discussion norms and epistemic standards.
You might think about the reasons people have for saying the things they say. Why do people make false statements? The most common reasons probably fall under intentional deception ("lying"), indifference toward telling the truth ("bullshitting"), having been deceived by another, motivated cognition, confabulation, or mistake.

As you've noticed, scientists and educators can face situations where complete integrity and honesty come into conflict with their own career objectives, but there's no apparent incentive for anyone to distort the truth about the name of the Center for Applied Rationality. There's also no apparent motivation for Alicorn to bullshit or confabulate; if she isn't quite sure she remembers the name, she doesn't have anything to lose by simply moving on without commenting, nor does she have much to gain by getting away with posting the wrong name. That leaves the possibility that she has the wrong name by an unintended mistake. But different people's chances of making a mistake are not necessarily equal. By being more directly involved with the organization, Alicorn has had many more opportunities to be corrected about the name than you have. That makes it much more likely that you are the one making the mistake, as turned out to be the case.
You could phrase your questions as questions rather than statements. You could also take extra care to confirm your facts before you preface a statement with "no, actually".
This seems like a risky heuristic to apply generally, given the volume of domain-specific contrarianism floating around here. My own version is more along the lines of "trust, but verify".
It's a specific problem Epiphany has that she assumes her own internal monologue of what's true is far more reliable than any evidence or statements to the contrary.
I assign non-negligible probability to some cause that I am not specifically aware of (sorta, but not exactly, an outside context problem) having a negative impact on LW's culture.
Proposed solution: add lots of subdivisions with different requirements.
I had a couple of ideas like this myself and I chose to cull them before doing this poll for these reasons:
The problem with splitting the discussions is that then we'd end up with people having the same discussions in multiple different places. The different posts would not have all the information, so you'd have to read several times as much if you wanted to get it all. That would reduce the efficiency of the LessWrong discussions to a point where most would probably find it maddening and unacceptable.
We could demand that users stick to a limited number of subjects within their subdivision, but then discussion would be so limited that user experience would not resemble participation in a subculture. Or, more likely, it just wouldn't be enforced thoroughly enough to stop people from talking about what they want, and the dreaded plethora of duplicated discussions would still result.
The best alternative to this as far as I'm aware is to send the users who are disruptively bad at rational thinking skills to CFAR training.
That seems like an inefficient use of CFAR training (and so an inefficient use of whatever resources that would have to be used to pay CFAR for such training). I'd prefer to just cull those disruptively bad at rational thinking entirely. Some people just cannot be saved (in a way that gives an acceptable cost/benefit ratio). I'd prefer to save whatever attention or resources I was willing to allocate to people-improvement for those that already show clear signs of having thinking potential.
I am among those absolutely hardest to save, having an actual mental illness. Yet this place is the only thing saving me from utter oblivion and madness. Here is where I have met my only real friends ever. Here is the only thing that gives me any sense of meaning, reason to survive, or glimmer of hope. I care fanatically about it.
Many of the rules that have been proposed, or for that matter even the amount of degradation that has ALREADY occurred: if that had been the case a few years ago, I wouldn't exist; this body would either be rotting in the ground or literally occupied by an inhuman monster bent on the destruction of all living things.
I'm fascinated. (I'm a psychology enthusiast who refuses to get a psychology degree because I find many of the flaws with the psychology industry unacceptable). I am very interested in knowing how LessWrong has been saving you from utter oblivion and madness. Would you mind explaining it? Would it be alright with you if I ask you which mental illness?
Would you please also describe the degradation that has occurred at LW?
I'd rather not talk about it in detail, but it boils down to LW in general promoting sanity and connecting smart people. That extra sanity can be used to cancel out insanity, not just to create super-sanes.
Degradation: Lowered frequency of insightful and useful content, increased frequency of low quality content.
I have to admit I am not sure whether to be more persuaded by you or Armok. I suppose what it would come down to is a cost/benefit calculation that takes into account the amount of destruction saved by the worst as well as the amount of benefit produced by the best. Brilliant people can have quite an impact indeed, but they are rare and it is easier to destroy than to create, so it is not readily apparent to me which group it would be more beneficial to focus on, or if both, in what amount.
Practically speaking, though, CFAR has stated that they have plans to make web apps to help with rationality training and training materials for high schoolers. It seems to me that they have an interest in targeting the mainstream, not just the best thinkers.
I'm glad that someone is doing this, but I also have to wonder if that will mean more forum referrals to LW from the mainstream...
Ctrl+C, Ctrl+V, problem solved.
If you're suggesting that duplicated discussions can be solved with paste, then you are also suggesting that we not make separate areas.
Think about it.
I suppose you might be suggesting that we copy the OP and not the comments. Often the comments have more content than the OP, and often that content is useful, informative and relevant. So, in the comments we'd then have duplicated information that varied between the two OP copies.
So, we could copy the comments over to the other area... but then they're not separate...
Not seeing how this is a solution. If you have some different clever way to apply Ctrl+C, Ctrl+V then please let me know.
No, because only the top content in each area would be shared to the others.
This creates a trivial inconvenience.
So add a "promote" button that basically does the same automatically.
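The idea of sharing only the top content in each area, with a "promote" button doing it automatically, could be sketched like this. This is a minimal illustration of the proposal, not the LessWrong codebase; the threshold value, function, and field names are all assumptions:

```python
# Hypothetical sketch of the "promote" proposal: top-rated content in each
# subdivision is mirrored to the others, everything else stays local.
# PROMOTE_THRESHOLD and the post/area names are illustrative assumptions.

PROMOTE_THRESHOLD = 10  # assumed karma score needed before a post is shared

def visible_areas(post, home_area, all_areas):
    """Return the list of areas a post should appear in."""
    if post["score"] >= PROMOTE_THRESHOLD:
        return list(all_areas)  # promoted content is shared everywhere
    return [home_area]          # everything else stays in its own area
```

Under this sketch, the duplicated-discussion worry above only applies to the small fraction of posts that cross the threshold; a manual "promote" button would simply replace the automatic score check with a moderator action.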