Our Phyg Is Not Exclusive Enough
EDIT: Thanks to people not wanting certain words google-associated with LW: Phyg
Lesswrong has the best signal/noise ratio I know of. This is great. This is why I come here. It's nice to talk about interesting rationality-related topics without people going off the rails about politics/fail philosophy/fail ethics/definitions/etc. This seems to be possible because a good number of us have read the lesswrong material (sequences, etc), which inoculates us against that kind of noise.
Of course Lesswrong is not perfect; there is still noise. Interestingly, most of it is from people who have not read some sequence and thereby make the default mistakes or don't address the community's best understanding of the topic. We are pretty good about downvoting and/or correcting posts that fail at the core sequences, which is good. However, there are other sequences, too, many of them critically important to not failing at metaethics/thinking about AI/etc.
I'm sure you can think of some examples of what I mean. People saying things that you thought were utterly dissolved in some post or sequence, but they don't address that, and no one really calls them out. I could dig up a bunch of quotes but I don't want to single anyone out or make this about any particular point, so I'm leaving it up to your imagination/memory.
It's actually kind of frustrating seeing people make these mistakes. You could say that if I think someone needs to be told about the existence of some sequence they should have read before posting, I ought to tell them, but that's actually not what I want to do with my time here. I want to spend my time reading and participating in informed discussion. A lot of us do end up engaging with mistaken posts, but that lowers the quality of discussion here because so much time and space has been spent battling ignorance instead of advancing knowledge and discussing real problems.
It's worse than just "oh here's some more junk I have to ignore or downvote", because the path of least resistance ends up being "ignore any discussion that contains contradictions of the lesswrong scriptures", which is obviously bad. There are people who have read the sequences and know the state of the arguments and still have some intelligent critique, but it's quite hard to tell the difference between that and someone explaining for the millionth time the problem with "but won't the AI know what's right better than humans?". So I just ignore it all and miss a lot of good stuff.
Right now, the only stuff I can be reasonably sure is intelligent, informed, and interesting is the promoted posts. Everything else is a minefield. I'd like there to be something similar for discussion/comments. Some way of knowing "these people I'm talking to know what they are talking about" without having to dig around in their user history or whatever. I'm not proposing a particular solution here, just saying I'd like there to be more high-quality discussion between more properly sequenced LWers.
There is a lot of worry on this site about whether we are too exclusive or too phygish or too harsh in our expectation that people be well-read, which I think is misplaced. It is important that modern rationality have a welcoming public face and somewhere that people can discuss without having read three years worth of daily blog posts, but at the same time I find myself looking at the moderation policy of the old sl4 mailing list and thinking "damn, I wish we were more like that". A hard-ass moderator righteously wielding the banhammer against cruft is a good thing and I enjoy it where I find it. Perhaps these things (the public face and the exclusive discussion) should be separated?
I've recently seen someone saying that no-one complains about the signal/noise ratio on LW, and therefore we should relax a bit. I've also seen a good deal of complaints about our phygish exclusivity, the politics ban, the "talk to me when you read the sequences" attitude, and so on. I'd just like to say that I like these things, and I am complaining about the signal/noise ratio on LW.
Lest anyone get the idea that no-one thinks LW should be more phygish or more exclusive, let me hereby register that I for one would like us to all enforce a little more strongly that people read the sequences and even agree with them in a horrifying manner. You don't have to agree with me, but I'd just like to put out there as a matter of fact that there are some of us that would like a more exclusive LW.
Comments (513)
When EY was writing the sequences, what percentage of the population was he hoping to influence? I suppose a lot. Now some people are bothered because the message began to spread and in the meantime the quality of posts is not the same. Well, if the discussion becomes poor, go somewhere else. Highly technical people simply don't get involved in something they see as hopeless or uninteresting, like trying to make people more rational or reduce x-risks.
I haven't read most of the sequences yet, and I agree with most of what is said by those LW members you'd like to see more of.
Most of the criticisms I voice are actually rephrased and forwarded arguments and ideas from people much smarter and more impressive than me. Including big names like Douglas Hofstadter. Quite a few of them have read all of the sequences too.
Here is an example from yesterday. I told an AI researcher about a comment made on LW (don't worry about possible negative influence; they are already well aware of everything and have read the sequences). Here is part of the reply:
...
I would usually rephrase this at some point and post it as a reply.
And this is just one of many people who simply don't bother to get into incredibly exhausting debates with a lesswrong mob.
Without me your impression that everyone agrees with you would be even worse. And by making this community even more exclusive you will get even more out of touch with reality.
It is relatively easy to believe that the only people who would criticize your beloved beliefs are some idiots like me who haven't even read your scriptures. Guess again!
That post is part of the reason I made this post. Shit like this from the OP there:
!!!
I don't expect that if everyone made more of an effort to be more deeply familiar with the LW materials that there would be no disagreement with them. There is and would be much more interesting disagreement, and a lot less of the default mistakes.
Can you provide some examples of interesting disagreement with the LW materials that was acknowledged as such by those who wrote the content or believe that it is correct?
Um, you seem to me to be saying that someone (davidad) who is in fact familiar with the sequences, and who left AI to achieve things well past most of LW's participants, is a perfect example of who you don't want here. Is that really what you meant to put across?
UDT can be seen as just this. It was partly inspired/influenced by AIXI anyway, if not exactly an extension of it. Edit: It doesn't incorporate a notion of friendliness yet, but is structured so that unlike AIXI, at least in principle such a notion could be incorporated. See the last paragraph of Towards a New Decision Theory for some idea of how to do this.
Upvoted.
I agree pretty much completely and I think if you're interested in Less Wrong-style rationality, you should either read and understand the sequences (yes, all of them), or go somewhere else. Edit, after many replies: This claim is too strong. I should have said instead that people should at least be making an effort to read and understand the sequences if they wish to comment here, not that everyone should read the whole volume before making a single comment.
There are those who think rationality needs to be learned through osmosis or whatever. That's fine, but I don't want it lowering the quality of discussion here.
I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general. This is probably one of the reasons why.
An IRC conversation I had a while ago left me with a powerful message: people will give lip service to keeping the gardens, but when it comes time to actually do it, nobody is willing to.
Demanding that people read tomes of text before you're willing to talk to them seems about the easiest way imaginable to silence any possible dissent. Anyone who disagrees with you won't bother to read your holy books, and anyone who hasn't will be peremptorily ignored. You're engaging in a pretty basic logical fallacy in an attempt to preserve rationality. Engage the argument, not the arguer.
Expecting your interlocutors to have a passing familiarity with the subject under discussion is not a logical fallacy.
There are ways to have a passing familiarity with rational debate that don't involve reading a million words of Eliezer Yudkowsky's writings.
That has nothing to do with whether something you believe to be a logical fallacy actually is one.
I'm insulted (not in an emotional way! I just want to state my strong personal objection!). Many of us challenge the notion of "progress" being possible or even desirable on topics like Torture vs Specks. And while I've still much to learn, there are people like Konkvistador, who's IMO quite adept at resisting the lure of naive utilitarianism and can put a "small-c conservative" (meaning not ideologically conservative, but technically so) approach to metaethics to good use.
Um, after I read the sequences I ploughed through every LW post from the start of LW to late 2010 (when I started reading regularly). What I saw was that the sequences were revered, but most of the new and interesting stuff from that intervening couple of years was ignored. (Though it's probably just me.)
At this point A Group Is Its Own Worst Enemy is apposite. Note the description of the fundamentalist smackdown as a stage communities go through. Note it also usually fails when it turns out the oldtimers have differing and incompatible ideas on what the implicit constitution actually was in the good old days.
tl;dr declarations of fundamentalism heuristically strike me as inherently problematic.
edit: So what about this comment rated a downvote?
edit 2: ah - the link to the Shirky essay appears to be giving the essay in the UK, but Viagra spam in the US o_0 I've put a copy up here.
I don't consider myself a particularly patient person when it comes to tolerating ignorance or stupidity but even so I don't much mind if people here contribute without having done much background reading. What matters is that they don't behave like an obnoxious prat about it and are interested in learning things.
I do support enforcing high standards of discussion. People who come here straight from their highschool debate club and Introduction to Philosophy 101 and start throwing around sub-lesswrong-standard rhetoric should be downvoted. Likewise for confident declarations of trivially false things. There should be more correction of errors that would probably be accepted (or even rewarded) in many other contexts. These are the kind of thing that don't actively exclude but do have the side effect of raising the barrier to entry. A necessary sacrifice.
The core-sequence fail gets downvoted pretty reliably. I can't say the same for metaethics or AI stuff. We need more people to read those sequences so that they can point out and downvote failure.
Point taken. There is certainly a lack along those lines.
Isn't the metaethics sequence not liked very much? I haven't read it in a while, and so I'm not sure that I actually read all of the posts, but I found what I read fairly squishy, and not even on the level of, say, Nietzsche's moral thought.
Downvoting people for not understanding that beliefs constrain expectation I'm okay with. Downvoting people for not agreeing with EY's moral intuitions seems... mistaken.
The metaethics sequence is a bit of a mess, but the point it made is important, and it doesn't seem like it's just some weird opinion of Eliezer's.
After I read it I was like, "Oh, ok. Morality is easy. Just do the right thing. Where 'right' is some incredibly complex set of preferences that are only represented implicitly in physical human brains. And it's OK that it's not supernatural or 'objective', and we don't have to 'justify' it to an ideal philosophy student of perfect emptiness". Fake utility functions, and Recursive justification stuff helped.
Maybe there's something wrong with Eliezer's metaethics, but I haven't seen anyone point it out, and have no reason to suspect it. Most of the material that contradicts it is obvious mistakes from just not having read and understood the sequences, not an enlightened counter-analysis.
Try actually applying it to some real life situations and you'll quickly discover the problems with it.
such as?
There's a difference between a metaethics and an ethical theory.
The metaethics sequence is supposed to help dissolve the false dichotomy "either there's a metaphysical, human-independent Source Of Morality, or else the nihilists/moral relativists are right". It's not immediately supposed to solve "So, should we push a fat man off the bridge to stop a runaway trolley before it runs over five people?"
For the second question, we'd want to add an Ethics Sequence (in my opinion, Yvain's Consquentialism FAQ lays some good groundwork for one).
Hm. I think I'll put on my project list "reread the metaethics sequence and create an intelligent reply." If that happens, it'll be at least two months out.
Beliefs are only sometimes about anticipation. LessWrong repeatedly makes huge errors when it interprets "belief" in such a naive fashion; giving LessWrong a semi-Bayesian justification for this collective failure of hermeneutics is unwise. Maybe beliefs "should" be about anticipation, but LessWrong, like everybody else, can't reliably separate descriptive and normative claims, which is exactly why this "beliefs constrain anticipation" thing is misleading. ...There's a neat level-crossing thingy in there.
EY thinking of meta-ethics as a "solved problem" is one of the most obvious signs that he's very spotty when it comes to philosophy and can't really be trusted to do AI theory.
(Apologies if I come across as curmudgeonly.)
This is a pretty hardcore assertion.
I am thinking of lukeprog's and Yvain's stuff as counterexamples.
I think of them (and certain others) as exceptions that prove the rule. If you take away the foundation of the sequences and the small number of awesome people (most of whom, mind you, came here because of Eliezer's sequences), you end up with a place that's indistinguishable from the programmer/atheist/transhumanist/etc. crowd, which is bad if LW is supposed to be making more than nominal progress over time.
Standard disclaimer edit because I have to: The exceptions don't prove the rule in the sense of providing evidence for the rule (indeed, they are technically evidence contrariwise), but they do allow you to notice it. This is what the phrase really means.
Exceptions don't prove rules.
You are mostly right, which is exactly what I was getting at with the "promoted is the only good stuff" comment.
I do think there is a lot of interesting, useful stuff outside of promoted, tho, it's just mixed with the usual programmer/atheist/transhumanist/etc-level stuff.
Considering how it was subculturally seeded, this should not be surprising. Remember that LW has proceeded in a more or less direct subcultural progression from the Extropians list of the late '90s, with many of the same actual participants.
It's an online community. As such, it's a subculture and it's going to work like one. So you'll see the behaviour of an internet forum, with a bit of the topical stuff on top.
How would you cut down the transhumanist subcultural assumptions in the LW readership?
(If I ever describe LW to people these days it's something like "transhumanists talking philosophy." I believe this is an accurate description.)
Transhumanism isn't the problem. The problem is that when people don't read the sequences, we are no better than any other forum of that community. Too many people are not reading the sequences, and not enough people are calling them out on it.
I'd always thought they prove the rule in the sense of testing it.
<shameless self-promotion> My recent post explains how to get true beliefs in situations like the anthropic trilemma, which post begins with the words "speaking of problems I don't know how to solve." </shameless self-promotion>
However, there is a bit of a remaining problem, since I don't know how to model the wrong way of doing things (naive application of Bayes' rule to questionable interpretations) well enough to tell whether it's fixable or not. So although the problem is solved, it is not dissolved.
I quietly downvoted your post when you made it for its annoying style and because I didn't think it really solved any problems, just asserted that it did.
What could I do to improve the style of my writing?
First they came for the professional philosophers,
and I didn't speak out because I wasn't a professional philosopher.
Then they came for the frequentists,
and I didn't speak out because I wasn't a frequentist.
Then they came for the AI skeptics,
and I didn't speak out because I wasn't skeptical of AI.
and then there was no one left to talk to.
I personally come to Less Wrong specifically for the debates (well, that, and HP:MoR Wild Mass Guessing). Therefore, raising the barrier to entry would be exactly the opposite of what I want, since it would eliminate many fresh voices, and limit the conversation to those who'd already read all of the sequences (a category that would exclude myself, now that I think about it), and agree with everything said therein. You can quibble about whether such a community would constitute a "phyg" or not, but it definitely wouldn't be a place where any productive debate could occur. People who wholeheartedly agree with each other tend not to debate.
Oh, and by the way, there are other strategies for dispelling the public perception of your community being a "phyg", besides using rot13. Not being an ultra-exclusive "phyg" is one of such strategies. If you find yourself turning to rot13 instead, then IMO the battle has already been lost.
A 'debate club' mindset is one of the things I would try to avoid. Debates emerge when there are new ideas to be expressed and new outlooks or bodies of knowledge to consider - and the supply of such is practically endless. You don't go around trying to artificially encourage an environment of ignorance just so some people are sufficiently uninformed that they will try to argue trivial matters. That's both counterproductive and distasteful.
I would not be at all disappointed if a side effect of maintaining high standards of communication causes us to lose some participants who "come to Less Wrong specifically for the debates". Frankly, that would be among the best things we could hope for. That sort of mindset is outright toxic to conversations and often similarly deleterious to the social atmosphere.
I wasn't suggesting we do that, FWIW.
I think there's a difference between flame wars and informed debate. I'm in favor of the latter, not the former. On the other hand, I'm not a big fan of communities where everyone agrees with everyone else. I acknowledge that they can be useful as support groups, but I don't think that LW is a support group, nor should it become one. Rationality is all about changing one's beliefs, after all...
Debate is a tool for achieving truth. Why is that such a terrible thing?
I didn't say it was. Please read again.
You said that we should avoid debate because it's bad for the social atmosphere. I'm not seeing much difference.
No I didn't. I said we should avoid creating a deliberate environment of ignorance just so that debate is artificially supported. To the extent that debate is a means to an end it is distinctly counterproductive to deliberately sabotage that same end so that more debate is forced.
See also: Lost purpose.
Upon rereading, I think I see what you're getting at, but you seem to be arguing from the principle that creating ignorance is the preferred way to create debate. That seems, ahem, non-obvious to me. There's no shortage of topics where informed debate is possible, and seeking to debate those does not require (and, in fact, generally works against) promoting ignorance. Coming here for debate does not imply wanting to watch an intellectual cripplefight.
I seem to be coming from a position of making a direct reply to Bugmaster with the specific paragraph I was replying to quoted. That should have made the meaning more obvious to you.
Which is what I myself advocated with:
I don't see why having the debate at a higher level of knowledge would be a bad thing. Just because everyone is familar with a large bit of useful common knowledge doesn't mean no-one disagrees with it, or that there is nothing left to talk about. There are some LW people who have read everything and bring up interesting critiques.
Imagine watching a debate between some uneducated folks about whether a tree falling in a forest makes a sound or not. Not very interesting. Having read the sequences, it's the same sort of boring as someone explaining for the millionth time that "no, technological progress or happiness is not a sufficient goal to produce a valuable future, and yes, an AI coded with that goal would kill us all, and it would suck".
The point of my post was that that is not an acceptable solution.
Firstly, a large proportion of the Sequences do not constitute "knowledge", but opinion. It's well-reasoned, well-presented opinion, but opinion nonetheless -- which is great, IMO, because it gives us something to debate about. And, of course, we could still talk about things that aren't in the sequences, that's fun too. Secondly:
No, it's not very interesting to you and me, but to the "uneducated folks" whom you dismiss so readily, it might be interesting indeed. Ignorance is not the same as stupidity, and, unlike stupidity, it's easily correctable. However, kicking people out for being ignorant does not facilitate such correction.
What's your solution, then ? You say,
To me, "more exclusive LW" sounds exactly like the kind of solution that doesn't work, especially coupled with "enforcing a little more strongly that people read the sequences" (in some unspecified yet vaguely menacing way).
What is the difference between knowledge and opinion? Are the points in the sequences true or not?
Read map and territory, and understand the way of Bayes.
The thing is, there are other places on the internet where you can talk to people who have not read the sequences. I want somewhere where I can talk to people who have read the LW material, so that I can have a worthwhile discussion without getting bogged down by having to explain that there's no qualitative difference between opinion and fact.
I don't have any really good ideas about how we might be able to have an enlightened discussion and still be friendly to newcomers. Identifying a problem and identifying myself among people who don't want a particular type of solution (relaxing LW's phygish standards), doesn't mean I support any particular straw-solution.
Some proportion of them (between 0 and 100%) are true, others are false or neither. Not being omniscient, I can't tell you which ones are which; I can only tell you which ones I believe are likely to be true with some probability. The proportion of those is far smaller than 100%, IMO.
See, it's exactly this kind of ponderous verbiage that leads to the necessity for rot13-ing certain words.
I believe that there is a significant difference between opinion and fact, though arguably not a qualitative one. For example, "rocks tend to fall down" is a fact, but "the Singularity is imminent" is an opinion -- in my opinion -- and so is "we should kick out anyone who hadn't read the entirety of the Sequences".
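The fact/opinion distinction being argued here is, in Bayesian terms, just a difference in degree of belief after updating on evidence. A minimal numeric sketch of Bayes' rule makes the point; all the probabilities below are invented for illustration, not measurements:

```javascript
// Bayes' rule: P(H|E) = P(H)·P(E|H) / [P(H)·P(E|H) + P(¬H)·P(E|¬H)].
// Every number here is made up purely to illustrate the continuum.
function bayesUpdate(prior, pEgivenH, pEgivenNotH) {
  const numerator = prior * pEgivenH;
  return numerator / (numerator + (1 - prior) * pEgivenNotH);
}

// "Rocks tend to fall down": evidence overwhelmingly favors the
// hypothesis, so the posterior lands near 1 -- what we call a "fact".
const fact = bayesUpdate(0.5, 0.999, 0.001);

// "The Singularity is imminent": evidence only weakly favors the
// hypothesis, so the posterior stays modest -- an "opinion".
const opinion = bayesUpdate(0.5, 0.6, 0.4);
```

On this view the difference between the two examples is quantitative (0.999 vs 0.6, say), which is consistent with the claim above that the gap is significant but not qualitative.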
When you said "we should make LW more exclusive", what did you mean, then ?
In any case, I do have a solution for you: why don't you just code up a Greasemonkey scriptlet (or something similar) to hide the comments of anyone with less than, say, 5000 karma ? This way you can browse the site in peace, without getting distracted by our pedestrian mutterings. Better yet, you could have your scriptlet simply blacklist everyone by default, except for certain specific usernames whom you personally approve of. Then you can create your own "phyg" and make it as exclusive as you want.
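The Greasemonkey idea above could look something like the sketch below. The `.comment` and `.karma` selectors are guesses at the site's markup, not its real class names, and the threshold is the 5000 suggested in the comment; only the filtering logic itself is definite:

```javascript
// Hypothetical userscript: hide comments from users below a karma
// threshold. Selector names are assumptions about the page's DOM.
const KARMA_THRESHOLD = 5000;

// Pure helper: a comment is hidden only when its karma parsed to a
// real number and that number is below the threshold.
function shouldHide(karma, threshold) {
  return Number.isFinite(karma) && karma < threshold;
}

// DOM wiring; runs only in a browser context.
if (typeof document !== "undefined") {
  for (const comment of document.querySelectorAll(".comment")) {
    const karmaNode = comment.querySelector(".karma");
    const karma = karmaNode ? parseInt(karmaNode.textContent, 10) : NaN;
    if (shouldHide(karma, KARMA_THRESHOLD)) {
      comment.style.display = "none";
    }
  }
}
```

The whitelist variant also suggested above would just swap the karma test for a lookup in a set of approved usernames.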
I mean that I'd like to be able to participate in discussion with better (possibly phygish) standards. Lesswrong has a lot of potential and I don't think we are doing as well as we could on the quality-of-discussion front. And I think making Lesswrong purely more open and welcoming, without doing something to keep a high level of quality somewhere, is a bad idea. And I'm not afraid of being a phyg.
That's all, nothing revolutionary.
It seems like my proposed solution would work for you, then. With it, you can ignore anyone who isn't enlightened enough, while keeping the site itself as welcoming and newbie-friendly as it currently is.
I'm not afraid of it either, I just don't think that power-sliding down a death spiral is a good idea. I don't need people to tell me how awesome I am, I want them to show me how wrong I am so that I can update my beliefs.
Specifically 'the way of'. Would you have the same objection with 'and understand how bayesian updating works'? (Objection to presumptuousness aside.)
Probably. The same sentiment could be expressed as something like this:
This phrasing is still a bit condescending, but a) it gives an actual link for me to read and educate my ignorant self, and b) it makes the speaker sound merely like a stuck-up long-timer, instead of a creepy phyg-ist.
Educating people is like that!
What I would have said about the phrasing is that it is wrong.
Merely telling people that they aren't worthy is not very educational; it's much better to tell them why you think they aren't worthy, which is where the links come in.
Sure, but I have no problem with people being wrong, that's what updating is for :-)
Huh? This was your example, one you advocated and one that includes a link. I essentially agreed with one of your points - your retort seems odd.
Huh again? You seemed to have missed a level of abstraction.
I think the barrier to entry is high enough: the signal-to-noise ratio is high, and if you only read high-karma posts and comments you are guaranteed to get substance.
As for forcing people to read the entire Sequences, I'd say rationalwiki's critique is very appropriate (below). I myself have only read ~20% of the Sequences, and by focusing on the core sequences and highlighted articles, have recognized all the ideas/techniques people refer to in the main-page and discussion posts.
Downvoted for linking to that site.
... what?
It's both funny and basically accurate. I'd say it's a perfectly good link.
David is making a joke, because he wrote most of the content of that article.
Oh, that explains a lot!
Tetronian started the article, so it's his fault actually, even if he's pretty much moved here.
I have noted before that taking something seriously because it pays attention to you is not in fact a good idea. Every second that LW pays a blind bit of notice to RW is a second wasted.
See also this comment on the effects of lack of outside world feedback, and a comparison to Wikipedia (which basically didn't get any outside attention for four or five years and is now part of the infrastructure of society, at which I still boggle).
And LW may or may not be pleased that even on RW, when someone fails logic really badly the response is often couched in LW terms. So, memetic infections ahoy! Think of RW as part of the Unpleasable Fanbase.
Memetic hazard warning!
ITYM superstimulus ;-)
Kahneman and Tversky's Thinking Fast and Slow is basically the sequences + some statistics - AI and metaethics in (shorter) book form (well actually, the other way around, as the book was there first). So perhaps we should say "read the sequences, or that book, or otherwise learn the common mistakes".
Can someone verify this for me? I've heard good things about the authors but my prior for that book containing everything in the (or most of the) sequences is rather low.
The reasoning for downvote on this suggestion is not clear. What does the downvoter actually want less of?
You should try reading the other 80% of the sequences.
Fuck you.
As far as I can tell (low votes, some in the negative, few comments), the QM sequence is the least read of the sequences, and yet makes a lot of EY's key points used later on identity and decision theory. So most LW readers seem not to have read it.
Suggestion: a straw poll on who's read which sequences.
A poll would be good.
I've read the QM sequence and it really is one of the most important sequences. When I suggest this at meetups and such, people seem to be under the impression that it's just Eliezer going off topic for a while and totally optional. This is not the case, the QM sequence is used like you said to develop a huge number of later things.
Something I recall noticing at the time I read said posts is that some of the groundwork you mention didn't necessarily need to be in with the QM. Sure, there are a few points that you can make only by reference to QM but many of the points are not specifically dependent on that part of physics. (ie. Modularization fail!)
That there are no individual particles is a point of philosophical import that would be difficult to make without bludgeoning it home, since the possibility is such a strong implicit philosophical assumption, and it may be surprising that physics has actually delivered the smackdown. But yeah, even that could be moved elsewhere with effort. But then again, the sequences are indeed being revised and distilled into publishable rather than blog form...
Yes, that's the one thing that really relies on it. And the physics smackdown was surprising to me when I read it.
Ideal would seem to be having the QM sequence and then later having an identity sequence wherein one post does an "import QM;".
Of course the whole formal 'sequence' notion is something that was invented years later. These are, after all, just a stream of blog posts that some guy spat out extremely rapidly. At that time they were interlinked as something of a DAG, with a bit of clustering involved for some of the bigger subjects.
I actually find the whole 'sequence' focus kind of annoying. In fact I've never read the sequences. What I have read a couple of times is the entire list of blog posts for several years. This includes some of my favorite posts which are stand alone and don't even get a listing in the 'sequences' page.
I've read it, but I took away less from it than any of the other sequences. Reading any of the other sequences, I can agree or disagree with the conclusion and articulate why. With the QM sequence, my response is more along the lines of "I can't treat this as very strong evidence of anything because I don't think I'm qualified to tell whether it's correct or not." Eliezer's not a physicist either, although his level of fluency is above mine, and while I consider him a very formidable rationalist as humans go, I'm not sure he really knows enough to draw the conclusions he does with such confidence.
I've seen the QM sequence endorsed by at least one person who is a theoretical physicist, but on the other hand, I've read Mitchell Porter's criticisms of Eliezer's interpretation and they sound comparably plausible given my level of knowledge, so I'm not left thinking I have much more grounds to favor any particular quantum interpretation than when I started.
I've seen enough of the QM sequence and know enough QM to see that Eliezer stopped learning quantum mechanics before getting to density matrices. As a result, the conclusions he draws from QM rely on metaphysical assumptions and seem rather arbitrary if one knows more quantum mechanics. In the comments to this post Scott Aaronson tries to explain this to Eliezer without much success.
Working on it.
In all seriousness though, I often find the Sequences pretty cumbersome and roundabout. Eliezer assumes a pretty large inferential gap for each new concept, and a lot of the time the main point of an article would only need a sentence or two for it to click for me. Obviously this makes it more accessible for concepts that people are unfamiliar with, but right now it's a turn-off and definitely is a body of work that will be greatly helped by being compressed into a book.
Stop using that word.
What word?
The only word that shouldn't be used, for reasons that extend to not even identifying it. (Google makes no use/mention distinction.)
"In a riddle whose answer is chess, what is the only prohibited word?"
In fact, edit your post now please Nyan. Apart from that it's an excellent point. "Community", "website" or just about anything else.
"You're a ...." is already used as a fully general counterargument. Don't encourage it!
Upvoted for agreeing and for reminding me to re-read a certain part of the sequences. I loathe fully general counterarguments, especially that one.
That being said, would it be appropriate for you to edit your own comment to remove said word? I don't know (to any significant degree) how Google's search algorithms work, but I suspect that having that word in your comment also negatively affects the suggested searches.
Oh, yeah, done.
I want to keep the use of the word, but to hide it from Google I have replaced it with its rot13: phyg
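(For anyone unfamiliar: rot13 is just a Caesar cipher with a shift of 13, so applying it twice gets you the original word back. A minimal sketch using Python's standard-library codec:)

```python
import codecs

# rot13 shifts each letter 13 places through the alphabet; since
# 13 + 13 = 26, applying it twice is the identity, so the same
# function both encodes and decodes.
def rot13(text: str) -> str:
    return codecs.encode(text, "rot13")

print(rot13("cult"))  # -> phyg
print(rot13("phyg"))  # -> cult
```

Non-letters pass through unchanged, which is why the word stays pronounceable-ish and greppable by humans while being invisible to a literal Google search.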
The C-word is still there in the post URL!
Well, yes and no.
That's much better!
(I hadn't realised the post titles were redundant in Reddit code ...)
And now we can all relax and have a truly uninhibited debate about whether LW is a phyg. Who would have guessed that rot13 has SEO applications?
Just to be clear, we're all reading it as-is and pronouncing it like "fig", right? Because that's how I read it in my head.
I hope so, or this would make even less sense than it should.
I've been pronouncing it to rhyme with the first syllable in "tiger".
Why in the name of the mighty Cthulhu should people on LW read the sequences? To avoid discussing the same things again and again, so that we can move to the next step. Minus the discussion about definitions of the word phyg, what exactly are we talking about?
When a tree falls down in a LessWrong forest, why there is a "sound":
Because people on LW are weird. Instead of discussing natural and sane topics, such as cute kittens, iPhone prices, politics, horoscopes, celebrities, sex, et cetera, they talk about crazy stuff like thinking machines and microscopic particles. Someone should do them a favor, turn off their computers, and buy them a few beers, so that normal people can stop being afraid of them.
Because LW is trying to change the way people think, and that is scary. Things like that are OK only when the school system is doing it, because the school system is accepted by the majority. Books are usually also accepted, but only if you borrow them from a public library.
Because people on LW pretend they know some things better than everyone else, and that's an open challenge that someone should go and kick their butts, preferably literally. Only strong or popular people are allowed to appear better. What's worse, people on LW have the courage to disagree even with some popular people, and that's pretty much insane.
When a tree falls down in a LessWrong forest, why there isn't a "sound":
There are no known examples of families broken when a family member refuses to submit to eternal knowledge of the Scriptures. (Unless such stories are censored here, of course.)
There are no known examples of violence or blackmail towards a former LW participant who decided to stop reading LW. (Unless such stories are censored here, of course.)
Minus the typical internet procrastination, there are no known examples of people who have lost years of their time and thousands of dollars, ruined their social and professional lives in their blind following of the empty promises LW gave them. (Unless such stories are censored here, of course.)
What next? Any other specific accusations? If no, why in the name of the mighty Cthulhu are we even worrying about the phyg-stuff? Just because someone may find throwing such accusations funny? Are we that prone to trolling?
Let's talk about a more fruitful topic, such as: "is there a way to make the Sequences more accessible to a newcomer?"
No, that isn't it. LW isn't at all special in that respect - a huge number of specialized communities exist on the net which talk about "crazy stuff", but no one suspects them of being phygs. Your self-deprecating description is a sort of applause lights for LW that's not really warranted.
No, that isn't it. Every self-help book (of which there's a huge industry, and most of which are complete crap) is "trying to change the way people think", and nobody sees that as weird. The Khan academy is challenging the school system, and nobody thinks they're phyggish. Attempts to change the way people think are utterly commonplace, both small-scale and large-scale. And the part about books and public libraries is just weird (what?).
Unwarranted applause lights again. Everybody pretends they know some things better than everyone else. Certainly any community does that rallies around experts on some particular topic. With "preferably literally" you cross over into the whining victimhood territory.
The self-pandering here is particularly strong, almost middle-school grade stuff.
You've done a very poor job trying to explain why LW is accused of being phyggish.
This, on the other hand, is a great, very strong point that everyone who finds themselves wary of (perceived or actual) phyggishness on LW should remind themselves of. I'm thinking of myself in particular, and thank you for this strong reminder, so forcefully phrased. I have to be doing something wrong, since I frequently ponder about this or that comment on LW that seems to exemplify phyggish thinking to me, but I never counter to myself with something like what I just quoted.
redacted
Seriously?
Which part of my comment are you incredulous about?
That nobody sees self-help books as weird or cultlike.
It's not the Googleability of "phyg". One recent real-life example is a programmer who emailed me deeply concerned (because I wrote large chunks of the RW article on LW). They were seriously worried about LessWrong's potential for decompartmentalising really bad ideas, given the strong local support for complete decompartmentalisation, by this detailed exploration of how to destroy semiconductor manufacture to head off the uFAI. I had to reassure them that Gwern really is not a crazy person and had no intention of sabotaging Intel worldwide, but was just exploring the consequences of local ideas. (I'm not sure this succeeded in reassuring them.)
But, y'know, if you don't want people to worry you might go crazy-nerd dangerous, then not writing up plans for ideology-motivated terrorist assaults on the semiconductor industry strikes me as a good start.
Edit: Technically just sabotage, not "terrorism" per se. Not that that would assuage qualms non-negligibly.
On your last point, I have to cite our all-*cough*-wise Professor Quirrell
Nevermind that there were no actual plans for destroying fabs, and that the whole "terrorist plot" seems to be a collective hallucination.
Nevermind that the author in question has exhaustively argued that terrorism is ineffective.
Nevermind the fact that LW actually believes that uFAI has infinitely negative utility and that FAI has infinitely positive utility (see arguments for why SIAI is the optimal charity). That people conclude that acts that most people would consider immoral are justified by this reasoning, well I don't know where they got that from. Certainly not these pages.
Ordinarily, I would count on people's unwillingness to act on any belief they hold that is too far outside the social norm. But that kind of thinking is irrational, and irrational restraint has a bad rep here ("shut up and calculate!")
LW scares me. It's straightforward to take the reasoning of LW and conclude that terrorism and murder are justified.
Yeah, but he didn't do it right there in that essay. And saying "AI is dangerous, stopping Moore's Law might help, here's how fragile semiconductor manufacture is, just saying" still read to someone (including several commenters on the post itself) as bloody obviously implying terrorism.
You're pointing out it doesn't technically say that, but multiple people coming to that essay have taken it that way. You can say "ha! They're wrong", but I nevertheless submit that if PR is a consideration, the essay strikes me as unlikely to be outweighed by using rot13 for SEO.
Yes, I accept that it's a problem that everyone and their mother leapt to the false conclusion that he was advocating terrorism. I'm not saying anything like "Ha! They're wrong!" I'm lamenting the lamentable state of affairs that led to so many people to jump to a false conclusion.
"Just saying" is really not a disclaimer at all. c.f. publishing lists of abortion doctors and saying you didn't intend lunatics to kill them - if you say "we were just saying", the courts say "no you really weren't."
We don't have a demonstrated lunatic hazard on LW (though we have had unstable people severely traumatised by discussions and their implications, e.g. Roko's Forbidden Thread), but "just saying" in this manner still brings past dangerous behaviour along these lines to mind; and, given that decompartmentalising toxic waste is a known nerd hazard, this may not even be an unreasonable worry.
As far as I can tell, "just saying" is a phrase you introduced to this conversation, and not one that appears anywhere in the original post or its comments. I don't recall saying anything about disclaimers, either.
So what are you really trying to say here?
It's a name for the style of argument: that it's not advocating people do these things, it's just saying that uFAI is a problem, slowing Moore's Law might help and by the way here's the vulnerabilities of Intel's setup. Reasonable people assume that 2 and 2 can in fact be added to make 4, even if 4 is not mentioned in the original. This is a really simple and obvious point.
Note that I am not intending to claim that the implication was Gwern's original intention (as I note way up there, I don't think it is); I'm saying it's a property of the text as rendered. And that me saying it's a property of the text is supported by multiple people adding 2 and 2 for this result, even if arguably they're adding 2 and 2 and getting 666.
I understood "just saying" as a reference to the argument you imply here. That is, you are treating the object-level rejection of terrorism as definitive and rejecting the audience's inference of endorsement of terrorism as a simple error, and DG is observing that treating the object-level rejection as definitive isn't something you can take for granted.
Meaning does not excuse impact, and on some level you appear to still be making excuses. If you're going to reason about impressions (I'm not saying that you should, it's very easy to go too far in worrying about sounding respectable), you should probably fully compartmentalize (ha!) whether a conclusion a normal person might reach is false.
I agree that it's not fair to blame LW posters for the problem. However, I can't think of any route to patching the problem that doesn't involve either blaming LW posters, or doing nontrivial mind alterations on a majority of the general population.
Anyway, we shouldn't make it too easy for people to reach the false conclusion, and we should err on the side of caution.
Having said this, I join your lamentations.
Thanks for the comments. What I wrote was exaggerated, written under strong emotions, when I realized that the whole phyg discussion does not make sense, because there is no real harm, only some people made nervous by some pattern matching. So I tried to list the patterns which match... and then those which don't.
My assumption is that there are three factors which together make the bad impression; separately they are less harmful. Being only "weird" is pretty normal. Being "weird + thorough", for example memorizing all Star Trek episodes, is more disturbing, but it only seems to harm the given individual. The majority will make fun of such individuals; they are seen as at the bottom of the pecking order, and they kind of accept it.
The third factor is when someone refuses to accept the position at the bottom. It is the difference between saying "yeah, we read sci-fi about parallel universes, and we know it's not real, ha-ha silly us" and saying "actually, our interpretation of quantum physics is right, and you are wrong, that's the fact, no excuses". This is the part that makes people angry. You are allowed to take the position of authority only if you are a socially accepted authority. (A university professor is allowed to speak about quantum physics in this manner, a CEO is allowed to speak about money this way, a football champion is allowed to speak about football this way, etc.) This is breaking a social rule, and it has consequences.
A self-help book is safe. A self-help organization, not so much. (I mean an organization of people trying to change themselves, such as Alcoholics Anonymous, not a self-help publishing/selling company.)
They are supplementing the school system, not criticizing it. The schools can safely ignore them. Khan Academy is admired by some people, but generally it remains at the bottom of the pecking order. This would change for example if they started openly criticizing the school system, and telling people to take their children away from schools.
Generally I think that when people talk about phygs, the reason is that their instinct is saying: "inside of your group, a strong subgroup is forming". A survival reaction is to call the attention of the remaining group members to destroy this subgroup together before it becomes strong enough. You can avoid this reaction if the subgroup signals weakness, or if it signals loyalty to the current group leadership; in both cases, the subgroup does not threaten the existing order.
Assuming this instinct is real, we can't change it; we can just avoid triggering the reaction. How exactly? One way is to signal harmlessness; but this seems incompatible with our commitment to truth and the spirit of tsuyoku naritai. Another way is to fall below the radar by using obscure technical speech; but this seems incompatible with our goal of raising the sanity waterline (we must be comprehensible to the public). Yet another way is to signal loyalty to the regime, such as the Singularity Institute publishing in peer-reviewed journals. Even this is difficult, because irrationality is very popular, so by attacking irrationality we inevitably attack many popular things. We should choose our battles wisely. But this is the way I would prefer. Perhaps there is yet another way that I forgot.
If the phyg-meme gets really bad we can just rename the site "lessharmful.com".
I know you say that you don't want to end up with "ignore any discussion that contains contradictions of the lesswrong scriptures", but it sounds a bit like that. (In particular, referring to stuff like "properly sequenced LWers" suggests to me that you not only think that the sequences are interesting, but actually right about everything). The sequences are not scripture, and I think (hope!) there are a lot of LWers who disagree to a greater or lesser degree with them.
For example, I think the metaethics sequence is pretty hopeless (WARNING: Opinion based on when I last read it, which was over a year ago). Fortunately, I don't think much of the discussion here has actually hinged upon Eliezer's metaethics, so I don't think that's actually too much of an issue.
I'm not even that worried about a convinced Yudkowsky disciple "righteously wielding the banhammer"; I suspect people making intelligent points wouldn't get banned, but you seem to be suggesting that they should be ignored.
Perhaps a more constructive approach would just be to list any of the particularly salient assumptions you're making at the start of the post? e.g. "This post assumes the Metaethics sequence; if you disagree with that, go argue about it somewhere else"
Reading the comments, it feels like the biggest concern is not chasing away the initiates to our phyg. Perhaps tiered sections, where demonstrable knowledge in the last section gains you access to higher levels of signal to noise ratio? Certainly would make our phyg resemble another well known phyg.
Sounds like a good idea, would be an incentive for reading and understanding the sequences to many people and could raise the quality level in the higher 'levels' considerably. There are also downsides: We might look more phyg-ish to newbies, discussion quality at the lower levels could fall rapidly (honestly, who wants to debate about 'free will' with newbies when they could be having discussions about more interesting and challenging topics?) and, well, if an intelligent and well-informed outsider has to say something important about a topic, they won't be able to.
For this to be implemented, we'd need a user rights system with the respective discussion sections as well as a way to determine the 'level' of members. Quizzes with questions randomly drawn from a large pool of questions with a limited number of tries per time period could do well, especially if you don't give any feedback about the scoring other than 'you leveled up!' and 'Your score wasn't good enough, re-read these sequences:__ and try again later.'
And, of course, we need the consent of many members and our phyg-leaders as well as someone to actually implement it.
Instead of setting up gatekeepers, why not let people sort themselves first?
No one wants to be a bozo. We have different interests and aptitudes. Set up separate forums to talk about the major sequences, so there's some subset of the sequences you could read to get started.
I'd suggest too that as wonderful as EY is, he is not the fount of all wisdom. Instead of focusing on getting people to shut up, how about focusing on getting people to add good ideas that aren't already here?
Let's be explicit here - your suggestion is that people like me should not be here. I'm a lawyer, and my mathematics education ended at Intro to Statistics and Advanced Theoretical Calculus. I'm interested in the cognitive bias and empiricism stuff (raising the sanity line), not AI. I've read most of the core posts of LW, but haven't gone through most of the sequences in any rigorous way (i.e. read them in order).
I agree that there seem to be a number of low quality posts in discussion recently (in particular, Rationally Irrational should not be in Main). But people willing to ignore the local social norms will ignore them however we choose to enforce them. By contrast, I've had several ideas for posts (in Discussion) that I haven't posted because I don't think they meet the community's expected quality standard.
Raising the standard for membership in the community will exclude me or people like me. That will improve the quality of technical discussion, at the cost of the "raising the sanity line" mission. That's not what I want.
I'm not the one who downvoted you, but if I were to hazard a guess, I'd say you were downvoted because when you start off by saying "people like me", it immediately sets off a warning in my head. That warning says that you have not separated personal identity from your judgment process. At the very least, by establishing yourself as a member of "people like me", you signify that you have already given up on trying to be less wrong, and resigned yourself to being more wrong. (I strongly dislike using the terms "less wrong" and "more wrong" to describe elites and peasants of LW, but I'm using them to point out to you the identity you've painted for yourself.)
Also, there is /always/ something you can do about a problem. The answer to this particular problem is not, "Noobs will be noobs, let's give up".
Could I get an explanation for the downvotes?
If by "giving up on trying to be less wrong," you mean I'm never going to be an expert on AI, decision theory, or philosophy of consciousness, then fine. I think that definition is idiosyncratic and unhelpful.
Raising the sanity line does not require any of those things.
Don't put up straw men; I never said that to be less wrong, you had to do all those things. "Less wrong" represents an attitude towards the world, not an endpoint.
Then I do not understand what you mean when you say I am "giving up on trying to be less wrong"
No martyrs allowed.
I don't propose simply disallowing people who haven't read everything from being taken seriously, as long as they don't say anything stupid. It's fine if you haven't read the sequences and don't care about AI or heavy philosophy stuff; I just don't want to read dumb posts about those topics from someone who hasn't read the material.
As a matter of fact, I was careful to not propose much of anything. Don't confuse "here's a problem that I would like solved" with "I endorse this stupid solution that you don't like".
Fair enough. But I think you threw a wide net over the problem. To the extent you are unhappy that noobs are "spouting garbage that's been discussed to death" and aren't being sufficiently punished for it, you could say that instead. If that's not what you are concerned about, then I have failed to comprehend your message.
Exclusivity might solve the problem of noobs rehashing old topics from the beginning (and I certainly agree that needing to tell everyone that beliefs must make predictions about the future gets old very fast). But it would have multiple knock-on effects that you have not even acknowledged. My intuition is that evaporative cooling would be bad for this community, but your sense may differ.
I, for one, would like to see discussion of LW topics from the perspective of someone knowledgeable about the history of law; after all law is humanity's main attempt to formalize morality, so I would expect some overlap with FAI.
I don't mind people who haven't read the sequences, as long as they don't start spouting garbage that's already been discussed to death and act all huffy when we tell them so; common failure modes are "Here's an obvious solution to the whole FAI problem!", "Morality all boils down to X", and "You people are a cult, you need to listen to a brave outsider who's willing to go against the herd like me".
If you're interested in concrete feedback, I found your engagement in discussions with hopeless cases a negative contribution, which is a consideration unrelated to the quality of your own contributions (including in those discussions). Basically, a violation of "Don't feed the clueless (just downvote them)" (this post suggests widening the sense of "clueless"), which is one policy that could help with improving the signal/noise ratio. Perhaps this policy should be publicized more.
Case in point: this discussion currently includes 30 comments, an argument with a certain Clueless, most of whose contributions are downvoted-to-hidden. That discussion shouldn't have taken place, its existence is a Bad Thing. I just went through it and downvoted most of those who participated, except for the Clueless, who was already downvoted Sufficiently.
I expect a tradition of discouraging both sides of such discussions would significantly reduce their impact.
I think Monkeymind is deliberately trying to gather lots of negative karma as fast as possible. Maybe for a bet?
If the goal was -100, then writing should stop now (prediction).
While I usually share a similar sentiment, upon consideration I disagree with your prediction when it comes to the example conversation in question.
People explaining things to the Clueless is useful. Both to the person doing the explaining and anyone curious enough to read along. This is conditional on the people in the interaction having the patience to try to decipher the nature of the inferential distance and to break down the ideas into effective explanations of the concepts - including links to relevant resources. (This precludes cases where the conversation degenerates into bickering and excessive expressions of frustration.)
Trying to explain what is usually simply assumed - to a listener who is at least willing to communicate in good faith - can be a valuable experience to the one doing the explaining. It can encourage the re-examination of cached thoughts and force the tracing of the ideas back to the reasoning from first principles that caused you to believe them in the first place.
There are many conversations where downvoting both sides of a discussion is advisable, yet it isn't conversations with the "Clueless" that are the problem. It is conversations with Trolls, Dickheads and Debaters of Perfect Emptiness that need to go.
Startlingly, Googling "Debaters of Perfect Emptiness" turned up no hits. This is not the best of all possible worlds.
I accept your criticism in the spirit it was intended - but I'm not sure you are stating a local consensus instead of your personal preference. Consider the recent exchange I was involved in. It doesn't appear to me that the more wrong party has been downvoted to oblivion, and he should have been by your rule. (Specifically, the Main post has been downvoted, but not the comment discussion)
Philosophically, I think it is unfortunate that the people who believe that almost all terminal values are socially constructed are the same people who think empiricism is a useless project. I don't agree with the latter point (i.e. I think empiricism is the only true cause of human advancement), but the former point is powerful and has numerous relevant implications for Friendly AI and raising the sanity line generally. So when anti-empiricism social construction people show up, I try to persuade them that empiricism is worthwhile so that their other insights can benefit the community. Whether this persuasion is possible is a distinct question from whether the persuasion is a "good thing."
Note that your example is not that pattern, and I haven't responded to Clueless. C is anti-empiricism, but he hasn't shown anything that makes me think that he has anything valuable to contribute to the community - he's 100% confused. So it isn't worth my time to try to persuade him to be less wrong.
I'm stating an expectation of a policy's effectiveness.
I support not feeding the clueless, but I would like to emphasize that that policy should not bleed into a lack of explaining downvotes of otherwise clueful people. There aren't many things more aggravating than participating in a discussion where most of my comments get upvoted, but one gets downvoted and I never find out what the problem was--or seeing some comment I upvoted be at -2, and not knowing what I'm missing. So I'd like to ask everyone: if you downvote one comment for being wrong, but think the poster isn't hopeless, please explain your downvote. It's the only way to make the person stop being wrong.
I think your post is troubling in a couple of ways.
First, I think you draw too much of a dichotomy between "read sequences" and "not read sequences". I have no idea what the true percentage of active LW members is, but I suspect a number of people, particularly new members, are in the process of reading the sequences, like I am. And that's a pretty large task - especially if you're in school, trying to work a demanding job, etc. I don't wish to speak for you, since you're not clear on the matter, but are people in the process of reading the sequences noise? I'm only in QM, and certainly wasn't there when I started posting, but I've gotten over 1000 karma (all of it on comments or discussion level posts). I'd like to think I've added something to the community.
Secondly, I feel like entrance barriers are pretty damn high already. I touched on this in my other comment, but I didn't want to make all of these points in that thread, since they were off topic to the original. <Warning: Gooey personal details> When I was a lurker, the biggest barrier to me saying hi was a tremendous fear of being downvoted. (A re-reading of this thread seems prudent in light of this discussion) I'd never been part of a forum with a karma system before, and I'd spent enough time on here to know that I really respected the opinions of most people on here. The idea of my ideas being rejected by a community that I'd come to respect was very stressful. I eventually got over it, and as I got more and more karma, it didn't hurt so much when I lost a point. But being karmassassinated was enough to throw all of that into doubt again, since when I asked about it I was just downvoted and no one commented. (I'm sure it's just no one happened to see it in recent comments, since it was deep in a thread.) I thought that it was very likely that I would leave the site after that, because it seemed to me that people simply didn't care what I had to say - my comments for about two days were met with wild downvoting and almost no replies, except by one person. But I don't think I am the only person that felt this way when ey joined LessWrong. </Gooey details>
Edit: Hyperlink messed up.
Edit 2: It just now occurred to me to add this, and I've commented enough in this thread for one person, so I'm putting it here: I think all of the meetup posts are much more harmful to the signal to noise ratio than anything else. Unless you're going to them, there's no reason to be interested in them.
Aside: That sockpuppetry seems to now be an accepted mode of social discourse on LessWrong strikes me as a far greater social problem than people not having read the Sequences. ("Not as bad as" is a fallacy, but that doesn't mean both things aren't bad.)
edit: and now I'm going to ask why this rated a downvote. What does the downvoter want less of?
edit 2: fair enough, "accepted" is wrong. I meant that it's a thing that observably happens. I also specifically mean socking-up to mass-downvote someone, or to be a dick to people, not roleplay accounts like Clippy (though others find those problematic).
Get a few more (thousand?) karma and you may find getting karmassassinated doesn't hurt much any more either. I get karmassassinated about once a fortnight (frequency memory subject to all sorts of salience biases and utterly unreliable - it happens quite a lot though) and it doesn't bother me all that much.
These days I find that getting the last 50 comments downvoted is a lot less emotionally burdensome than getting just one comment that I actually personally value downvoted in the absence of any other comments. The former just means someone (or several someones) don't like me. Who cares? Chances are they are not people I respect, given that I am a lot less likely to offend people when I respect them. On the other hand if most of my comments have been upvoted but one specific comment that I consider valuable gets multiple downvotes it indicates something of a judgement from the community and is really damn annoying. On the plus side it can be enough to make me lose interest in lesswrong for a few weeks and so gives me a massive productivity boost!
I believe you. That fear is a nuisance (to us if it keeps people silent and to those who are limited by it). If only we could give all lurkers rejection therapy to make them immune to this sort of thing!
Presumably also because people you respect are not very likely to express their annoyance through something as silly as karmassassination, right?
Edit: Eliminated text to conform to silly new norm. Check out relevant image macro.
It's whimsical, I like it. The purported SEO rationale behind it is completely laughable (really, folks? People are going to judge the degree of phyggishness of LW by googling LW and phyg together, and you're going to stand up and fight that? That's just insane), but it's cute and harmless, so why not adopt it for a few days? Of all reasons to suspect LW of phyggish behavior, this has got to be the least important one. If using the word "phyg" clinches it for someone, I wouldn't take them seriously.
Beats me. And yet I find myself going along with the new norm, just like you.
One of us... One of us...
Well stop it. We should be able to just call a cult a cult.
Dur? I think you might have quoted the wrong person in your comment above.
Edit: Retracting my comment now that the parent is fixed
Fixed. Stupid clipboard working differently on Windows and Linux.
To avoid guilt by association?
What you want is an exclusive club. Not a cult or phyg or whatever.
There's only one letter's difference between 'club' and 'phyg'!
And there is only one letter's difference between paid and pain. The meaning of an English word is generally not determined by the letters it contains.
The best way to become more exclusive, without giving the impression of a cult or resorting to banning people, is to raise your standards and become more technical. As exemplified by all the math communities like the n-Category Café or various computer science blogs (or, best of all, the technical posts of lesswrong).
I can understand people wanting that. If the goal is to spread this information, however, I'd suggest that those wanting to be part of an Inner Circle should go Darknet, invitation only, and keep these discussions there, if you must have them at all.
As someone who has been around here maybe six months and comes everyday, I have yet to drink enough kool aid not to find ridiculous elements to this discussion.
"We are not a Phyg! We are not a Phyg! How dare you use that word?" Could anything possibly make you look more like a Phyg than tabooing the word, and karmabombing people who just mention it? Well, the demand that anyone who shows up should read a million words of blog posts by one individual, and agree with most all of it before speaking, does give "We are not a Phyg!" a run for its money.
Take a step back, and imagine yourself at a new site that had some interesting material, and then coming on a discussion like this. Just what kind of impression would it give you?
Of course, if you just want to talk to the people who you consider have interesting things to say, that's fine and understandable. In fact, I think this discussion serves your purpose well, because it will chase away new folks, and discourage those who haven't been here long from saying much.
Given the current list software, sharing that infrastructure between those who want a pure playground and those who want new playmates creates an inevitable conflict. It is possible to have list filtering that is more fine-grained and offers more user control, which would mitigate much of the problem. That would be a better solution than a Darknet, but it's work.
Hmm. I generally agree with the original post, but I don't want to be part of an inner circle. I want access to a source of high insight-density information. Whether or not I myself am qualified to post there is an orthogonal issue.
Of course, such a thing would have an extremely high maintenance cost. I have little justification for asking to be given access to it at no personal cost.
Spreading information is important too, but only to the extent that what's being spread is contributing to the collective knowledge.
Which is yet another purpose that involves tradeoffs with the ones I previously mentioned.
I'm puzzled why you think a private email list involves extremely high maintenance costs. Private google group?
A technological solution to the bulk of the problem on this list wouldn't seem that hard either. As I've pointed out in other threads, complex message filtering has been around at least since usenet. Much of the technical infrastructure must already be in place, since we have personally customizable filtering based on karma and Friends. Or add another karma filter for the poster's total karma, so that you don't even have to enter Friends by hand. Combine poster karma with post karma via an inclusive OR, and you've probably gone 80% of the way toward being able to filter out unwanted noise.
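The inclusive-OR rule proposed above could be sketched as a simple visibility predicate. This is purely illustrative: the names and thresholds here are assumptions for the sketch, not anything in the actual site software.

```python
def visible(post_karma, poster_karma,
            min_post_karma=0, min_poster_karma=100):
    """Show a comment if EITHER its own karma clears the per-post
    threshold OR its author's total karma clears the per-poster
    threshold (the inclusive OR described above)."""
    return post_karma >= min_post_karma or poster_karma >= min_poster_karma

# A brand-new comment (karma 0) from an established poster passes:
assert visible(post_karma=0, poster_karma=250)
# An upvoted comment from a newcomer also passes:
assert visible(post_karma=5, poster_karma=3)
# A downvoted comment from a newcomer is filtered out:
assert not visible(post_karma=-2, poster_karma=3)
```

The FilterClone idea mentioned below would then amount to borrowing another user's threshold settings instead of setting your own.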
Not infrastructural costs. Social costs (and quite a bit of time, I expect). It takes effort to select contributors and moderate content, especially when those contributors might be smarter than you are. Distinguishing between correct contrarianism and craziness is a hard problem.
The difficulty is in working out who to filter. Dealing with overt trolling is easy. I change my opinions often enough over a long enough period of time that a source of 'information that I agree with' is nearly useless to me.
I think I get it. You want someone/something else to do the filtering for you?
That's easy enough too. If others are willing, instead of being Friended, they could be FilterCloned, and you could filter based on their settings. Let EY be the DefaultFilterClone, or let him and his buddies in the Star Chamber set up a DefaultFilterClone.
I've lurked here for over a year and just started posting in the fan fic threads a month ago. I have read a handful of posts from the sequences and I believe that some of those are changing my life. Sometimes when I start a sequence post I find it uninteresting and I stop. Posts early in the recommended order do this, and that gets in the way every time I try to go through in order. I just can't be bothered because I'm here for leisure and reading uninteresting things isn't leisurely.
I am noise and I am part of the doom of your community. You have my sympathy, and also my unsolicited commentary:
Presently your community is doomed because you don't filter.
Noise will keep increasing until the community you value splinters, scatters, or relocates itself as a whole. A different community will replace it, resembling the community you value just enough to mock you.
If you intentionally segregate based on qualifications your community is doomed anyway.
The qualified will stop contributing to the unqualified sectors, will stop commending potential qualifiers as they approach qualification, and will stop driving out never qualifiers with disapproval. Noise will win as soon as something drives a surge of new interest and the freshest of the freshmen overwhelm the unqualified but initiated.
Within the fortress of qualification things will be okay. They might never feel as good as you think you remember, but when you look through that same lens from further ahead you might recognize a second Golden Age of Whatever. Over time less new blood will be introduced, especially after the shanty town outside the fortress burns to the ground a couple times. People will leave for the reasons people leave. The people left will become more insular and self referential. That will further drive down new blood intake.
Doomed.
What are you going to do about it?
The best steps to take to sustain the community you value in this instance may be different than the best steps to take to build a better instance of the community.
I suspect communities have a natural life cycle and most are doomed. Either they change unrecognisably or they die. This is because the community members themselves change with time and change what they want, and what they want and will put up with from newbies, and so on. (I don't have a fully worked-out theory yet, but I can see the shape of it in my head. I'd be amazed if someone hasn't written it up.)
What this theory suggests: if the forum has a purpose beyond just existence (as this one does), then it needs to reproduce. The Center for Modern Rationality is just the start. Lots of people starting a rationality blog might help, for example. Other ideas?
This is a good idea if and only if we can avoid summoning Azathoth.
You seem to be implying here that LW's purpose is best achieved by some forum continuing to exist in LW's current form.
Yes?
If so, can you expand on your reasons for believing that?
No, that would hold only if one thinks a forum is the best vehicle. It may not even be a suitable one. My if-then does assume a further "if" that a forum is, at the least, an effective vehicle.
(nods) OK, cool.
My working theory is that the original purpose of the OB blog posts that later became LW was to motivate Eliezer to write down a bunch of his ideas (aka "the Sequences") and get people to read them. LW continues to have remnants of that purpose, but less and less so with every passing generation.
Meanwhile, that original purpose has been transferred to the process of writing the book I'm told EY is working on. I'm not sure creating new online discussion forums solves a problem anyone has.
As that purpose gradually becomes attenuated beyond recognition, I expect that the LW forum itself will continue to exist, becoming to a greater and greater extent a site for discussion of HP:MoR, philosophy, cognition, self-help tips, and stuff its users think is cool that they can somehow label "rational." A small group of SI folks will continue to perform desultory maintenance, and perhaps even post on occasion. A small group of users will continue to discuss decision theory here, growing increasingly isolated from the community.
If/when EY gets HP:MoR nominated for a Hugo award, a huge wave of new users will appear, largely representative of science-fiction fandom. The proportion of LW devoted to HP:MoR discussion will double, as will the frequency of public handwringing about the state of the site. The level of discussion will rapidly plummet to standard Internet Geek. LW maintenance will be increasingly seen as a chore by SI folks, to be assigned to interns with nothing better to do. The more academic types will eventually decide it's worth their while to create a new venue for their discussions.
If the book is published and achieves any degree of popularity, there will be a wave of new people joining to talk about the book. This will have all kinds of consequences, but one of them will be a huge increase in the 101-level discussions on LW, as high-school students all over the country decide to share their insights about reality and truth and rationality and philosophy, reminiscent of alt.fan.hofstadter Back in the Day.
And, more precisely
Like NaNoWriMo or thirty things in thirty days (which EY indirectly inspired) - giving the muse an office job. Except, of course, being Eliezer, he made it one a day for two years.
I agree. Low barriers to entry (and utterly generic discussions, like on which movies to watch) seem to have lowered the quality. I often find myself skimming discussions for names I recognize and just reading their comments - ironic, given that once upon a time the anti-kibitzer seemed pressing!
Lest this be seen as unwarranted arrogance: there are many values of p in [0,1] such that I would run a p risk of getting personally banned in return for removing the bottom p of the comments. I often write out a comment and delete it, because I think that, while above the standard of the adjacent comments, it is below what I think the minimal bar should be. Merely saying new, true things about the subject matter is not enough!
The Sequence Re-Runs seem to have had little participation, which is disappointing - I had great hope for those.
I read them, but engaging in discussion seems difficult. Am I just supposed to pretend all of the interesting comments below don't exist and risk repeating something stupid on the Repeat post? Or should I be trying to get involved in a years-old discussion on the actual article? Sadly, this is something that has a sort of activation energy: if enough people were discussing the sequence repeats, I would discuss them too.
Organizing the reading of the sequences into classes of people (think Metaethics Class of 2012) that commit to reading and debating them, and then answering a quiz about them, seems more likely to get participation.
I still read them and usually remember to vote them up for MinibearRex bothering to post them, and comment if I have something to say.
[meta] A simple reminder: This discussion has a high potential to cause people to embrace and double down on an identity as part of the inner or outer circles. Let's try to combat that.
In line with the above, please be liberal with explanations as to why you think an opinion should be downvoted. Going through the thread and mass-downvoting every post you disagree with is not helpful. [/meta]
The post came across to me as an explicit call to such, which is rather stronger than "has a high potential".
If anyone does feel motivated to post just bare links to sequence posts, hit one of the Harry Potter threads. These seem to be attracting LW n00bs, some of whom seem actually pretty smart - i.e., the story is working to its intended purpose.
What if users were expected to have a passing familiarity with the topics the sequences covered, but not necessarily to have read them? That way, if they were going to post about one of the topics covered in the sequences, they could be sure to brush up on the state of the debate first.
If you've found some substantially easier way to become reasonably competent -- i.e., possessing a saving throw vs. failing at thinking about thinking -- in a way that doesn't require reading a substantial fraction of the sequences, you're remiss for not describing such a path publicly.
I read A Human's Guide to Words and Reductionism, and a little bit of the rest. I at least feel like I have pretty good familiarity with the rest of the topics covered as a result of having a strong technical background. The path is pretty clear, though perhaps harder to take --- just take college-level classes in mathematics, econ, and physics, and think a lot about the material. And talk to other smart people.
I would guess that hanging out with friends who are aspiring rationalists is a faster way to become rational than reading the sequences.
In any case, it seems pretty clear to me that the sequences do not have a monopoly on rationality. Eliezer isn't the only person in the world who's good at thinking about his thinking.
FWIW, I was thinking along the lines of only requesting passing familiarity with non-core sequences.