Our Phyg Is Not Exclusive Enough
EDIT: Thanks to people not wanting certain words google-associated with LW: Phyg
Lesswrong has the best signal/noise ratio I know of. This is great. This is why I come here. It's nice to talk about interesting rationality-related topics without people going off the rails about politics/fail philosophy/fail ethics/definitions/etc. This seems to be possible because a good number of us have read the lesswrong material (sequences, etc) which inoculates us against that kind of noise.
Of course Lesswrong is not perfect; there is still noise. Interestingly, most of it is from people who have not read some sequence and thereby make the default mistakes or don't address the community's best understanding of the topic. We are pretty good about downvoting and/or correcting posts that fail at the core sequences, which is good. However, there are other sequences, too, many of them critically important to not failing at metaethics/thinking about AI/etc.
I'm sure you can think of some examples of what I mean. People saying things that you thought were utterly dissolved in some post or sequence, but they don't address that, and no one really calls them out. I could dig up a bunch of quotes but I don't want to single anyone out or make this about any particular point, so I'm leaving it up to your imagination/memory.
It's actually kind of frustrating seeing people make these mistakes. You could say that if I think someone needs to be told about the existence of some sequence they should have read before posting, I ought to tell them, but that's actually not what I want to do with my time here. I want to spend my time reading and participating in informed discussion. A lot of us do end up engaging with mistaken posts, but that lowers the quality of discussion here because so much time and space has been spent battling ignorance instead of advancing knowledge and discussing real problems.
It's worse than just "oh here's some more junk I have to ignore or downvote", because the path of least resistance ends up being "ignore any discussion that contains contradictions of the lesswrong scriptures", which is obviously bad. There are people who have read the sequences and know the state of the arguments and still have some intelligent critique, but it's quite hard to tell the difference between that and someone explaining for the millionth time the problem with "but won't the AI know what's right better than humans?". So I just ignore it all and miss a lot of good stuff.
Right now, the only stuff I can be reasonably guaranteed is intelligent, informed, and interesting is the promoted posts. Everything else is a minefield. I'd like there to be something similar for discussion/comments. Some way of knowing "these people I'm talking to know what they are talking about" without having to dig around in their user history or whatever. I'm not proposing a particular solution here, just saying I'd like there to be more high quality discussion between more properly sequenced LWers.
There is a lot of worry on this site about whether we are too exclusive or too phygish or too harsh in our expectation that people be well-read, which I think is misplaced. It is important that modern rationality have a welcoming public face and somewhere that people can discuss without having read three years worth of daily blog posts, but at the same time I find myself looking at the moderation policy of the old sl4 mailing list and thinking "damn, I wish we were more like that". A hard-ass moderator righteously wielding the banhammer against cruft is a good thing and I enjoy it where I find it. Perhaps these things (the public face and the exclusive discussion) should be separated?
I've recently seen someone saying that no-one complains about the signal/noise ratio on LW, and therefore we should relax a bit. I've also seen a good deal of complaints about our phygish exclusivity, the politics ban, the "talk to me when you read the sequences" attitude, and so on. I'd just like to say that I like these things, and I am complaining about the signal/noise ratio on LW.
Lest anyone get the idea that no-one thinks LW should be more phygish or more exclusive, let me hereby register that I for one would like us to all enforce a little more strongly that people read the sequences and even agree with them in a horrifying manner. You don't have to agree with me, but I'd just like to put out there as a matter of fact that there are some of us that would like a more exclusive LW.
Comments (513)
I think you want it more tiered/topic'ed, not more exclusive, which I would certainly support. Unfortunately, the site design is not a priority.
Yeah. Having separate "elite rationality club" and "casual rationality discussion" areas is probably my preferred solution.
Too bad everyone who cares doesn't care enough to hack the code. How hard would it be for someone to create and send in a patch for something like exclusive back room discussion. It would be just adding another subreddit, no?
/r/evenlesswrong
The approach of seeding another forum or few may help, c.f. the Center for Modern Rationality.
I've seen this suggested before, and while it would have positive aspects, from a PR perspective, it would be an utter nightmare. I've been here for slightly less than a year, after being referred to HPMOR. I am very unlikely (Prior p = 0.02, given that EY started it and I was obsessed with HPMOR, probably closer to p = 0.07) to have ever followed a forum/blog that had an "exclusive members" section. Inasmuch as LW is interested in recruiting potential rationalists, this is a horrible, horrible idea.
good point.
Would you prefer that we be a bit more hardcore about prescribing the sequences, and a bit more explicit that people should read them? Or maybe we should have little markers next to people's names: "this guy has read everything"? Or maybe we should do nothing because all moves are bad? Maybe a more loosely affiliated site that has the strict standard instead of just a "non-noobs" section?
In short, no. See my other comment for details. I think the barriers to entry are high enough, and raising them further filters out people we might want.
This introduces status problems, and it's impossible (or at least inefficient) to enforce well.
I won't claim that the current design we've located in LessWrongspace is optimal, but I'm quite happy with it, and I don't see any way to immediately improve it in the regard you want.
Actually, I'll take that back. I would like to see the community encourage the use of tags a lot more. I think if everyone was very dedicated in trying to use the tagging system it might help the problem you're referring to. But in some way, I think those tags also need to be incorporated into titles of discussion posts. I really like the custom of using [META] or [LINK] in the title, and I'd like to see that expand.
Again no, really for the same reasons as above.
A more realistic idea (what I think the grandparent was suggesting) is just to try to filter off discussions not strictly related to rationality (HPMOR; fiction threads; the AGI/SIAI discussions; etc) into the discussion forum, and to stick stuff strictly related to rationality (or relevant paper-sharing, or whatever) in another subreddit.
The obvious alternative is to create a tier below discussion, which would attract some of the lower-quality discussion posts and thereby improve the signal in the main discussion section.
Or topical discussion boards...
If that happened, unless it was very carefully presented, I would expect a drop of quality in the discussion section because "hey cool down man we aren't in the restricted section here", and because many old-timers might stop spending time there altogether.
I would rather see a solution that didn't include such a neat division; where lower standards were treated as the exception (like in HPMoR threads and open threads), not the rule.
Upvoted.
I agree pretty much completely and I think if you're interested in Less Wrong-style rationality, you should either read and understand the sequences (yes, all of them), or go somewhere else. Edit, after many replies: This claim is too strong. I should have said instead that people should at least be making an effort to read and understand the sequences if they wish to comment here, not that everyone should read the whole volume before making a single comment.
There are those who think rationality needs to be learned through osmosis or whatever. That's fine, but I don't want it lowering the quality of discussion here.
I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general. This is probably one of the reasons why.
An IRC conversation I had a while ago left me with a powerful message: people will give lip service to keeping the gardens, but when it comes time to actually do it, nobody is willing to.
Thanks.
I was going to include something along those lines, but then I didn't. But really, if you haven't read the sequences, and don't care to, the only thing that separates LW from r/atheism, rationalwiki, whatever that place is you linked to, and so on is that a lot of people here have read the sequences, which isn't a fair reason to hang out here.
This is a pretty hardcore assertion.
I am thinking of lukeprog's and Yvain's stuff as counterexamples.
I think of them (and certain others) as exceptions that prove the rule. If you take away the foundation of the sequences and the small number of awesome people (most of whom, mind you, came here because of Eliezer's sequences), you end up with a place that's indistinguishable from the programmer/atheist/transhumanist/etc. crowd, which is bad if LW is supposed to be making more than nominal progress over time.
Standard disclaimer edit because I have to: The exceptions don't prove the rule in the sense of providing evidence for the rule (indeed, they are technically evidence contrariwise), but they do allow you to notice it. This is what the phrase really means.
Exceptions don't prove rules.
You are mostly right, which is exactly what I was getting at with the "promoted is the only good stuff" comment.
I do think there is a lot of interesting, useful stuff outside of promoted, tho, it's just mixed with the usual programmer/atheist/transhumanist/etc-level stuff.
Your edit updated me in favour of me being confused about this exception-rule business. Can you link me to something?
-Wikipedia (!!!)
(I should just avoid this phrase from now on, if it's going to cause communication problems.)
Considering how it was subculturally seeded, this should not be surprising. Remember that LW has proceeded in a more or less direct subcultural progression from the Extropians list of the late '90s, with many of the same actual participants.
It's an online community. As such, it's a subculture and it's going to work like one. So you'll see the behaviour of an internet forum, with a bit of the topical stuff on top.
How would you cut down the transhumanist subcultural assumptions in the LW readership?
(If I ever describe LW to people these days it's something like "transhumanists talking philosophy." I believe this is an accurate description.)
Transhumanism isn't the problem. The problem is that when people don't read the sequences, we are no better than any other forum of that community. Too many people are not reading the sequences, and not enough people are calling them out on it.
Where, specifically, do you not see progress? I see much better recognition of, say, regression to the mean here than in the general population, despite it never being covered in the sequences.
This is a very interesting question. I cannot cite a lack of something. But maybe what I'm saying here will be obvious-in-retrospect if I put it like this:
This post is terrible. But some of the comments pointing out its mistakes are great. On the other hand, it's easier to point out the mistakes in other people's posts than to be right yourself. Where are the new posts saying thunderously correct things, rather than mediocre posts with great comments pointing out what's wrong with them?
That terrible post is hardly an example of a newbie problem - it's clearly a one-off by someone who read one post and isn't interested in anything else about the site, but was sufficiently angry to create a login and post.
That is, it's genuine feedback from the outside world. As such, trying really hard to eliminate this sort of post strikes me as something you should be cautious about.
Also, such posts are rare.
Um, after I read the sequences I ploughed through every LW post from the start of LW to late 2010 (when I started reading regularly). What I saw was that the sequences were revered, but most of the new and interesting stuff from that intervening couple of years was ignored. (Though it's probably just me.)
At this point A Group Is Its Own Worst Enemy is apposite. Note the description of the fundamentalist smackdown as a stage communities go through. Note it also usually fails when it turns out the oldtimers have differing and incompatible ideas on what the implicit constitution actually was in the good old days.
tl;dr declarations of fundamentalism heuristically strike me as inherently problematic.
edit: So what about this comment rated a downvote?
edit 2: ah - the link to the Shirky essay appears to be giving the essay in the UK, but Viagra spam in the US o_0 I've put a copy up here.
I thought it was great. Very good link.
It's a revelatory document. I've seen so many online communities, of varying sizes, go through precisely what's described there.
(Mark Dery's Flame Wars (1994) - which I've lost my copy of, annoyingly - has a fair bit of material on similar matters, including one chapter that's a blow-by-blow description of such a crisis on a BBS in the late '80s. This was back when people could still seriously call this stuff "cyberspace." This leads me to suspect the progression is some sort of basic fact of online subcultures. This must have had serious attention from sociologists, considering how rabidly they chase subcultures ...)
LW is an online subcultural group and its problems are those of online subcultural groups; these have been faced by many, many groups in the past, and if you think they're reminiscent of things you've seen happen elsewhere, you're likely right.
Maybe if you referenced Evaporative Cooling, which is the converse of the phenomenon you describe, you'd get a better reception?
I'm thinking it's because someone appears to have corrupted DNS for Shirky's site for US readers ... I've put up a copy myself here.
I'm not sure it's the same thing as evaporative cooling. At this point I want a clueful sociologist on hand.
Evaporative cooling is change to average belief from old members leaving.
Your article is about change to average belief from new members joining.
Sounds plausibly related, and well spotted ... but it's not obvious to me how they're functionally converses in practice to the degree that you could talk about one in place of talking about the other. This is why I want someone on hand who's thought about it harder than I have.
(And, more appositely, the problem here is specifically a complaint about newbies.)
I wasn't suggesting that one replaced the other, but that one was conceptually useful in thinking about the other.
Definitely useful, yes. I wonder if anyone's sent Shirky the evaporative cooling essay.
Am I in some kind of internet black-hole? That link took me to some viagra spam site.
It's a talk by Clay Shirky, called "A Group Is Its Own Worst Enemy".
I get the essay ... looking in Google, it appears someone's done some scurvy DNS tricks with shirky.com and the Google cache is corrupted too. Eegh.
I've put up a copy here and changed the link in my comment.
shirky.com/writings/group_enemy.html
???
I suspect that's because it's poorly indexed. This should be fixed.
This is very much why I have only read some of it.
If the more recent LW stuff was better indexed, that would be sweet.
Exchanges the look two people give each other when they each hope that the other will do something that they both want done but which neither of them wants to do.
Hey, I think "Dominions" should be played, but I do want to play it, and I did purchase the particular object at the end of the link. I don't understand why you linked to it, though.
The link text is a quote from the game description.
Ahh, now I see it. Clever description all around!
Yeah, I didn't read it from the wiki index, I read it by going to the end of the chronological list and working forward.
I don't consider myself a particularly patient person when it comes to tolerating ignorance or stupidity but even so I don't much mind if people here contribute without having done much background reading. What matters is that they don't behave like an obnoxious prat about it and are interested in learning things.
I do support enforcing high standards of discussion. People who come here straight from their high school debate club and Introduction to Philosophy 101 and start throwing around sub-lesswrong-standard rhetoric should be downvoted. Likewise for confident declarations of trivially false things. There should be more correction of errors that would probably be accepted (or even rewarded) in many other contexts. These are the kind of thing that don't actively exclude but do have the side effect of raising the barrier to entry. A necessary sacrifice.
The core-sequence fail gets downvoted pretty reliably. I can't say the same for metaethics or AI stuff. We need more people to read those sequences so that they can point out and downvote failure.
Point taken. There is certainly a lack along those lines.
Isn't the metaethics sequence not liked very much? I haven't read it in a while, and so I'm not sure that I actually read all of the posts, but I found what I read fairly squishy, and not even on the level of, say, Nietzsche's moral thought.
Downvoting people for not understanding that beliefs constrain expectation I'm okay with. Downvoting people for not agreeing with EY's moral intuitions seems... mistaken.
Random factoid: The post by Eliezer that I find most useful for describing (a particular aspect of) moral philosophy is actually a post about probability.
(In general I use most of the same intuitions for values as I do for probability; they share a lot of the same structure, and given the oft-remarked-on non-unique-decomposability of decision policies they seem to be special cases of some more fundamental thing that we don't yet have a satisfactory language for talking about. You might like this post and similar posts by Wei Dai that highlight the similarities between beliefs and values. (BTW, that post alone gets you half the way to my variant of theism.) Also check out this post by Nesov. (One question that intrigues me: is there a nonlinearity that results in non-boring outputs if you have an agent who calculates the expected utility of an action by dividing the universal prior probability of A by the universal prior probability of A (i.e., unity)? (The reason you might expect nonlinearities is that some actions depend on the output of the agent program itself, which is encoded by the universal prior but is undetermined until the agent fills in the blank. Seems to be a decent illustration of the more general timeful/timeless problem.)))
Metaethics sequence is a bit of a mess, but the point it made is important, and it doesn't seem like it's just some weird opinion of Eliezer's.
After I read it I was like, "Oh, ok. Morality is easy. Just do the right thing. Where 'right' is some incredibly complex set of preferences that are only represented implicitly in physical human brains. And it's OK that it's not supernatural or 'objective', and we don't have to 'justify' it to an ideal philosophy student of perfect emptiness". Fake utility functions, and Recursive justification stuff helped.
Maybe there's something wrong with Eliezer's metaethics, but I haven't seen anyone point it out, and have no reason to suspect it. Most of the material that contradicts it is obvious mistakes from just not having read and understood the sequences, not an enlightened counter-analysis.
Hm. I think I'll put on my project list "reread the metaethics sequence and create an intelligent reply." If that happens, it'll be at least two months out.
I look forward to that.
Try actually applying it to some real life situations and you'll quickly discover the problems with it.
such as?
Well, for starters determining whether something is a preference or a bias is rather arbitrary in practice.
I struggled with that myself, but then figured out a rather nice quantitative solution.
Eliezer's stuff doesn't say much about that topic, but that doesn't mean it fails at it.
I don't think your solution actually resolves things since you still need to figure out what weights to assign to each of your biases/values.
Has it ever been demonstrated that there is a consensus on what point he was trying to make, and that he in fact demonstrated it?
He seems to make a conclusion, but I don't believe demonstrated it, and I never got the sense that he carried the day in the peanut gallery.
The main problem I have is that it is grossly incomplete. There are a few foundational posts but it cuts off without covering what I would like to be covered.
What would you like covered? Or is it just that vague "this isn't enough" feeling?
I can't fully remember - it's been a while since I considered the topic so I mostly have the cached conclusion. More on preference aggregation is one thing. A 'preferences are subjectively objective' post. A post that explains more completely what he means by 'should' (he has discussed and argued about this in comments).
Beliefs are only sometimes about anticipation. LessWrong repeatedly makes huge errors when they interpret "belief" in such a naive fashion; giving LessWrong a semi-Bayesian justification for this collective failure of hermeneutics is unwise. Maybe beliefs "should" be about anticipation, but LessWrong, like everybody else, can't reliably separate descriptive and normative claims, which is exactly why this "beliefs constrain anticipation" thing is misleading. ...There's a neat level-crossing thingy in there.
EY thinking of meta-ethics as a "solved problem" is one of the most obvious signs that he's very spotty when it comes to philosophy and can't really be trusted to do AI theory.
(Apologies if I come across as curmudgeonly.)
Can you give examples of beliefs that aren't about anticipation?
Beliefs about primordial cows, etc. Most people's beliefs. He's talking descriptively, not normatively.
Beliefs about things that are outside our future light cone possibly qualify, to the extent that the beliefs don't relate to things that leave historical footprints. If you'll pardon an extreme and trite case, I would have a belief that the guy who flew the relativistic rocket out of my light cone did not cease to exist as he passed out of that cone and also did not get eaten by a giant space monster ten minutes after. My anticipations are not constrained by beliefs about either of those possibilities.
In both cases my inability to constrain my anticipated experiences speaks to my limited ability to experience and not a limitation of the universe. The same principles of 'belief' apply even though it has incidentally fallen out of the scope which I am able to influence or verify even in principle.
Beliefs that aren't easily testable also tend to be the kind of beliefs that have a lot of political associations, and thus tend not to act like beliefs as such so much as policies. Also, even falsified beliefs tend to be summarily replaced with new untested/not-intended-to-be-tested beliefs, e.g. "communism is good" with "correctly implemented communism is good", or "whites and blacks have equal average IQ" with "whites and blacks would have equal average IQ if they'd had the same cultural privileges/disadvantages". (Apologies for the necessary political examples. Please don't use this as an opportunity to talk about communism or race.)
Many "beliefs" that aren't politically relevant—which excludes most scientific "knowledge" and much knowledge of your self, the people you know, what you want to do with your life, et cetera—are better characterized as knowledge, and not beliefs as such. The answers to questions like "do I have one hand, two hands, or three hands?" or "how do I get back to my house from my workplace?" aren't generally beliefs so much as knowledge, and in my opinion "knowledge" is not only epistemologically but cognitively-neurologically a more accurate description, though I don't really know enough about memory encoding to really back up that claim (though the difference is introspectively apparent). Either way, I still think that given our knowledge of the non-fundamental-ness of Bayes, we shouldn't try too hard to stretch Bayes-ness to fit decision problems or cognitive algorithms that Bayes wasn't meant to describe or solve, even if it's technically possible to do so.
The best illustration I've seen thus far is this one.
(Side note: I desire few things more than a community where people automatically and regularly engage in analyses like the one linked to. Such a community would actually be significantly less wrong than any community thus far seen on Earth. When LessWrong tries to engage in causal analyses of why others believe what they believe it's usually really bad: proffered explanations are variations on "memetic selection pressures", "confirmation bias", or other fully general "explanations"/rationalizations. I think this in itself is a damning critique of LessWrong, and I think some of the attitude that promotes such ignorance of the causes of others' beliefs is apparent in posts like "Our Phyg Is Not Exclusive Enough".)
He does? I know he doesn't take it as seriously as other knowledge required for AI but I didn't think he actually thought it was a 'solved problem'.
From my favorite post and comments section on Less Wrong thus far:
Yes, it looks like Eliezer is mistaken there (or speaking hyperbole).
I agree with:
... but would weaken the claim drastically to "Take metaethics, a clearly reducible problem with many technical details to be ironed out". I suspect you would disagree with even that, given that you advocate meta-ethical sentiments that I would negatively label "Deeply Mysterious". This places me approximately equidistant from your respective positions.
I only weakly advocate certain (not formally justified) ideas about meta-ethics, and remain deeply confused about certain meta-ethical questions that I wouldn't characterize as mere technical details. One simple example: Eliezer equates reflective consistency (a la CEV) with alignment with the big blob of computation he calls "right"; I still don't know what argument, technical or non-technical, could justify such an intuition, and I don't know how Eliezer would make tradeoffs if the two did in fact have different referents. This strikes me as a significant problem in itself, and there are many more problems like it.
(Mildly inebriated, apologies for errors.)
If I understand you correctly then this particular example I don't think I have a problem with, at least not when I assume the kind of disclaimers and limitations of scope that I would include if I were to attempt to formally specify such a thing.
I suspect I agree with some of your objections to various degrees.
What did he mean by "I tried that..."?
I'm not at all sure, but I think he means CFAI.
Possibly he means this.
How do you measure "progress", exactly ? I'm not sure what the word means in this context.
Yes, this needs clarification. Is it "I like it better/I don't like it better" or something a third party can at least see?
I'm insulted (not in an emotional way! I just want to state my strong personal objection!). Many of us challenge the notion of "progress" being possible or even desirable on topics like Torture vs Specks. And while I've still much to learn, there are people like Konkvistador, who's IMO quite adept at resisting the lure of naive utilitarianism and can put a "small-c conservative" (meaning not ideologically conservative, but technically so) approach to metaethics to good use.
I think the barrier to entry is high enough - the signal-to-noise ratio is high, and if you only read high-karma posts and comments you are guaranteed to get substance.
As for forcing people to read the entire Sequences, I'd say rationalwiki's critique is very appropriate (below). I myself have only read ~20% of the Sequences, and by focusing on the core sequences and highlighted articles, have recognized all the ideas/techniques people refer to in the main-page and discussion posts.
You should try reading the other 80% of the sequences.
As far as I can tell (low votes, some in the negative, few comments), the QM sequence is the least read of the sequences, and yet makes a lot of EY's key points used later on identity and decision theory. So most LW readers seem not to have read it.
Suggestion: a straw poll on who's read which sequences.
A poll would be good.
I've read the QM sequence and it really is one of the most important sequences. When I suggest this at meetups and such, people seem to be under the impression that it's just Eliezer going off topic for a while and totally optional. This is not the case; the QM sequence is used, as you said, to develop a huge number of later things.
The negative comments from physicists and physics students are sort of a worry (to me as someone who got up to the start of studying this stuff in second-year engineering physics and can't remember one dot of it). Perhaps it could do with a robustified rewrite, if anyone sufficiently knowledgeable can be bothered.
The negative comments I've heard give off a strong scent of being highly motivated - in one case an incredible amount of bark bark bark about how awful they were, and when I pressed for details, a pretty pathetic bite. I'd like to get a physicist who didn't seem motivated to have an opinion one way or the other to comment.
It would need to be someone who bought MWI - if the sole problem with them is that they endorse MWI then that's at least academically respectable, and if an expert reading them doesn't buy MWI then they'll be motivated to find problems in a way that won't be as informative as we'd like.
I suspect a lot of it is "oh dear, someone saying 'quantum'" fatigue. But that sounds a plausible approach.
The Quantum Physics Sequence is unusual in that normally, if someone writes 100,000(?) words explaining quantum mechanics for a general audience, they genuinely know the subject first: they have a physics degree, they have had an independent reason to perform a few quantum-mechanical calculations, something like that. It seems to me that Eliezer first got his ideas about quantum mechanics from Penrose's Emperor's New Mind, and then amended his views by adopting many-worlds, which was probably favored among people on the Extropians mailing list in the late 1990s. This would have been supplemented by some incidental study of textbooks, Feynman lectures, expository web pages... but nonetheless, that appears to be the extent of it. The progression from Penrose to Everett would explain why he presents the main interpretive choice as between wavefunction realism with objective collapse, and wavefunction realism with no collapse. His prose is qualitative just about everywhere, indicating that he has studied quantum mechanics just enough to satisfy himself that he has obtained a conceptual understanding, but not to the point of quantitative competence. And then he has undertaken to convey this qualitative conceptual understanding to other people who don't have quantitative competence in the subject, either.
I can recognize all this because I am also an autodidact and I have done comparable things. It's possible to do this and get away with making only a few incidental mistakes. But it is a very risky thing to do. You run a high risk of fooling yourself and then causing your audience to fool themselves too. This is especially the case in mathematical physics. Literally every day I see people asking questions on physics websites that are premised on wrong assumptions about physical theory. I don't mean questions where people say "is it really true that...", I mean questions where the questioner thinks they already understand some topic, and the question proceeds from this incorrect understanding, sometimes quite aggressively in tone (recently observed example).
My opinion about the Sequences is that someone who knows nothing about QM can learn from them, but it's worth getting a second opinion, even just from Wikipedia, since they present a rather ideological point of view. Also, when you read them, you're simply not hearing from someone who has used quantum mechanics professionally; you're hearing from an autodidact who thinks he figured out the gist of the subject - I'd say, very roughly, he gets about 75% of the basics, and the problems are more in what is omitted rather than what is described (e.g. nothing, that I recall, about the role of operators) - and who has decided that one prominent minority faction of opinion among physicists (the many-worlds enthusiasts) are the ones who have correctly discerned the implications of QM for the nature of reality. The fact that he espouses, as the one true interpretation, a point of view that is shared by some genuine physicists, does at least protect him from the accusation of complete folly. Nonetheless, I can tell you - as one autodidact judging another - he's backed the wrong horse. :-)
If you want an independent evaluation of the Sequences by physicists, I suggest that you post this as a question at Physics Stack Exchange. Ask what people think of them, and whether they can be trusted. There's a commenter there, Ron Maimon, who is the most readily available approximation to what you want. Maimon is quantitatively competent in all sorts of advanced physics, and he was once a MWI zealot. Now he's more positivistic, but MWI is still his preferred language for making ontological sense of QM. I would expect him to offer qualified approval of the Sequences, but to make some astute comments regarding content or style.
Since it is a forum where everyone gets a chance to answer the question, with the best replies then being voted up by the readership, of course such a question would also lead to responses by people who don't believe MWI. But this is the quickest way to conduct the experiment you suggest.
Excellent idea - done. Thank you!
You could also ask for an independent evaluation of AI risks here.
That seems less valuable. The QM sequences are largely there to set out what is supposed to be an existing, widespread understanding of QM. No such understanding exists for AI risk.
What reason do I have to believe that this risk isn't even stronger when it comes to AI?
It's not clear how to compare said risk - "quantum" is far more widely abused - but the creationist AI researcher suggests AI may be severely prone to the problem. Particularly as humans are predisposed to think of minds as ontologically basic, therefore pretty simple, therefore something they can have a meaningful opinion on, regardless of the evidence to the contrary.
Yes.
No, as far as I can tell.
Probably not, then. (The decision theory posts were where I finally hit a tl;dr wall.)
I've read it, but I took away less from it than any of the other sequences. Reading any of the other sequences, I can agree or disagree with the conclusion and articulate why. With the QM sequence, my response is more along the lines of "I can't treat this as very strong evidence of anything because I don't think I'm qualified to tell whether it's correct or not." Eliezer's not a physicist either, although his level of fluency is above mine, and while I consider him a very formidable rationalist as humans go, I'm not sure he really knows enough to draw the conclusions he does with such confidence.
I've seen the QM sequence endorsed by at least one person who is a theoretical physicist, but on the other hand, I've read Mitchell Porter's criticisms of Eliezer's interpretation and they sound comparably plausible given my level of knowledge, so I'm not left thinking I have much more grounds to favor any particular quantum interpretation than when I started.
Something I recall noticing at the time I read said posts is that some of the groundwork you mention didn't necessarily need to be in with the QM. Sure, there are a few points that you can make only by reference to QM but many of the points are not specifically dependent on that part of physics. (i.e. modularization fail!)
That there are no individual particles is something of philosophical import that it'd be difficult to say without bludgeoning the point home, as the possibility is such a strong implicit philosophical assumption and physics having actually delivered the smackdown may be surprising. But yeah, even that could be moved elsewhere with effort. But then again, the sequences are indeed being revised and distilled into publishable rather than blog form ...
Yes, that's the one thing that really relies on it. And the physics smackdown was surprising to me when I read it.
Ideal would seem to be having the QM sequence then later having an identity sequence wherein one post does an "import QM;".
Of course the whole formal 'sequence' notion is something that was invented years later. These are, after all, just a stream of blog posts that some guy spat out extremely rapidly. At that time they were interlinked as something of a DAG, with a bit of clustering involved for some of the bigger subjects.
I actually find the whole 'sequence' focus kind of annoying. In fact I've never read the sequences. What I have read a couple of times is the entire list of blog posts for several years. This includes some of my favorite posts which are stand alone and don't even get a listing in the 'sequences' page.
Totally, there are whole sequences of really good posts that get no mention in the wiki.
Yes! I try to get people to read the "sequences" in ebook form, where they are presented in simple chronological order. And the title is "Eliezer Yudkowsky, blog posts 2006-2010".
I've seen enough of the QM sequence and know enough QM to see that Eliezer stopped learning quantum mechanics before getting to density matrices. As a result, the conclusions he draws from QM rely on metaphysical assumptions and seem rather arbitrary if one knows more quantum mechanics. In the comments to this post Scott Aaronson tries to explain this to Eliezer without much success.
Kahneman and Tversky's Thinking Fast and Slow is basically the sequences + some statistics - AI and metaethics in (shorter) book form (well actually, the other way around, as the book was there first). So perhaps we should say "read the sequences, or that book, or otherwise learn the common mistakes".
The reasoning for downvote on this suggestion is not clear. What does the downvoter actually want less of?
As the suggestion stands, it's at -2. I'm not downvoting it because I don't think it's so bad as to be invisible, but saying that the book is a good substitute for the sequences seems inaccurate enough to downvote. My other comment here contains (slightly) more of an explanation.
Can someone verify this for me? I've heard good things about the authors but my prior for that book containing everything in the (or most of the) sequences is rather low.
I disagree with the grandparent. I read the book a while ago having already read most of the Sequences -- I think that the book gives a fairly good overview of heuristics and biases but doesn't do as good of a job in turning the information into helpful intuitions. I think that the Sequences cover most (but not quite all) of what's covered in the book, while the reverse is not true.
Lukeprog reviewed the book here: his estimate is that it contains about 30% of the Core Sequences.
Your comment describes (or at least intends to describe as per the people disagreeing with you) Judgment under Uncertainty: Heuristics and Biases, not Thinking Fast and Slow.
Strongly disagree; I think there is fairly limited overlap between the two.
Downvoted for linking to that site.
... what?
Stop using that word.
What word?
The only word that shouldn't be used for reasons that extend to not even identifying it. (google makes no use/mention distinction).
You mean the one that shouldn't be associated with us in google's search results?
I'll think about it.
Suggestion: "Our Ult-Cay Is Not Exclusive Enough"
I feel pain just looking at that sentence.
I sure as hell hope self-censorship or encryption for the sake of google results isn't going to become the expected norm here. It's embarrassingly silly, and, paradoxically, likely to provide ammunition for anyone who might want to say that we are this thing-that-apparently-must-not-be-named. Wouldn't be overly surprised if these guys ended up mocking it.
The original title of the post had a nice impact, the point of the rhetorical technique used was to boldly use a negatively connotated word. Now it looks weird and anything but bold.
Also, reading the same rot13-ed word multiple times caused me to learn a small portion of rot13 despite my not wanting to. Annoying.
In fact, edit your post now please Nyan. Apart from that it's an excellent point. "Community", "website" or just about anything else.
"You're a ...." is already used as a fully general counterargument. Don't encourage it!
I want to keep the use of the word, but to hide it from google I have replaced it with its rot13: phyg
And now we can all relax and have a truly uninhibited debate about whether LW is a phyg. Who would have guessed that rot13 has SEO applications?
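For anyone who hasn't run into it, rot13 works as a crude search-engine filter precisely because it's a fixed 13-place rotation of the alphabet, which makes it its own inverse: apply it twice and you get the original back. A minimal sketch in Python (stdlib only):

```python
import codecs

def rot13(text):
    """Rotate each ASCII letter 13 places; everything else passes through."""
    out = []
    for ch in text:
        if ch.isascii() and ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + 13) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

print(rot13("cult"))          # -> phyg
print(rot13(rot13("cult")))   # -> cult (encoding is its own inverse)

# The stdlib already ships this as a codec, so the hand-rolled version
# is only for showing the mechanism:
assert rot13("cult") == codecs.encode("cult", "rot13")
```

Since the rotated word never appears as plain text, a naive indexer never associates the page with it, which is the entire "SEO application".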
Upvoted for agreeing and for reminding me to re-read a certain part of the sequences. I loathe fully general counterarguments, especially that one.
That being said, would it be appropriate for you to edit your own comment to remove said word? I don't know (to any significant degree) how Google's search algorithms work, but I suspect that having that word in your comment also negatively affects the suggested searches.
Oh, yeah, done.
I agree. Low barriers to entry (and utterly generic discussions, like on which movies to watch) seem to have lowered the quality. I often find myself skimming discussions for names I recognize, and just read their comments - ironic, given that once upon a time the anti-kibitzer seemed pressing!
Lest this be seen as unwarranted arrogance: there are many values of p in [0,1] such that I would run a p risk of getting personally banned in return for removing the bottom p of the comments. I often write out a comment and delete it, because I think that, while above the standard of the adjacent comments, it is below what I think the minimal bar should be. Merely saying new, true things about the subject matter is not enough!
The Sequence Re-Runs seem to have had little participation, which is disappointing - I had great hope for those.
As someone who is rereading the sequences I think I have a data point as to why. First of all, the "one post a day" is very difficult for me to do. I don't have time to digest a LW post every day, especially if I've got an exam coming up or something. Secondly, I joined the site after the effort started, so I would have had to catch up anyway. Thirdly, ideally I'd like to read at a faster average rate than one per day. But this hasn't happened at all, my rate has actually been rather slower, which is kind of depressing.
I've actually been running a LW sequence liveblog, mostly for my own benefit during the digestive process. See here. I find myself wondering whether others will join me in the liveblogging business sooner or later. I find it a good way to enforce actually thinking about what I am reading.
What I did personally was read through them through relatively quickly. I might not have understood it at the same level of depth but if something is related to something in the sequences then I'll know and know where I can find the information if there's anything I've forgotten.
If anyone does feel motivated to post just bare links to sequence posts, hit one of the Harry Potter threads. These seem to be attracting LW n00bs, some of whom seem actually pretty smart - i.e., the story is serving its intended purpose.
I think your post is troubling in a couple of ways.
First, I think you draw too much of a dichotomy between "read sequences" and "not read sequences". I have no idea what the true percentage of active LW members is, but I suspect a number of people, particularly new members, are in the process of reading the sequences, like I am. And that's a pretty large task - especially if you're in school, trying to work a demanding job, etc. I don't wish to speak for you, since you're not clear on the matter, but are people in the process of reading the sequences noise? I'm only in QM, and certainly wasn't there when I started posting, but I've gotten over 1000 karma (all of it on comments or discussion level posts). I'd like to think I've added something to the community.
Secondly, I feel like entrance barriers are pretty damn high already. I touched on this in my other comment, but I didn't want to make all of these points in that thread, since they were off topic to the original. <Warning: Gooey personal details> When I was a lurker, the biggest barrier to me saying hi was a tremendous fear of being downvoted. (A re-reading of this thread seems prudent in light of this discussion) I'd never been part of a forum with a karma system before, and I'd spent enough time on here to know that I really respected the opinions of most people on here. The idea of my ideas being rejected by a community that I'd come to respect was very stressful. I eventually got over it, and as I got more and more karma, it didn't hurt so much when I lost a point. But being karmassassinated was enough to throw all of that into doubt again, since when I asked about it I was just downvoted and no one commented. (I'm sure it's just no one happened to see it in recent comments, since it was deep in a thread.) I thought that it was very likely that I would leave the site after that, because it seemed to me that people simply didn't care what I had to say - my comments for about two days were met with wild downvoting and almost no replies, except by one person. But I don't think I am the only person that felt this way when ey joined LessWrong. </Gooey details>
Edit: Hyperlink messed up.
Edit 2: It just now occurred to me to add this, and I've commented enough in this thread for one person, so I'm putting it here: I think all of the meetup posts are much more harmful to the signal to noise ratio than anything else. Unless you're going to them, there's no reason to be interested in them.
It's great that you are reading the sequences. You are right it's not as simple as read them -> not noise, not read them -> noise. You say you are up to QM, so I would expect you not to make the sort of mistakes that would come from not having read the core sequences. On the other hand, if you posted something about ethics or AI (I forget where the AI stuff is chronologically), I would expect you to make some common mistakes and be basically noise.
The high barrier to entry is a problem for new people joining, but I also want a more strictly informed crowd to talk to sometimes. I think having a lower barrier to entry overall, but at least somewhere where having read stuff is strictly expected would be best, but there are problems with that.
Don't leave, keep reading. When you are done you will know what I'm getting at.
I think it's close to the end, right before/after the fun theory sequence? I've read some of the later posts just from being linked to them, but I'm not sure.
And I quite intentionally avoid talking about things like AI, because I know you're right. I'm not sure that necessarily holds for ethics, since ethics is a much more approachable problem from a layperson's standpoint. <noise> I spent a three hour car drive for fun trying to answer the question "How would I go about making an AI" even though I know almost nothing about it. The best I could come up with was having some kind of program that created a sandbox and randomly generated pieces of code that would compile, and pitting them in some kind of bracket contest that would determine intelligence and/or friendliness. Thought I'd make a discussion post about it, but I figured it was too obvious to not have been thought of before.</noise>
Get a few more (thousand?) karma and you may find getting karmassassinated doesn't hurt much any more either. I get karmassassinated about once a fortnight (frequency memory subject to all sorts of salience biases and utterly unreliable - it happens quite a lot though) and it doesn't bother me all that much.
These days I find that getting the last 50 comments downvoted is a lot less emotionally burdensome than getting just one comment that I actually personally value downvoted in the absence of any other comments. The former just means someone (or several someones) don't like me. Who cares? Chances are they are not people I respect, given that I am a lot less likely to offend people when I respect them. On the other hand if most of my comments have been upvoted but one specific comment that I consider valuable gets multiple downvotes it indicates something of a judgement from the community and is really damn annoying. On the plus side it can be enough to make me lose interest in lesswrong for a few weeks and so gives me a massive productivity boost!
I believe you. That fear is a nuisance (to us if it keeps people silent and to those who are limited by it). If only we could give all lurkers rejection therapy to make them immune to this sort of thing!
I think if I were karmassassinated again I wouldn't care nearly as much, because of how stupid I felt after the first time it happened. It was just so obvious that it was just some idiot, but I somehow convinced myself it wasn't.
But that being said, one of the reasons it bothered me so much was that there were a number of posts that I was proud of that were downvoted - the guy who did it had sockpuppets, and it was more like my last 15-20 posts had each lost 5-10 karma. (This was also one of the reasons I wasn't so sure it was karmassassination) Which put a number of posts I liked way below the visibility threshold. And it bothered me that if I linked to those comments later, people would just see a really low karma score and probably ignore it.
I should note that I have never actually been in your shoes. I haven't had any cases where there was unambiguous use of bulk sockpuppets. I've only been downvoted via breadth (up to 50 different comments from my recent history) and usually by only one person at a time (occasionally two or three but probably not two or three that go as far as 50 comments at the same time).
That would really mess with your mind if you were in a situation where you could not yet reliably model community preferences (and be personally confident in your model despite immediate evidence.)
Take it as a high compliment! Nobody has ever cared enough about me to make half a dozen new accounts. What did you do to deserve that?
It was this thread.
Basically it boiled down to this: I was suggesting that one reason some people might donate to more than one charity is that they're risk averse and want to make sure they're doing some good, instead of trying to help and unluckily choosing an unpredictably bad charity. It was admittedly a pretty pedantic point, but someone apparently didn't like it.
That seems to be something I would agree with, with an explicit acknowledgement that it relies on a combination of risk aversion and non-consequentialist values.
It didn't really help that I made my point very poorly.
Presumably also because people you respect are not very likely to express their annoyance through something as silly as karmassassination, right?
Aside: That sockpuppetry seems to now be an accepted mode of social discourse on LessWrong strikes me as a far greater social problem than people not having read the Sequences. ("Not as bad as" is a fallacy, but that doesn't mean both things aren't bad.)
edit: and now I'm going to ask why this rated a downvote. What does the downvoter want less of?
edit 2: fair enough, "accepted" is wrong. I meant that it's a thing that observably happens. I also specifically mean socking-up to mass-downvote someone, or to be a dick to people, not roleplay accounts like Clippy (though others find those problematic).
I think it was downvoted because sockpuppetry wasn't really "accepted" by LW, it was just one guy.
I personally come to Less Wrong specifically for the debates (well, that, and HP:MoR Wild Mass Guessing). Therefore, raising the barrier to entry would be exactly the opposite of what I want, since it would eliminate many fresh voices, and limit the conversation to those who'd already read all of the sequences (a category that would exclude myself, now that I think about it), and agree with everything said therein. You can quibble about whether such a community would constitute a "phyg" or not, but it definitely wouldn't be a place where any productive debate could occur. People who wholeheartedly agree with each other tend not to debate.
Oh, and by the way, there are other strategies for dispelling the public perception of your community being a "phyg", besides using rot13. Not being an ultra-exclusive "phyg" is one such strategy. If you find yourself turning to rot13 instead, then IMO the battle has already been lost.
I don't see why having the debate at a higher level of knowledge would be a bad thing. Just because everyone is familiar with a large bit of useful common knowledge doesn't mean no-one disagrees with it, or that there is nothing left to talk about. There are some LW people who have read everything and bring up interesting critiques.
Imagine watching a debate between some uneducated folks about whether a tree falling in a forest makes a sound or not. Not very interesting. Having read the sequences it's the same sort of boring as someone explaining for the millionth time that "no, technological progress or happiness is not a sufficient goal to produce a valuable future, and yes, an AI coded with that goal would kill us all, and it would suck".
The point of my post was that that is not an acceptable solution.
Firstly, a large proportion of the Sequences do not constitute "knowledge", but opinion. It's well-reasoned, well-presented opinion, but opinion nonetheless -- which is great, IMO, because it gives us something to debate about. And, of course, we could still talk about things that aren't in the sequences, that's fun too. Secondly:
No, it's not very interesting to you and me, but to the "uneducated folks" whom you dismiss so readily, it might be interesting indeed. Ignorance is not the same as stupidity, and, unlike stupidity, it's easily correctable. However, kicking people out for being ignorant does not facilitate such correction.
What's your solution, then ? You say,
To me, "more exclusive LW" sounds exactly like the kind of solution that doesn't work, especially coupled with "enforcing a little more strongly that people read the sequences" (in some unspecified yet vaguely menacing way).
What is the difference between knowledge and opinion? Are the points in the sequences true or not?
Read map and territory, and understand the way of Bayes.
The thing is, there are other places on the internet where you can talk to people who have not read the sequences. I want somewhere where I can talk to people who have read the LW material, so that I can have a worthwhile discussion without getting bogged down by having to explain that there's no qualitative difference between opinion and fact.
I don't have any really good ideas about how we might be able to have an enlightened discussion and still be friendly to newcomers. Identifying a problem and identifying myself among people who don't want a particular type of solution (relaxing LW's phygish standards), doesn't mean I support any particular straw-solution.
Some proportion of them (between 0 and 100%) are true, others are false or neither. Not being omniscient, I can't tell you which ones are which; I can only tell you which ones I believe are likely to be true with some probability. The proportion of those is far smaller than 100%, IMO.
See, it's exactly this kind of ponderous verbiage that leads to the necessity for rot13-ing certain words.
I believe that there is a significant difference between opinion and fact, though arguably not a qualitative one. For example, "rocks tend to fall down" is a fact, but "the Singularity is imminent" is an opinion -- in my opinion -- and so is "we should kick out anyone who hadn't read the entirety of the Sequences".
When you said "we should make LW more exclusive", what did you mean, then?
In any case, I do have a solution for you: why don't you just code up a Greasemonkey scriptlet (or something similar) to hide the comments of anyone with less than, say, 5000 karma? This way you can browse the site in peace, without getting distracted by our pedestrian mutterings. Better yet, you could have your scriptlet simply blacklist everyone by default, except for certain specific usernames whom you personally approve of. Then you can create your own "phyg" and make it as exclusive as you want.
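A real Greasemonkey scriptlet would be browser JavaScript poking at the page's DOM, but the filtering rule itself is simple enough to sketch. Everything below (the comment data shape, the threshold, the usernames) is invented for illustration, not taken from any actual LW script:

```python
# Hypothetical sketch of the proposed comment filter. A real userscript
# would read author and karma out of the rendered page instead of a dict.

KARMA_THRESHOLD = 5000
APPROVED = {"trusted_user_1", "trusted_user_2"}  # personal whitelist

def visible(comment, use_whitelist=False):
    """Whitelist mode blacklists everyone by default and shows only
    approved authors; otherwise show anyone at or above the karma bar."""
    if use_whitelist:
        return comment["author"] in APPROVED
    return comment["karma"] >= KARMA_THRESHOLD

comments = [
    {"author": "newbie", "karma": 120},
    {"author": "trusted_user_1", "karma": 21000},
]
print([c["author"] for c in comments if visible(c)])
# -> ['trusted_user_1']
```

The two modes correspond to the two suggestions in the comment above: a blunt karma cutoff, or the stricter blacklist-by-default variant.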
Specifically 'the way of'. Would you have the same objection with 'and understand how bayesian updating works'? (Objection to presumptuousness aside.)
Probably. The same sentiment could be expressed as something like this:
This phrasing is still a bit condescending, but a). it gives an actual link for me to read and educate my ignorant self, and b). it makes the speaker sound merely like a stuck-up long-timer, instead of a creepy phyg-ist.
Educating people is like that!
What I would have said about the phrasing is that it is wrong.
Merely telling people that they aren't worthy is not very educational; it's much better to tell them why you think they aren't worthy, which is where the links come in.
Sure, but I have no problem with people being wrong, that's what updating is for :-)
Huh? This was your example, one you advocated and one that includes a link. I essentially agreed with one of your points - your retort seems odd.
Huh again? You seemed to have missed a level of abstraction.
I mean that I'd like to be able to participate in discussion with better (possibly phygish) standards. Lesswrong has a lot of potential and I don't think we are doing as well as we could on the quality of discussion front. And I think making Lesswrong purely more open and welcoming without doing something to keep a high level of quality somewhere is a bad idea. And I'm not afraid of being a phyg.
That's all, nothing revolutionary.
It seems like my proposed solution would work for you, then. With it, you can ignore anyone who isn't enlightened enough, while keeping the site itself as welcoming and newbie-friendly as it currently is.
I'm not afraid of it either, I just don't think that power-sliding down a death spiral is a good idea. I don't need people to tell me how awesome I am, I want them to show me how wrong I am so that I can update my beliefs.
This would disrupt the flow of discussion.
I tried this on one site. The script did hide the offending comments from my eyes, but other people still saw those comments and responded to them. So I did not have to read bad comments, but I had to read the reactions to them. I could have improved my script to filter out those reactions too, but...
Humans react to the environment. We cannot consciously decide to filter out something and refuse to be influenced. If I come to a discussion with 9 stupid comments and 1 smart comment, my reaction will be different than if there was only the 1 smart comment. I can't filter those 9 comments out. Reading them wastes my time and changes my emotions. So even if you filter those 9 comments out by software, but I won't, then the discussion between two of us will be indirectly influenced by those comments. Most probably, if I see 9 stupid comments, I will stop reading the article, so I will skip the 1 smart one too.
People have evolved some communication strategies that don't work on the internet, because a necessary infrastructure is missing. If the two of us were speaking in the real world, and a third person tried to join our discussion, but I considered them rather stupid, you would see it in my body language even if I didn't tell the person openly to buzz off. But when we speak online, and I ignore someone's comments, you don't see it; this communication channel is missing. Karma does something like this, it just represents the collective emotion instead of individual emotion. (Perhaps a better approximation would be if the software allowed you to select people you consider smart, and then you would see karma based only on their clicks.)
Creating a good virtual discussion is difficult, because our instincts are based on different assumptions.
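The parenthetical suggestion above (karma recomputed from only the votes of people the reader has marked as smart) could be sketched like this; the vote-record shape is invented for illustration, since real vote data isn't public:

```python
# Per-reader karma: scores counted only from votes cast by accounts
# the reader has personally marked as trusted.

from collections import defaultdict

def personal_karma(votes, trusted):
    """votes: iterable of (voter, comment_id, delta) tuples.
    Returns comment_id -> score, ignoring votes from untrusted voters."""
    scores = defaultdict(int)
    for voter, comment_id, delta in votes:
        if voter in trusted:
            scores[comment_id] += delta
    return dict(scores)

votes = [
    ("alice", "c1", +1),
    ("bob",   "c1", -1),
    ("alice", "c2", +1),
    ("carol", "c2", +1),
]
print(personal_karma(votes, trusted={"alice", "carol"}))
# -> {'c1': 1, 'c2': 2}   (bob's downvote on c1 is ignored)
```

Each reader sees a different score for the same comment, which is exactly the "individual emotion" channel the comment above says is missing.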
Whether the sequences constitute knowledge is beside the point - they constitute a baseline for debate. People should be familiar with at least some previously stated well-reasoned, well-presented opinions before they try to debate a topic, especially when we have people going through the trouble of maintaining a wiki that catalogs relevant ideas and opinions that have already been expressed here. If people aren't willing or able to pick up the basic opinions already out there, they will almost never be able to bring anything of value to the conversation. Especially on topics discussed here that lack sufficient public exposure to ensure that at least the worst ideas have been weeded out of the minds of most reasonably intelligent people.
I've participated in a lot of forums (mostly freethought/rationality forums), and by far the most common cause of poor discussion quality among all of them was a lack of basic familiarity with the topic and the rehashing of tired, old, wrong arguments that pop into nearly everyone's head (at least for a moment) upon considering a topic for the first time. This community is much better than any other I've been a part of in this respect, but I have noticed a slow decline in this department.
All of that said, I'm not sure if LW is really the place for heavily moderated, high-level technical discussions. It isn't sl4, and outreach and community building really outweigh the more technical topics, and (at least as long as I've been here) this has steadily become more and more the case. However, I would really like to see the sort of site the OP describes (something more like sl4) as a sister site (or if one already exists I'd like a link). The more technical discussions and posts, when they are done well, are by far what I like most about LW.
I agree with pretty much everything you said (except for the sl4 stuff, because I haven't been a part of that community and thus have no opinion about it one way or another). However, I do believe that LW can be the place for both types of discussions -- outreach as well as technical. I'm not proposing that we set the barrier to entry at zero; I merely think that the guideline, "you must have read and understood all of the Sequences before posting anything" sets the barrier too high.
I also think that we should be tolerant of people who disagree with some of the Sequences; they are just blog posts, not holy gospels. But it's possible that I'm biased in this regard, since I myself do not agree with everything Eliezer says in those posts.
(italics mine)
How did you arrive at that idea?
The point isn't to agree with the stuff, but to be familiar with it, with standard arguments that the Sequences establish. If you tried to talk advanced mathematics/philosophy/whatever with people, and didn't know the necessary math/philosophy/whatever, people would tell you some equivalent of "read the sequences".
This is not the rest of the Internet, where everyone is entitled to their opinion and the result is that discussions never get anywhere (in reality, nobody is really interested in anyone's mere opinion, and the result is something like this). If you're posting uninformedly and rehashing old stuff or committing errors the core sequences teach you not to commit, you're producing noise.
This is what I love about LW. There is an actual signal-to-noise ratio, rather than a sea of mere opinion.
nyan_sandwich said that the Sequences contain not merely arguments, but knowledge. This implies a rather high level of agreement with the material.
I agree, but:
I am perfectly fine with that, as long as they don't just say, "read all of the Sequences and then report back when you're ready", but rather, "your arguments have already been discussed in depth in the following sequence: $url". The first sentence merely dismisses the reader; the second one provides useful material.
Hm, that's a little tricky. I happen to agree that they contain much knowledge - they aren't pure knowledge, there is opinion there, but there is a considerable body of insight and technique useful to a rationalist (that is, useful if you want to be good at arriving at true beliefs or making decisions that achieve your goals). Enough that it makes sense to want debate to continue from that level, rather than from scratch.
However, let's keep our eyes on the ball - that being the true expectation around here. The expectation is emphatically NOT that people should agree with the material in the Sequences. Merely that we don't have to re-hash the basics.
Besides, if you manage to read a sequence, understand it, and still disagree, that means your reply is likely to be interesting and highly upvoted.
Hm. Yeah, I wouldn't want anyone to actually be told "read all the sequences" (and afaik this never happens). It'd be unreasonable to, say, expect people to read the quantum mechanics sequence if they don't intend to discuss QM interpretations. However, problems like what is evidence and how to avoid common reasoning failures are relevant to pretty much everything, so I think an expectation of having read Map and Territory and Mysterious Answers would be useful.
Agreed.
I emphatically agree with you there, as well; but by making this site more "phygvfu", we risk losing this capability.
I agree that these are very useful concepts in general, but I still maintain that it's best to provide the links to these posts in context, as opposed to simply locking out anyone who hadn't read them -- which is what nyan_sandwich seems to be suggesting.
Trouble is, I'm not really sure what nyan_sandwich is suggesting, in specific and concrete terms, over and above already existing norms and practices. "I wish we had higher quality debate" is not a mechanism.
Yesss ... the sequences are great stuff, but they do not reach the level of constituting settled science.
They are quite definitely settled tropes, but that's a different level of thing. Expecting familiarity with them may (or may not) be reasonable; expecting people to treat them as knowledge is rather another thing.
A 'debate club' mindset is one of the things I would try to avoid. Debates emerge when there are new ideas to be expressed and new outlooks or bodies of knowledge to consider - and the supply of such is practically endless. You don't go around trying to artificially encourage an environment of ignorance just so some people are sufficiently uninformed that they will try to argue trivial matters. That's both counterproductive and distasteful.
I would not be at all disappointed if a side effect of maintaining high standards of communication causes us to lose some participants who "come to Less Wrong specifically for the debates". Frankly, that would be among the best things we could hope for. That sort of mindset is outright toxic to conversations and often similarly deleterious to the social atmosphere.
I wasn't suggesting we do that, FWIW.
I think there's a difference between flame wars and informed debate. I'm in favor of the latter, not the former. On the other hand, I'm not a big fan of communities where everyone agrees with everyone else. I acknowledge that they can be useful as support groups, but I don't think that LW is a support group, nor should it become one. Rationality is all about changing one's beliefs, after all...
Let's be explicit here - your suggestion is that people like me should not be here. I'm a lawyer, and my mathematics education ended at Intro to Statistics and Advanced Theoretical Calculus. I'm interested in the cognitive bias and empiricism stuff (raising the sanity line), not AI. I've read most of the core posts of LW, but haven't gone through most of the sequences in any rigorous way (i.e. read them in order).
I agree that there seem to be a number of low quality posts in discussion recently (in particular, Rationally Irrational should not be in Main). But people willing to ignore the local social norms will ignore them however we choose to enforce them. By contrast, I've had several ideas for Discussion posts that I haven't posted, because I don't think they meet the community's expected quality standard.
Raising the standard for membership in the community will exclude me or people like me. That will improve the quality of technical discussion, at the cost of the "raising the sanity line" mission. That's not what I want.
I'm not the one who downvoted you, but if I were to hazard a guess, I'd say you were downvoted because when you start off by saying "people like me", it immediately sets off a warning in my head. That warning says that you have not separated personal identity from your judgment process. At the very least, by establishing yourself as a member of "people like me", you signify that you have already given up on trying to be less wrong, and resigned yourself to being more wrong. (I strongly dislike using the terms "less wrong" and "more wrong" to describe elites and peasants of LW, but I'm using them to point out to you the identity you've painted for yourself.)
Also, there is /always/ something you can do about a problem. The answer to this particular problem is not, "Noobs will be noobs, let's give up".
If by "giving up on trying to be less wrong," you mean I'm never going to be an expert on AI, decision theory, or philosophy of consciousness, then fine. I think that definition is idiosyncratic and unhelpful.
Raising the sanity line does not require any of those things.
Don't put up straw men; I never said that to be less wrong, you had to do all those things. "Less wrong" represents an attitude towards the world, not an endpoint.
Then I do not understand what you mean when you say I am "giving up on trying to be less wrong"
No martyrs allowed.
I don't propose that people who haven't read everything be disallowed from being taken seriously, so long as they don't say anything stupid. It's fine if you haven't read the sequences and don't care about AI or heavy philosophy stuff; I just don't want to read dumb posts on those topics from someone who hasn't read the material.
As a matter of fact, I was careful to not propose much of anything. Don't confuse "here's a problem that I would like solved" with "I endorse this stupid solution that you don't like".
I, for one, would like to see discussion of LW topics from the perspective of someone knowledgeable about the history of law; after all law is humanity's main attempt to formalize morality, so I would expect some overlap with FAI.
I don't mind people who haven't read the sequences, as long as they don't start spouting garbage that's already been discussed to death and act all huffy when we tell them so; common failure modes are "Here's an obvious solution to the whole FAI problem!", "Morality all boils down to X", and "You people are a cult, you need to listen to a brave outsider who's willing to go against the herd like me".
Reading the comments, it feels like the biggest concern is avoiding chasing away the initiates to our phyg. Perhaps tiered sections, where demonstrable knowledge of the last section gains you access to higher signal-to-noise levels? It would certainly make our phyg resemble another well known phyg.
Maybe we should charge thousands of dollars for access to the sequences as well? And hire some lawyers...
More seriously, I wonder what people's reaction would be to a newbie section that wouldn't be as harsh as the now-much-harsher normal discussion. This seems to go over well on the rest of the internet.
Sort of like raising the price and then having a sale...
Sounds like a good idea, would be an incentive for reading and understanding the sequences to many people and could raise the quality level in the higher 'levels' considerably. There are also downsides: We might look more phyg-ish to newbies, discussion quality at the lower levels could fall rapidly (honestly, who wants to debate about 'free will' with newbies when they could be having discussions about more interesting and challenging topics?) and, well, if an intelligent and well-informed outsider has to say something important about a topic, they won't be able to.
For this to be implemented, we'd need a user rights system with the respective discussion sections as well as a way to determine the 'level' of members. Quizzes with questions randomly drawn from a large pool of questions with a limited number of tries per time period could do well, especially if you don't give any feedback about the scoring other than 'you leveled up!' and 'Your score wasn't good enough, re-read these sequences:__ and try again later.'
And, of course, we need the consent of many members and our phyg-leaders as well as someone to actually implement it.
This sounds like a good idea, but I think it might be too difficult to implement in practice, as determined users will bend their efforts toward guessing the password in order to gain access to the coveted Inner Circle. This isn't a problem for that other phyg, because their access is gated by money, not understanding.
I think the freemasons have this one solved for us: instead of passwords, we use interview systems, where people of the level above have to agree that you are ready before you are invited to the next level. Likewise, we make it known that helpful input on the lower levels is one of the prerequisites to gaining a higher level - we incentivise constructive input on the lower tiers, and effectively gate access to the higher tiers.
Why does this solution need to be so global? Why don't we simply allow users to blacklist/whitelist other users as they see fit, on an individual basis? This way, if someone wants to form an ultra-elite cabal, they can do that without disturbing the rest of the site for anyone else.
You're not proposing a different system, you're just proposing additional qualifiers.
Depending on other factors, it could also resemble a school system.
[meta] A simple reminder: This discussion has a high potential to cause people to embrace and double down on an identity as part of the inner or outer circles. Let's try to combat that.
In line with the above, please be liberal with explanations as to why you think an opinion should be downvoted. Going through the thread and mass-downvoting every post you disagree with is not helpful. [/meta]
The post came across to me as an explicit call to such, which is rather stronger than "has a high potential".
I can understand people wanting that. If the goal is to spread this information, however, I'd suggest that those wanting to be part of an Inner Circle should go Darknet, invitation only, and keep these discussions there, if you must have them at all.
As someone who has been around here maybe six months and comes every day, I have yet to drink enough kool aid not to find ridiculous elements to this discussion.
"We are not a Phyg! We are not a Phyg! How dare you use that word?" Could anything possibly make you look more like a Phyg than tabooing the word, and karmabombing people who just mention it? Well, the demand that anyone who shows up should read a million words in blog posts by one individual, and agree with most all of it before speaking does give "We are not a Phyg!" a run for it's money.
Take a step back, and imagine yourself at a new site that had some interesting material, and then coming on a discussion like this. Just what kind of impression would it give you?
Of course, if you just want to talk to the people who you consider have interesting things to say, that's fine and understandable. In fact, I think this discussion serves your purpose well, because it will chase away new folks, and discourage those who haven't been here long from saying much.
Given the current list software, sharing that infrastructure between those who want a pure playground and those who want new playmates creates an inevitable conflict. It is possible to have list filtering that is more fine grained, and offers more user control, that mitigates much of the problem. That would be a better solution than a Darknet, but it's work.
I'm amused by the framing as a hypothetical. I'm far from being an old-timer, but I've been around for a while, and when I was new to this site a discussion like this was going on. I suspect the same is true for many of us. This particular discussion comes around on the guitar like clockwork.
What impression did it leave you?
In my case it left the impression that (a) this was an Internet forum like any other I've been on in the past seventeen years (b) like all of them, it behaved as though its problems were unique and special, rather than a completely generic phenomenon. So, pretty much as normal then.
BTW, to read the sequences is not to agree with every word of them, and when I read all the rest of the posts chronologically from 2009-2011 the main thing I got from it was the social lay of the land.
(My sociology is strictly amateur, though an ongoing personal interest.)
My $0.02 (apologies if it's already been said; I haven't read all the comments): wanting to do Internet-based outreach and get new people participating is kind of at odds with wanting to create a specialized advanced-topics forum where we're not constantly rehashing introductory topics. They're both fine goals, but trying to do both at once doesn't work well.
LW as it is currently set up seems better optimized for outreach than for being an advanced-topics forum. At the same time, LW doesn't want to devolve to the least common denominator of the Internet. This creates tension. I'm about .6 confident that tension is intentional.
Of course, nothing stops any of us from creating invitation-only fora to which only the folks whose contributions we enjoy are invited. To be honest, I've always assumed that there exist a variety of more LW-spinoff private forums where the folks who have more specialized/advanced groundings get to interact without being bothered by the rest of us.
Somewhat relatedly, one feature I miss from the bad old usenet days is kill files. I suspect that I would value LW more if I had the ability to conceal-by-default comments by certain users here. Concealing sufficiently downvoted comments is similar in principle, but not reliable in practice.
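The kill-file idea above (and the earlier per-user blacklist/whitelist suggestion) is simple enough to sketch. This is a minimal illustration under assumed names (`Comment`, `render`); it is not LW's actual codebase, and the collapse-instead-of-delete behavior is the usenet convention the commenter describes:

```python
# A minimal sketch of a usenet-style kill file: comments from authors in a
# reader's personal kill file are collapsed by default rather than deleted,
# so the reader can still expand them on demand.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str

def render(comment: Comment, kill_file: set[str]) -> str:
    """Show the comment, or a collapsed placeholder if the author is killed."""
    if comment.author in kill_file:
        return f"[comment by {comment.author} hidden - click to expand]"
    return f"{comment.author}: {comment.text}"
```

Unlike karma-based collapsing, the filter here is per-reader, which is what makes it reliable: it reflects one reader's judgment rather than an aggregate vote.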
You're suggesting a strategy of tension?
Aw. And they didn't invite nyan_sandwich. That's so sad.
He or she should get together with other people who haven't been invited to Even Less Wrong and form their own. Then one day they can get together with Even Less Wrong like some NFL/AFL merger, only with more power to save the world.
There would have to be a semaphore or something, somewhere. So these secret groups can let each other know they exist without tipping off the newbs.
I've lurked here for over a year and just started posting in the fan fic threads a month ago. I have read a handful of posts from the sequences and I believe that some of those are changing my life. Sometimes when I start a sequence post I find it uninteresting and I stop. Posts early in the recommended order do this, and that gets in the way every time I try to go through in order. I just can't be bothered because I'm here for leisure and reading uninteresting things isn't leisurely.
I am noise and I am part of the doom of your community. You have my sympathy, and also my unsolicited commentary:
Presently your community is doomed because you don't filter.
Noise will keep increasing until the community you value splinters, scatters, or relocates itself as a whole. A different community will replace it, resembling the community you value just enough to mock you.
If you intentionally segregate based on qualifications your community is doomed anyway.
The qualified will stop contributing to the unqualified sectors, will stop commending potential qualifiers as they approach qualification, and will stop driving out never qualifiers with disapproval. Noise will win as soon as something drives a surge of new interest and the freshest of the freshmen overwhelm the unqualified but initiated.
Within the fortress of qualification things will be okay. They might never feel as good as you think you remember, but when you look through that same lens from further ahead you might recognize a second Golden Age of Whatever. Over time less new blood will be introduced, especially after the shanty town outside the fortress burns to the ground a couple times. People will leave for the reasons people leave. The people left will become more insular and self referential. That will further drive down new blood intake.
Doomed.
What are you going to do about it?
The best steps to take to sustain the community you value in this instance may be different than the best steps to take to build a better instance of the community.
I haven't read most of the sequences yet, and I agree with most of what is said by those LW members you'd like to see more of.
Most of the criticisms I voice are actually rephrased and forwarded arguments and ideas from people much smarter and more impressive than me. Including big names like Douglas Hofstadter. Quite a few of them have read all of the sequences too.
Here is an example from yesterday. I told an AI researcher about a comment made on LW (don't worry about possible negative influence; they are already well aware of everything and have read the sequences). Here is part of the reply:
...
I would usually rephrase this at some point and post it as a reply.
And this is just one of many people who simply don't bother to get into incredibly exhausting debates with a lesswrong mob.
Without me your impression that everyone agrees with you would be even worse. And by making this community even more exclusive you will get even more out of touch with reality.
It is relatively easy to believe that the only people who would criticize your beloved beliefs are some idiots like me who haven't even read your scriptures. Guess again!
The best way to become more exclusive while not giving the impression of a cult, or by banning people, is by raising your standards and being more technical. As exemplified by all the math communities like the n-Category Café or various computer science blogs (or most of all technical posts of lesswrong).