EDIT: Thanks to people not wanting certain words google-associated with LW: Phyg

Lesswrong has the best signal/noise ratio I know of. This is great. This is why I come here. It's nice to talk about interesting rationality-related topics without people going off the rails about politics/fail philosophy/fail ethics/definitions/etc. This seems to be possible because a good number of us have read the lesswrong material (sequences, etc) which inoculates us against that kind of noise.

Of course Lesswrong is not perfect; there is still noise. Interestingly, most of it is from people who have not read some sequence and thereby make the default mistakes or don't address the community's best understanding of the topic. We are pretty good about downvoting and/or correcting posts that fail at the core sequences, which is good. However, there are other sequences, too, many of them critically important to not failing at metaethics/thinking about AI/etc.

I'm sure you can think of some examples of what I mean. People saying things that you thought were utterly dissolved in some post or sequence, but they don't address that, and no one really calls them out. I could dig up a bunch of quotes but I don't want to single anyone out or make this about any particular point, so I'm leaving it up to your imagination/memory.

It's actually kind of frustrating seeing people make these mistakes. You could say that if I think someone needs to be told about the existence of some sequence they should have read before posting, I ought to tell them, but that's actually not what I want to do with my time here. I want to spend my time reading and participating in informed discussion. A lot of us do end up engaging with mistaken posts, but that lowers the quality of discussion here because so much time and space gets spent battling ignorance instead of advancing knowledge and discussing real problems.

It's worse than just "oh here's some more junk I have to ignore or downvote", because the path of least resistance ends up being "ignore any discussion that contains contradictions of the lesswrong scriptures", which is obviously bad. There are people who have read the sequences and know the state of the arguments and still have some intelligent critique, but it's quite hard to tell the difference between that and someone explaining for the millionth time the problem with "but won't the AI know what's right better than humans?". So I just ignore it all and miss a lot of good stuff.

Right now, the only stuff I can reasonably trust to be intelligent, informed, and interesting is the promoted posts. Everything else is a minefield. I'd like there to be something similar for discussion/comments. Some way of knowing "these people I'm talking to know what they are talking about" without having to dig around in their user history or whatever. I'm not proposing a particular solution here, just saying I'd like there to be more high-quality discussion between more properly sequenced LWers.

There is a lot of worry on this site about whether we are too exclusive or too phygish or too harsh in our expectation that people be well-read, which I think is misplaced. It is important that modern rationality have a welcoming public face and somewhere that people can discuss without having read three years worth of daily blog posts, but at the same time I find myself looking at the moderation policy of the old sl4 mailing list and thinking "damn, I wish we were more like that". A hard-ass moderator righteously wielding the banhammer against cruft is a good thing and I enjoy it where I find it. Perhaps these things (the public face and the exclusive discussion) should be separated?

I've recently seen someone saying that no-one complains about the signal/noise ratio on LW, and therefore we should relax a bit. I've also seen a good deal of complaints about our phygish exclusivity, the politics ban, the "talk to me when you read the sequences" attitude, and so on. I'd just like to say that I like these things, and I am complaining about the signal/noise ratio on LW.

Lest anyone get the idea that no-one thinks LW should be more phygish or more exclusive, let me hereby register that I for one would like us to all enforce a little more strongly that people read the sequences and even agree with them in a horrifying manner. You don't have to agree with me, but I'd just like to put out there as a matter of fact that there are some of us that would like a more exclusive LW.

Our Phyg Is Not Exclusive Enough

I've lurked here for over a year and just started posting in the fan fic threads a month ago. I have read a handful of posts from the sequences and I believe that some of those are changing my life. Sometimes when I start a sequence post I find it uninteresting and I stop. Posts early in the recommended order do this, and that gets in the way every time I try to go through in order. I just can't be bothered because I'm here for leisure and reading uninteresting things isn't leisurely.

I am noise and I am part of the doom of your community. You have my sympathy, and also my unsolicited commentary:

Presently your community is doomed because you don't filter.

Noise will keep increasing until the community you value splinters, scatters, or relocates itself as a whole. A different community will replace it, resembling the community you value just enough to mock you.

If you intentionally segregate based on qualifications your community is doomed anyway.

The qualified will stop contributing to the unqualified sectors, will stop commending potential qualifiers as they approach qualification, and will stop driving out never qualifiers with disapproval. Noise will win as soon as something ... (read more)

I suspect communities have a natural life cycle and most are doomed. Either they change unrecognisably or they die. This is because the community members themselves change with time and change what they want, and what they want and will put up with from newbies, and so on. (I don't have a fully worked-out theory yet, but I can see the shape of it in my head. I'd be amazed if someone hasn't written it up.)

What this theory suggests: if the forum has a purpose beyond just existence (as this one does), then it needs to reproduce. The Center for Modern Rationality is just the start. Lots of people starting a rationality blog might help, for example. Other ideas?

4Armok_GoB
This is a good idea if and only if we can avoid summoning Azathoth.
3TheOtherDave
You seem to be implying here that LW's purpose is best achieved by some forum continuing to exist in LW's current form. Yes? If so, can you expand on your reasons for believing that?
2David_Gerard
No, that would hold only if one thinks a forum is the best vehicle. It may not even be a suitable one. My if-then does assume a further "if" that a forum is, at the least, an effective vehicle.

(nods) OK, cool.

My working theory is that the original purpose of the OB blog posts that later became LW was to motivate Eliezer to write down a bunch of his ideas (aka "the Sequences") and get people to read them. LW continues to have remnants of that purpose, but less and less so with every passing generation.

Meanwhile, that original purpose has been transferred to the process of writing the book I'm told EY is working on. I'm not sure creating new online discussion forums solves a problem anyone has.

As that purpose gradually becomes attenuated beyond recognition, I expect that the LW forum itself will continue to exist, becoming to a greater and greater extent a site for discussion of HP:MoR, philosophy, cognition, self-help tips, and stuff its users think is cool that they can somehow label "rational." A small group of SI folks will continue to perform desultory maintenance, and perhaps even post on occasion. A small group of users will continue to discuss decision theory here, growing increasingly isolated from the community.

If/when EY gets HP:MoR nominated for a Hugo award, a huge wave of new users will appear, largely representative of science-fictio... (read more)

3wedrifid
And, more precisely
5David_Gerard
Like NaNoWriMo or thirty things in thirty days (which EY indirectly inspired) - giving the muse an office job. Except, of course, being Eliezer, he made it one a day for two years.
0FourFire
I'm responding to congratulate you on your correct prediction. I see this account hasn't been active in over four years.

If anyone does feel motivated to post just bare links to sequence posts, hit one of the Harry Potter threads. These seem to be attracting LW n00bs, some of whom seem actually pretty smart - i.e., the story is working to its intended purpose.

Lest anyone get the idea that no-one thinks LW should be more phygish or more exclusive, let me hereby register that I for one would like us to all enforce a little more strongly that people read the sequences and even agree with them in a horrifying manner. You don't have to agree with me, but I'd just like to put out there as a matter of fact that there are some of us that would like a more exclusive LW.

I can understand people wanting that. If the goal is to spread this information, however, I'd suggest that those wanting to be part of an Inner Circle should go Darknet, invitation only, and keep these discussions there, if you must have them at all.

As someone who has been around here maybe six months and comes every day, I have yet to drink enough Kool-Aid not to find ridiculous elements to this discussion.

"We are not a Phyg! We are not a Phyg! How dare you use that word?" Could anything possibly make you look more like a Phyg than tabooing the word, and karmabombing people who just mention it? Well, the demand that anyone who shows up should read a million words in blog posts by one individual, and agree with most all of it before speaking does give "We are not... (read more)

imagine yourself at a new site that had some interesting material, and then coming on a discussion like this.

I'm amused by the framing as a hypothetical. I'm far from being an old-timer, but I've been around for a while, and when I was new to this site a discussion like this was going on. I suspect the same is true for many of us. This particular discussion comes around on the gittar like clockwork.

4buybuydandavis
What impression did it leave on you?

In my case it left the impression that (a) this was an Internet forum like any other I've been on in the past seventeen years (b) like all of them, it behaved as though its problems were unique and special, rather than a completely generic phenomenon. So, pretty much as normal then.

BTW, to read the sequences is not to agree with every word of them, and when I read all the rest of the posts chronologically from 2009-2011 the main thing I got from it was the social lay of the land.

(My sociology is strictly amateur, though an ongoing personal interest.)

9buybuydandavis
This is hardly my first rodeo, but this place is unlike any others I've been on for exactly the point at issue here - the existence of a huge corpus written overwhelmingly by one list member that people are expected to read before posting and relate their posts to.

The closest I've come to such attitudes were on two lists; one Objectivist, one Anarchist. On the Objectivist list, where there was a little bit of "that was all answered in this book/lecture from Rand", people were not at all expected to have read the entire corpus before participating. Rand herself was not participating on the list, so there is another difference. The Anarchist list was basically the list of an internet personality who was making a commercial venture of it, so he controlled the terms of the debate as suited his purposes, and tabooed issues he considered settled. Once that was clear to me, I left the site, considering it too phygish.

I'd imagine that there are numerous religious sites with the same kind of reading/relating requirements, but only a limited number of those where the author of the corpus was a member of the list.

To LW's credit, "read the sequences" as a counterargument seems increasingly rare these days. I've seen it once in the last week or two, but considering that we're now dealing with an unusually large number of what I'll politely describe as contrarian newcomers, I'll still count that as a win.

In any case, I don't get the sense that this is an unknown issue. Calls for good introductory material come up fairly often, so clearly someone out there wants a better alternative to pointing newcomers at a half-million words of highly variable material and hoping for the best -- but even if successful, I suspect that'll be of limited value. The length of the corpus might contribute to accusations of phygism, but it's not what worries me about LW. Neither is the norm of relating posts to the Sequences.

This does give me pause, though: LW deals politely with intelligent criticism, but it rarely internalizes it. To the best of my recollection none of the major points of the Sequences have been repudiated, although in a work of that length we should expect some to have turned out to be demonstrably wrong; no one bats a thousand. A few seem to have slipped out of the de-facto canon... (read more)

What can we do about this?

Reply not with "read the sequences", but with "This is covered in [link to post], which is part of [link to sequence]." ? Use one of the n00b-infested Harry Potter threads, with plenty of wrong but not hopeless reasoning, as target practice.

6buybuydandavis
I think that you've got a bigger problem than internalizing repudiations. The demand for repudiations is the mistake Critical Rationalists make - "show me where I'm wrong" is not a sufficiently open mind.

First, the problem might be that you're not even wrong. You can't refute something that's not even wrong. When someone is not even wrong, he has to be willing to justify his ideas, or you can't make progress. You can lead a horse to water, but you can't make him think. (As an aside, is there an article about Not Even Wrong here? I don't remember one, and it is an important idea with which a lot are probably already familiar. Goes well with the list name, too.)

Second, if one is only open to repudiations, one is not open to fundamentally different conceptualizations on the issue. The mapping from one conceptualization to another can be a tedious and unproductive exercise, if even possible in practical terms.

I've spent years on a mailing list about Stirner - likely The mailing list on Stirner. In my opinion, Stirner has the best take on metaethics, and even if you don't agree, there are a number of issues he brings up better than others. A lot of smart folks on that list, and we made some limited original progress. Stirner is near the top of the list for things I know better than others. People who would know better are likely people I already know in a limited fashion. I thought to write an article from that perspective, contrasting that with points in the Metaethics sequence. But I don't think the argument in the Metaethics sequence really follows, and contemplating an exegesis of it to "repudiate" it fills me with a vast ennui. So, it's Bah Humbug, and I don't contribute.

Whatever you might think of me, setting up impediments to people sharing what they know best is probably not in the interest of the list. There's enough natural impediment to posting an article in a group; always easier to snipe at others than put your own ideas up for target practice. Ther
1Nornagest
Not that I know of, although it's referenced all over the place -- like Paul Graham's paper on identity, it seems to be an external part of the LW canon. The Wikipedia page on "Not Even Wrong" does appear in XiXiDu's list of external resources -- a post that's faded into undeserved obscurity, I think. As to your broader point, I agree that "show me where I'm wrong" is suboptimal with regard to establishing a genuinely open system of ideas. It's also a good first step, though, and so I'd view a failure to internalize repudiation as a red flag of the same species as what you seem to be pointing to -- a bigger one, in fact. Not sufficient, but necessary.
0buybuydandavis
Certainly if you have been repudiated, but fail to internalize the repudiation, you've got a big red flag. But that's why I think it's less dangerous and debilitating - it's clear, obvious, and visible. I consider only listening to repudiations as the bigger problem: it is being willfully deaf and non-responsive to potential improvement. It's not failing to understand, it's refusing to listen.
2khafra
In that case, Lukeprog's metaethics sequence must have been of great comfort to you, since he didn't really spend much time on Eliezer's metaethics sequence. Perhaps you could just start covering Stirner's material in a discussion post or two and see what happens.
4[anonymous]
Just curious, was the anarchist Fgrsna Zbylarhk?
3buybuydandavis
Ding! Ding! Ding! We have a winner! Yeah, that's the one. I don't begrudge a guy trying to make a buck, or wanting to push his agenda. I find him a bright guy with a lot of interesting things to say. And I'll still listen to his youtube videos. But his agenda conflicts with mine, and I don't want to spend energy discussing issues in a community where one isn't allowed to publicly argue against some dogma in philosophy. That which can be destroyed by the truth should be.
2[anonymous]
Oooh, what's my prize? Yup, I pretty much agree with your assessment. It was quite the interesting rabbit hole to go down. But at least for me, it became anti-productive and unhealthy. I found much better uses of my time.
-8Alsadius
2David_Gerard
That's an important difference, but I don't think it's one for the social issues being raised in this post or this thread, which are issues of community interaction - and I think so because it's the same issues covered in A Group Is Its Own Worst Enemy. This post is precisely the call for a wizard smackdown.
3TheOtherDave
I was going to say essentially this, but the other David did it for me.
2buybuydandavis
I'm sure. What I wonder is how much the sequences even represent a consensus of the original list members involved in the discussion. In my estimation, it varies a lot. In particular, I doubt EY carried the day with even a strong plurality with both his conclusions and argument in the metaethics sequence.
2wedrifid
I doubt even Eliezer_2012 would agree with all of them. They were a rather rapidly produced bunch of blog posts and very few people would maintain consistent endorsement of past blogging output.
3[anonymous]
Hmm. I generally agree with the original post, but I don't want to be part of an inner circle. I want access to a source of high insight-density information. Whether or not I myself am qualified to post there is an orthogonal issue. Of course, such a thing would have an extremely high maintenance cost. I have little justification for asking to be given access to it at no personal cost. Spreading information is important too, but only to the extent that what's being spread is contributing to the collective knowledge.
-2buybuydandavis
Which is yet another purpose that involves tradeoffs with the ones I previously mentioned. I'm puzzled why you think a private email list involves extremely high maintenance costs. Private google group? A technological solution to the mass of the problem on this list wouldn't seem that hard either. As I've pointed out in other threads, complex message filtering has been around at least since usenet. Much of the technical infrastructure must already be in place, since we have personally customizable filtering based on karma and Friends. Or add another Karma filter for total Karma for the poster, so that you don't even have to enter Friends by hand. Combine Poster Karma with Post Karma with an inclusive OR, and you've probably gone 80% of the way there to being able to filter unwanted noise.
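To make the proposed rule concrete, here is a minimal sketch of a karma-based filter of the kind described above. The Comment structure, field names, and thresholds are hypothetical illustrations, not LW's actual data model or code:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    score: int          # karma score of this particular comment
    author_karma: int   # total karma of its author

# Hypothetical thresholds; in practice these would be user-configurable.
MIN_COMMENT_SCORE = 2
MIN_AUTHOR_KARMA = 500

def passes_filter(comment, friends):
    """Inclusive OR over the signals mentioned above: show the comment if the
    author is a Friend, OR the author has high total karma, OR the comment
    itself is upvoted enough."""
    return (
        comment.author in friends
        or comment.author_karma >= MIN_AUTHOR_KARMA
        or comment.score >= MIN_COMMENT_SCORE
    )

comments = [
    Comment("veteran", score=0, author_karma=4000),   # kept: high poster karma
    Comment("newcomer", score=7, author_karma=12),    # kept: well-received comment
    Comment("driveby", score=-3, author_karma=5),     # hidden: fails every test
]
visible = [c for c in comments if passes_filter(c, friends={"wedrifid"})]
```

Because the tests are OR'd together, the filter never hides an upvoted comment from a low-karma newcomer, which is roughly the "80% of the way there" claim being made.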
3[anonymous]
Not infrastructural costs. Social costs (and quite a bit of time, I expect). It takes effort to select contributors and moderate content, especially when those contributors might be smarter than you are. Distinguishing between correct contrarianism and craziness is a hard problem. The difficulty is in working out who to filter. Dealing with overt trolling is easy. I change my opinions often enough over a long enough period of time that a source of 'information that I agree with' is nearly useless to me.
2buybuydandavis
I think I get it. You want someone/something else to do the filtering for you? That's easy enough too. If others are willing, instead of being Friended, they could be FilterCloned, and you could filter based on their settings. Let EY be the DefaultFilterClone, or let him and his buddies in the Star Chamber set up a DefaultFilterClone.
0[anonymous]
Not exactly 'want'. The nature of insights is that they are unexpected. But essentially yes.
[-]brilee250

[meta] A simple reminder: This discussion has a high potential to cause people to embrace and double down on an identity as part of the inner or outer circles. Let's try to combat that.

In line with the above, please be liberal with explanations as to why you think an opinion should be downvoted. Going through the thread and mass-downvoting every post you disagree with is not helpful. [/meta]

This discussion has a high potential to cause people to embrace and double down on an identity as part of the inner or outer circles. Let's try to combat that.

The post came across to me as an explicit call to such, which is rather stronger than "has a high potential".

[-]Larks250

I agree. Low barriers to entry (and utterly generic discussions, like on which movies to watch) seem to have lowered the quality. I often find myself skimming discussions for names I recognize, and just read their comments - ironic, given that once upon a time the anti-kibitzer seemed pressing!

Lest this be seen as unwarranted arrogance: there are many values of p in [0,1] such that I would run a p risk of getting personally banned in return for removing the bottom p of the comments. I often write out a comment and delete it, because I think that, while above the standard of the adjacent comments, it is below what I think the minimal bar should be. Merely saying new, true things about the subject matter is not enough!

The Sequence Re-Runs seem to have had little participation, which is disappointing - I had great hope for those.

The Sequence Re-Runs seem to have had little participation, which is disappointing - I had great hope for those.

As someone who is rereading the sequences, I think I have a data point as to why. First of all, the "one post a day" is very difficult for me to do. I don't have time to digest a LW post every day, especially if I've got an exam coming up or something. Secondly, I joined the site after the effort started, so I would have had to catch up anyway. Thirdly, ideally I'd like to read at a faster average rate than one per day. But this hasn't happened at all; my rate has actually been rather slower, which is kind of depressing.

9hesperidia
I've actually been running a LW sequence liveblog, mostly for my own benefit during the digestive process. See here. I find myself wondering whether others will join me in the liveblogging business sooner or later. I find it a good way to enforce actually thinking about what I am reading.
1EStokes
What I did personally was read through them relatively quickly. I might not have understood them at the same level of depth, but if something is related to something in the sequences then I'll know, and I'll know where I can find the information if there's anything I've forgotten.
7atorm
I read them, but engaging in discussion seems difficult. Am I just supposed to pretend all of the interesting comments below don't exist and risk repeating something stupid on the Repeat post? Or should I be trying to get involved in a years-old discussion on the actual article? Sadly, this is something that has a sort of activation energy: if enough people were discussing the sequence repeats, I would discuss them too.
6Viliam_Bur
Perhaps we could save users one click by putting the summary of the article on the top of the main page with links "read the article" and "discuss the article" below. Sometimes saving users one click increases the traffic significantly.
2[anonymous]
Organizing the reading of the sequences into classes of people (think Metaethics Class of 2012) that commit to reading them, debating them, and then answering a quiz about them seems more likely to get participation.
0David_Gerard
I still read them and usually remember to vote them up for MinibearRex bothering to post them, and comment if I have something to say.
[-][anonymous]210

Edit: Eliminated text to conform to silly new norm. Check out relevant image macro.

It's whimsical, I like it. The purported SEO rationale behind it is completely laughable (really, folks? People are going to judge the degree of phyggishness of LW by googling LW and phyg together, and you're going to stand up and fight that? That's just insane), but it's cute and harmless, so why not adopt it for a few days? Of all reasons to suspect LW of phyggish behavior, this has got to be the least important one. If using the word "phyg" clinches it for someone, I wouldn't take them seriously.

6John_Maxwell
To avoid guilt by association?
5Bugmaster
Beats me. And yet I find myself going along with the new norm, just like you. One of us... One of us...
7Eugine_Nier
Well stop it. We should be able to just call a cult a cult.
0Bugmaster
Dur ? I think you might have quoted the wrong person in your comment above. Edit: Retracting my comment now that the parent is fixed
3Eugine_Nier
Fixed. Stupid clipboard working differently on windows and linux.

Why in the name of the mighty Cthulhu should people on LW read the sequences? To avoid discussing the same things again and again, so that we can move to the next step. Minus the discussion about definitions of the word phyg, what exactly are we talking about?

When a tree falls down in a LessWrong forest, why there is a "sound":

Because people on LW are weird. Instead of discussing natural and sane topics, such as cute kittens, iPhone prices, politics, horoscopes, celebrities, sex, et cetera, they talk about crazy stuff like thinking machines and microscopic particles. Someone should do them a favor, turn off their computers, and buy them a few beers, so that normal people can stop being afraid of them.

Because LW is trying to change the way people think, and that is scary. Things like that are OK only when the school system is doing it, because the school system is accepted by the majority. Books are usually also accepted, but only if you borrow them from a public library.

Because people on LW pretend they know some things better than everyone else, and that's an open challenge that someone should go and kick their butts, preferably literally. Only strong or popular people are ... (read more)

Because people on LW are weird. Instead of discussing natural and sane topics, such as cute kittens, iPhone prices, politics, horoscopes, celebrities, sex, et cetera, they talk about crazy stuff like thinking machines and microscopic particles. Someone should do them a favor, turn off their computers, and buy them a few beers, so that normal people can stop being afraid of them.

No, that isn't it. LW isn't at all special in that respect - a huge number of specialized communities exist on the net which talk about "crazy stuff", but no one suspects them of being phygs. Your self-deprecating description is a sort of applause lights for LW that's not really warranted.

Because LW is trying to change the way people think, and that is scary. Things like that are OK only when the school system is doing it, because the school system is accepted by the majority. Books are usually also accepted, but only if you borrow them from a public library.

No, that isn't it. Every self-help book (of which there's a huge industry, and most of which are complete crap) is "trying to change the way people think", and nobody sees that as weird. The Khan academy is challenging the scho... (read more)

It's not the Googleability of "phyg". One recent real-life example is a programmer who emailed me deeply concerned (because I wrote large chunks of the RW article on LW). They were seriously worried about LessWrong's potential for decompartmentalising really bad ideas, given the strong local support for complete decompartmentalisation, by this detailed exploration of how to destroy semiconductor manufacture to head off the uFAI. I had to reassure them that Gwern really is not a crazy person and had no intention of sabotaging Intel worldwide, but was just exploring the consequences of local ideas. (I'm not sure this succeeded in reassuring them.)

But, y'know, if you don't want people to worry you might go crazy-nerd dangerous, then not writing up plans for ideology-motivated terrorist assaults on the semiconductor industry strikes me as a good start.

Edit: Technically just sabotage, not "terrorism" per se. Not that that would assuage qualms non-negligibly.

On your last point, I have to cite our all-*cough*-wise Professor Quirrell

"Such dangers," said Professor Quirrell coldly, "are to be discussed in offices like this one, not in speeches. The fools […] are not interested in complications and caution. Present them with anything more nuanced than a rousing cheer, and you will face your war alone.

5[anonymous]
Nevermind that there were no actual plans for destroying fabs, and that the whole "terrorist plot" seems to be a collective hallucination. Nevermind that the author in question has exhaustively argued that terrorism is ineffective.

Yeah, but he didn't do it right there in that essay. And saying "AI is dangerous, stopping Moore's Law might help, here's how fragile semiconductor manufacture is, just saying" still read to someone (including several commenters on the post itself) as bloody obviously implying terrorism.

You're pointing out it doesn't technically say that, but multiple people coming to that essay have taken it that way. You can say "ha! They're wrong", but I nevertheless submit that if PR is a consideration, the essay strikes me as unlikely to be outweighed by using rot13 for SEO.

1[anonymous]
Yes, I accept that it's a problem that everyone and their mother leapt to the false conclusion that he was advocating terrorism. I'm not saying anything like "Ha! They're wrong!" I'm lamenting the lamentable state of affairs that led so many people to jump to a false conclusion.

"Just saying" is really not a disclaimer at all. c.f. publishing lists of abortion doctors and saying you didn't intend lunatics to kill them - if you say "we were just saying", the courts say "no you really weren't."

We don't have a demonstrated lunatic hazard on LW (though we have had unstable people severely traumatised by discussions and their implications, e.g. Roko's Forbidden Thread), but "just saying" in this manner still brings past dangerous behaviour along these lines to mind; and, given that decompartmentalising toxic waste is a known nerd hazard, this may not even be an unreasonable worry.

0[anonymous]
As far as I can tell, "just saying" is a phrase you introduced to this conversation, and not one that appears anywhere in the original post or its comments. I don't recall saying anything about disclaimers, either. So what are you really trying to say here?

It's a name for the style of argument: that it's not advocating people do these things, it's just saying that uFAI is a problem, slowing Moore's Law might help and by the way here's the vulnerabilities of Intel's setup. Reasonable people assume that 2 and 2 can in fact be added to make 4, even if 4 is not mentioned in the original. This is a really simple and obvious point.

Note that I am not intending to claim that the implication was Gwern's original intention (as I note way up there, I don't think it is); I'm saying it's a property of the text as rendered. And that me saying it's a property of the text is supported by multiple people adding 2 and 2 for this result, even if arguably they're adding 2 and 2 and getting 666.

0[anonymous]
It's completely orthogonal to the point that I'm making. If somebody reads something and comes to a strange conclusion, there's got to be some sort of five-second level trigger that stops them and says, "Wait, is this really what they're saying?" The responses to the essay made it evident that there's a lot of people that failed to have that reaction in that case. That point is completely independent from any aesthetic/ethical judgments regarding the essay itself. If you want to debate that, I suggest talking to the author, and not me.
4David_Gerard
I'd have wondered about it myself if I hadn't had prior evidence that Gwern wasn't a crazy person, so I'm not convinced that it's as obviously surface-innocuous as you feel it is. Perhaps I've been biased by hearing crazy-nerd stories (and actually going looking for them, 'cos I find them interesting). And I do think the PR disaster potential was something I would class as obvious, even if terrorist threats from web forum postings are statistically bogeyman stories. I suspect we've reached the talking past each other stage.
7TheOtherDave
I understood "just saying" as a reference to the argument you imply here. That is, you are treating the object-level rejection of terrorism as definitive and rejecting the audience's inference of endorsement of terrorism as a simple error, and DG is observing that treating the object-level rejection as definitive isn't something you can take for granted.
5Nick_Tarleton
Meaning does not excuse impact, and on some level you appear to still be making excuses. If you're going to reason about impressions (I'm not saying that you should, it's very easy to go too far in worrying about sounding respectable), you should probably fully compartmentalize (ha!) whether a conclusion a normal person might reach is false.
0[anonymous]
I'm not making excuses. Talking about one aspect of a problem does not imply that other aspects of the problem are not important. But honestly, that debate is stale and appears to have had little impact on the author. So what's the point in rehashing all of that?
2khafra
I agree that it's not fair to blame LW posters for the problem. However, I can't think of any route to patching the problem that doesn't involve either blaming LW posters, or doing nontrivial mind alterations on a majority of the general population.
2Viliam_Bur
Anyway, we shouldn't make it too easy for people to get the false conclusion, and we should err on the side of caution. Having said this, I join your lamentations.
4jacoblyles
Nevermind the fact that LW actually believes that uFAI has infinitely negative utility and that FAI has infinitely positive utility (see arguments for why SIAI is the optimal charity). That people conclude that acts that most people would consider immoral are justified by this reasoning, well I don't know where they got that from. Certainly not these pages. Ordinarily, I would count on people's unwillingness to act on any belief they hold that is too far outside the social norm. But that kind of thinking is irrational, and irrational restraint has a bad rep here ("shut up and calculate!") LW scares me. It's straightforward to take the reasoning of LW and conclude that terrorism and murder are justified.
0gwern
Is there any ideology or sect of which that could not be said? Let us recall the bloody Taoist and Buddhist rebellions or wars in East Asian history and endorsements of wars of conquest, if we shy away from Western examples.
0jacoblyles
Oh sure, there are plenty of other religions as dangerous as the SIAI. It's just strange to see one growing here among highly intelligent people who spend a ton of time discussing the flaws in human reasoning that lead to exactly this kind of behavior. However, there are ideologies that don't contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They'll say things like "don't trust your reasoning if it leads you to do awful things" (again, compare that to "shut up and calculate"). For example, political conservatism is based on a strong precautionary principle. It was developed in response to the horrors wrought by the French Revolution. One of the big black marks on the SIAI/LW is the seldom discussed justification for murder and terrorism that is a straightforward result of extrapolating the locally accepted morality.
9gwern
I don't know how you could read LW and not realize that we certainly do accept precautionary principles ("running on corrupted hardware" has its own wiki entry), that we are deeply skeptical of very large quantities or infinities (witness not one but two posts on the perennial problem of Pascal's mugging in the last week, neither of which says 'you should just bite the bullet'!), and libertarianism is heavily overrepresented compared to the general population. No, one of the 'big black marks' on any form of consequentialism or utilitarianism (as has been pointed out ad nauseam over the centuries) is that. There's nothing particular to SIAI/LW there.
3jacoblyles
It's true that lots of Utilitarianisms have corner cases where they support action that would normally considered awful. But most of them involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob. The problem with LW/SIAI is that the moral monstrosities they support are much more actionable. Today, there are dozens of companies working on AI research. LW/SIAI believes that their work will be of infinite negative utility if they are successful before Eliezer invents FAI theory and he convinces them that he's not a crackpot. The fate of not just human civilization, but all of galactic civilization is at stake. So, if any of them looks likely to be successful, such as scheduling a press conference to announce a breakthrough, then it's straightforward to see what SI/LW thinks you should do about that. Actually, given the utilities involved, a more proactive strategy may be justified, if you know what I mean. I'm pretty sure this is going to evolve into an evil terrorist organization, and would have done so already if the population weren't so nerdy and pacifistic to begin with. And yes, there are the occasional bits of cautionary principles on LW. But they are contradicted and overwhelmed by "shut up and calculate", which says trust your arithmetic utilitarian calculus and not your ugh fields.
6TheOtherDave
I agree that it follows from (L1) the assumption of (effectively) infinite disutility from UFAI, that (L2) if we can prevent a not-guaranteed-to-be-friendly AGI from being built, we ought to. I agree that it follows from L2 that if (L3) our evolving into an evil terrorist organization minimizes the likelihood that not-guaranteed-to-be-friendly AGI is built, then (L4) we should evolve into an evil terrorist organization. The question is whether we believe L3, and whether we ought to believe L3. Many of us don't seem to believe this. Do you believe it? If so, why?
8fubarobfusco
I don't expect terrorism is an effective way to get utilitarian goals accomplished. Terrorist groups not only don't tend to accomplish their goals; but also, in those cases where a terrorist group's stated goal is achieved or becomes obsolete, they don't dissolve and say "our work is done" — they change goals to stay in the terrorism business, because being part of a terrorist group is a strong social bond. IOW, terrorist groups exist not in order to effectively accomplish goals, but rather to accomplish their members' psychological needs.

"although terrorist groups are more likely to succeed in coercing target countries into making territorial concessions than ideological concessions, groups that primarily attack civilian targets do not achieve their policy objectives, regardless of their nature." — Max Abrahms, "Why Terrorism Does Not Work"

"The actual record of terrorist behavior does not conform to the strategic model's premise that terrorists are rational actors primarily motivated to achieving political ends. The preponderance of empirical and theoretical evidence is that terrorists are rational people who use terrorism primarily to develop strong affective ties with fellow terrorists." — Max Abrahms, "What Terrorists Really Want: Terrorist Motives and Counterterrorism Strategy"

Moreover, terrorism is likely to be distinctly ineffective at preventing AI advances or uFAI launch, because these are easily done in secret. Anti-uFAI terrorism should be expected to be strictly less successful than, say, anti-animal-research or other anti-science terrorism: it won't do anything but impose security costs on scientists, which in the case of AI can be accomplished much easier than in the case of biology or medicine because AI research can be done anywhere. (Oh, and create a PR problem for nonterrorists with similar policy goals.)

As such, L3 is false: terrorism predictably wouldn't work.
8gwern
Yeah. When I run into people like Jacob (or XiXi), all I can do is sigh and give up. Terrorism seems like a great idea... if you are an idiot who can't spend a few hours reading up on the topic, or even just read the freaking essays I have spent scores of hours researching & writing on this very question discussing the empirical evidence. Apparently they are just convinced that utilitarians must be stupid or ignorant. Well! I guess that settles everything.

There's a pattern that shows up in some ethics discussions where it is argued that an action that you could actually go out and start doing (so no 3^^^3 dust specks or pushing fat people in front of runaway trains) that diverges from everyday social conventions is a good idea. I get the sense from some people that they feel obliged to either dismiss the idea by any means, or start doing the inconvenient but convincingly argued thing right away. And they seem to consider dismissing the idea with bad argumentation a lesser sin than conceding a point or suspending judgment and then continuing to not practice whatever the argument suggested. This shows up often in discussions of vegetarianism.

I got the idea that XiXiDu was going crazy because he didn't see any options beyond dedicating his life to door-to-door singularity advocacy or finding the fatal flaw which proved once and for all that SI are a bunch of deluded charlatans, and he didn't want to do the former just because a philosophical argument told him to and couldn't quite manage the latter.

If this is an actual thing, people with this behavior pattern would probably freak out if presented with an argument for terrorism they weren't able to dismiss as obviously flawed extremely quickly.

1gwern
XiXi was around for a while before he began 'freaking out'.
3TheOtherDave
I think what Risto meant was "an argument for terrorism they weren't able to (dismiss as obviously flawed extremely quickly)", not "people with this behavior pattern would probably freak out (..) extremely quickly". How long it takes for the hypothetical behavior pattern to manifest is, I think, beside their point.
2TheOtherDave
(nods) I do have some sympathy for how easy it is to go from "I endorse X based on Y, and you don't believe Y" to "You reject X." But yeah, when someone simply refuses to believe that I also endorse X despite rejecting Y, there's not much else to say.
3TheOtherDave
Yup, I agree with all of this. I'm curious about jacoblyles' beliefs on the matter, though. More specifically, I'm trying to figure out whether they believe L3 is true, or believe that LW/SI believes L3 is true whether it is or not, or something else.
5gwern
'Pretty sure', eh? Would you care to take a bet on this? I'd be happy to go with a few sorts of bets, ranging from "an organization that used to be SIAI or CFAR is put on the 'Individuals and Entities Designated by the State Department Under E.O. 13224' or 'US Department of State Terrorist Designation Lists' within 30 years" to ">=2 people previously employed by SIAI or CFAR will be charged with conspiracy, premeditated murder, or attempted murder within 30 years" etc. I'd be happy to risk, on my part, amounts up to ~$1000, depending on what odds you give. If you're worried about counterparty risk, we can probably do this on LongBets (although since they require the money upfront I'd have to reduce my bet substantially).

Thanks for comments. What I wrote was exaggerated, written under strong emotions, when I realized that the whole phyg discussion does not make sense, because there is no real harm, only some people made nervous by some pattern matching. So I tried to list the patterns which match... and then those which don't.

My assumption is that there are three factors which together make the bad impression; separately they are less harmful. Being only "weird" is pretty normal. Being "weird + thorough", for example memorizing all Star Trek episodes, is more disturbing, but it only seems to harm the given individual. Majority will make fun of such individuals, they are seen as at the bottom of pecking order, and they kind of accept it.

The third factor is when someone refuses to accept the position at the bottom. It is the difference between saying "yeah, we read sci-fi about parallel universes, and we know it's not real, ha-ha silly us" and saying "actually, our interpretation of quantum physics is right, and you are wrong, that's the fact, no excuses". This is the part that makes people angry. You are allowed to take the position of authority only if you are... (read more)

-4Pentashagon
If the phyg-meme gets really bad we can just rename the site "lessharmful.com".
7gwern
Seriously?
4Anatoly_Vorobey
Which part of my comment are you incredulous about?
[-]gwern200

That nobody sees self-help books as weird or cultlike.

0John_Maxwell
redacted
0whowhowho
That is one of the central fallacies of LW. The Sequences generally don't settle issues in a step-by-step way. They are made up of postings, each of which is followed by a discussion often containing a lot of "I don't see what you mean" and "I think that is wrong because". The stepwise model may be attractive, but that doesn't make it feasible. Science isn't that linear, and most of the topics dealt with are philosophy...nuff said.

I think your post is troubling in a couple of ways.

First, I think you draw too much of a dichotomy between "read sequences" and "not read sequences". I have no idea what the true percentage of active LW members is, but I suspect a number of people, particularly new members, are in the process of reading the sequences, like I am. And that's a pretty large task - especially if you're in school, trying to work a demanding job, etc. I don't wish to speak for you, since you're not clear on the matter, but are people in the process of reading the sequences noise? I'm only in QM, and certainly wasn't there when I started posting, but I've gotten over 1000 karma (all of it on comments or discussion level posts). I'd like to think I've added something to the community.

Secondly, I feel like entrance barriers are pretty damn high already. I touched on this in my other comment, but I didn't want to make all of these points in that thread, since they were off topic to the original. When I was a lurker, the biggest barrier to me saying hi was a tremendous fear of being downvoted. (A re-reading of this thread seems prudent in light of this discussion) I'd never been part of a... (read more)

5wedrifid
Get a few more (thousand?) karma and you may find getting karmassassinated doesn't hurt much any more either. I get karmassassinated about once a fortnight (frequency memory subject to all sorts of salience biases and utterly unreliable - it happens quite a lot though) and it doesn't bother me all that much.

These days I find that getting the last 50 comments downvoted is a lot less emotionally burdensome than getting just one comment that I actually personally value downvoted in the absence of any other comments. The former just means someone (or several someones) don't like me. Who cares? Chances are they are not people I respect, given that I am a lot less likely to offend people when I respect them. On the other hand if most of my comments have been upvoted but one specific comment that I consider valuable gets multiple downvotes it indicates something of a judgement from the community and is really damn annoying. On the plus side it can be enough to make me lose interest in lesswrong for a few weeks and so gives me a massive productivity boost!

I believe you. That fear is a nuisance (to us if it keeps people silent and to those who are limited by it). If only we could give all lurkers rejection therapy to make them immune to this sort of thing!
7RobertLumley
I think if I were karmassassinated again I wouldn't care nearly as much, because of how stupid I felt after the first time it happened. It was just so obvious that it was just some idiot, but I somehow convinced myself it wasn't. But that being said, one of the reasons it bothered me so much was that there were a number of posts that I was proud of that were downvoted - the guy who did it had sockpuppets, and it was more like my last 15-20 posts had each lost 5-10 karma. (This was also one of the reasons I wasn't so sure it was karmassassination) Which put a number of posts I liked way below the visibility threshold. And it bothered me that if I linked to those comments later, people would just see a really low karma score and probably ignore it.
3Wei Dai
I think you can't give more downvotes than your karma, so that person would need 5-10 sockpuppets with at least 15-20 (EDIT: actually 4-5) karma each. If someone is going to the trouble of doing that, it seems unlikely that they would just pick on you and nobody else (given that your writings don't seem to be particularly extreme in some way). Has anyone else experience something similar?
4thomblake
Creating sockpuppets for downvoting is easy. (kids, don't try this at home). Just find a Wikipedia article on a cognitive bias that we haven't had a top-level post on yet. Then, make a post to main with the content of the Wikipedia article (restated) and references to the relevant literature (you probably can safely make up half of the references). It will probably get in the neighborhood of 50 upvotes, giving you 500 karma, which allows 2000 comment downvotes. Even if those estimates are really high, that's still a lot of power for little effort. And just repeat the process for 20 biases, and you've got 20 sockpuppets who can push a combined 20 downvotes on a large number of comments. Of course, in the bargain Less Wrong is getting genuinely high-quality articles. Not necessarily a bug.
1steven0461
If restating Wikipedia is enough to make for a genuinely high-quality article, maybe we should have a bot that copy-pastes a relevant Wikipedia article into a top-level post every few days. (Based on a few minutes of research, it looks like this is legal if you link to the original article each time, but tell me if I'm wrong.)
1thomblake
Really, I think the main problem with this is that most of the work is identifying which ones are the 'relevant' articles.
0thomblake
I was implying a non-copy-paste solution. Still, interesting idea.
0steven0461
Yes; I didn't mean to say you were implying a copy-paste solution. But if we're speaking in the context of causing good articles to be posted and not in the context of thinking up hypothetical sock-puppeting strategies, whether it's copy-pasted or restated shouldn't matter unless the restatement is better-written than the original.
0thomblake
agreed
0othercriteria
Modulo the fake references, of course.
0thomblake
of course
-2RobertLumley
There's not much reason to do something like this, when you can arbitrarily upvote your own comments with your sockpuppets and give yourself karma.
0thomblake
But then those comments / posts will be correctively downvoted, unless they're high-quality. And you get a bunch more karma from a few posts than a few comments, so do both!
2Eugine_Nier
You can delete them afterwards, you keep karma from deleted posts.
6wedrifid
Let's keep giving the disgruntled script kiddies instructions! That's bound to produce eudaimonia for all!
0RobertLumley
We found one of the sockpuppets, and he had one comment that added nothing that was at like 13 karma. It wasn't downvoted until I was karmassassinated.
3pedanterrific
It's some multiple of your karma, isn't it? At least four, I think- thomblake would know.
1thomblake
Yes, 4x, last I checked.
1wedrifid
I should note that I have never actually been in your shoes. I haven't had any cases where there was unambiguous use of bulk sockpuppets. I've only been downvoted via breadth (up to 50 different comments from my recent history) and usually by only one person at a time (occasionally two or three but probably not two or three that go as far as 50 comments at the same time). That would really mess with your mind if you were in a situation where you could not yet reliably model community preferences (and be personally confident in your model despite immediate evidence.) Take it as a high compliment! Nobody has ever cared enough about me to make half a dozen new accounts. What did you do to deserve that?

It was this thread.

Basically it boiled down to this: I was suggesting that one reason some people might donate to more than one charity is that they're risk averse and want to make sure they're doing some good, instead of trying to help and unluckily choosing an unpredictably bad charity. It was admittedly a pretty pedantic point, but someone apparently didn't like it.

3wedrifid
That seems to be something I would agree with, with an explicit acknowledgement that it relies on a combination of risk aversion and non-consequentialist values.
2RobertLumley
It didn't really help that I made my point very poorly.
2pedanterrific
Presumably also because people you respect are not very likely to express their annoyance through something as silly as karmassassination, right?
1[anonymous]
It's great that you are reading the sequences. You are right it's not as simple as read them -> not noise, not read them -> noise. You say you are up to QM, then I would expect you to not make the sort of mistakes that would come from not having read the core sequences. On the other hand, if you posted something about ethics or AI (I forget where the AI stuff is chronologically), I would expect you to make some common mistakes and be basically noise. The high barrier to entry is a problem for new people joining, but I also want a more strictly informed crowd to talk to sometimes. I think having a lower barrier to entry overall, but at least somewhere where having read stuff is strictly expected would be best, but there are problems with that. Don't leave, keep reading. When you are done you will know what I'm getting at.
3RobertLumley
I think it's close to the end, right before/after the fun theory sequence? I've read some of the later posts just from being linked to them, but I'm not sure. And I quite intentionally avoid talking about things like AI, because I know you're right. I'm not sure that necessarily holds for ethics, since ethics is a much more approachable problem from a layperson's standpoint. I spent a three hour car drive for fun trying to answer the question "How would I go about making an AI" even though I know almost nothing about it. The best I could come up with was having some kind of program that created a sandbox and randomly generated pieces of code that would compile, and pitting them in some kind of bracket contest that would determine intelligence and/or friendliness. Thought I'd make a discussion post about it, but I figured it was too obvious to not have been thought of before.
0David_Gerard
Aside: That sockpuppetry seems to now be an accepted mode of social discourse on LessWrong strikes me as a far greater social problem than people not having read the Sequences. ("Not as bad as" is a fallacy, but that doesn't mean both things aren't bad.) edit: and now I'm going to ask why this rated a downvote. What does the downvoter want less of? edit 2: fair enough, "accepted" is wrong. I meant that it's a thing that observably happens. I also specifically mean socking-up to mass-downvote someone, or to be a dick to people, not roleplay accounts like Clippy (though others find those problematic).
6RobertLumley
I think it was downvoted because sockpuppetry wasn't really "accepted" by LW, it was just one guy.
0David_Gerard
Yeah, "accepted" is connotationally wrong - I mean it's observed, and it's hard to do much about it.
0RobertLumley
To what extent does anyone except EY have moderation control over LW?
6Rain
There are several people capable of modifying or deleting posts and comments.
0Viliam_Bur
Ahem, on my side it was a case of bad pattern-matching. When I realized it, I deleted the reply I was writing here, and also removed the downvote. Perhaps you should have explained further why you think sockpuppetry is bad. My original guess was that you were talking about people having multiple votes from multiple accounts (I was primed by other comments in this thread), and I habitually downvote most comments speaking about karma. But now it seems to me that you are concerned with other aspects, such as anonymity and role-playing. That is only a guess, though; I can't tell from your comment.
5David_Gerard
Yeah, bad explanation on my account. I'm not so concerned with roleplay accounts (e.g. Clippy), as with socking up to mass-downvote. (Getting initial karma is very easy.) Socking-up to be a dick to people also strikes me as problematic. I think I mean "observed" rather than "accepted", which implies a social norm.

My $0.02 (apologies if it's already been said; I haven't read all the comments): wanting to do Internet-based outreach and get new people participating is kind of at odds with wanting to create a specialized advanced-topics forum where we're not constantly rehashing introductory topics. They're both fine goals, but trying to do both at once doesn't work well.

LW as it is currently set up seems better optimized for outreach than for being an advanced-topics forum. At the same time, LW doesn't want to devolve to the least common denominator of the Internet. This creates tension. I'm about .6 confident that tension is intentional.

Of course, nothing stops any of us from creating invitation-only fora to which only the folks whose contributions we enjoy are invited. To be honest, I've always assumed that there exist a variety of private LW-spinoff forums where the folks who have more specialized/advanced groundings get to interact without being bothered by the rest of us.

Somewhat relatedly, one feature I miss from the bad old usenet days is kill files. I suspect that I would value LW more if I had the ability to conceal-by-default comments by certain users here. Concealing sufficiently downvoted comments is similar in principle, but not reliable in practice.

I suspect that I would value LW more if I had the ability to conceal-by-default comments by certain users here.

My LessWrong Power Reader has a feature that allows you to mark authors as liked/disliked, which helps to determine which comments are expanded vs collapsed. Right now the weights are set so that if you've disliked an author, then any comment written by him or her that has 0 points or less, along with any descendants of that comment, will be collapsed by default. Each comment in the collapsed thread still has a visible header with author and points and color-coding to help you determine whether you still want to check it out.
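To make that collapse rule concrete, here is a minimal sketch in Python; the field names and data layout are hypothetical illustrations, not the Power Reader's actual code:

```python
# Hypothetical sketch of the collapse rule described above: a comment is
# collapsed if its author is marked "disliked" and its score is <= 0, and
# every descendant of a collapsed comment is collapsed along with it.

def should_collapse(comment, disliked_authors, ancestor_collapsed=False):
    if ancestor_collapsed:
        return True
    return comment["author"] in disliked_authors and comment["points"] <= 0

def mark_collapsed(comment, disliked_authors, ancestor_collapsed=False):
    collapsed = should_collapse(comment, disliked_authors, ancestor_collapsed)
    comment["collapsed"] = collapsed
    for child in comment.get("children", []):
        mark_collapsed(child, disliked_authors, ancestor_collapsed=collapsed)

# Example: a disliked author's zero-point comment and its replies start collapsed,
# though the header (author, points) would still be visible.
thread = {"author": "annoying_user", "points": 0, "children": [
    {"author": "helpful_user", "points": 5, "children": []},
]}
mark_collapsed(thread, disliked_authors={"annoying_user"})
```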

6TheOtherDave
(blink) You are my new favorite person. I am, admittedly, fickle.
6John_Maxwell
And for discussion and top-level posts, there is already the friends feature: http://lesswrong.com/prefs/friends/ (You can also add someone as a friend from their user page.) There is something that appeals to me about this "roll your own exclusive forum" approach.
2Bugmaster
I am ashamed to say that I had no idea about the Friends feature. Thanks !
8Percent_Carbon
You're suggesting a strategy of tension? Aw. And they didn't invite nyan_sandwich. That's so sad. He or she should get together with other people who haven't been invited to Even Less Wrong and form their own. Then one day they can get together with Even Less Wrong like some NFL/AFL merger, only with more power to save the world. There would have to be a semaphore or something, somewhere. So these secret groups can let each other know they exist without tipping off the newbs.

There's probably no need for the groups to signal each other's existence.

When a new Secret Even Less Wrong is formed, members of previously formed Secret Even Less Wrongs who are still participating in Less Wrong are likely to receive secret invites to the new Secret Even Less Wrong.

Nyan_sandwich might set up his secret Google Group or whatever, invite the people he feels are worthy and willing to form the core of his own Secret Even Less Wrong, and receive in reply an invite to an existing Secret Even Less Wrong.

That might have already happened!

4TheOtherDave
Nothing nearly that Machiavellian, more of a strategy of homeostasis through dynamic equilibrium.
6Armok_GoB
I have tried, and failed, to launch elitist spinoff subcommunities like that multiple times.
2TheOtherDave
To what do you attribute the failures?
2Armok_GoB
Lack of interest, lack of exposure, lack of momentum.
0cousin_it
LW's period of fastest growth was due to Eliezer's posts that were accessible and advanced (and entertaining, etc.) Encouraging other people to do work like that could be more promising than splitting the goals as you propose.
[-]TimS140

Let's be explicit here - your suggestion is that people like me should not be here. I'm a lawyer, and my mathematics education ended at Intro to Statistics and Advanced Theoretical Calculus. I'm interested in the cognitive bias and empiricism stuff (raising the sanity line), not AI. I've read most of the core posts of LW, but haven't gone through most of the sequences in any rigorous way (i.e. read them in order).

I agree that there seem to be a number of low-quality posts in Discussion recently (in particular, Rationally Irrational should not be in Main). But people willing to ignore the local social norms will ignore them however we choose to enforce them. By contrast, I've had several ideas for posts (in Discussion) that I don't post, because I don't think they meet the community's expected quality standard.

Raising the standard for membership in the community will exclude me or people like me. That will improve the quality of technical discussion, at the cost of the "raising the sanity line" mission. That's not what I want.

[-][anonymous]210

Let's be explicit here - your suggestion is that people like me should not be here. I'm a lawyer, and my mathematics education ended at Intro to Statistics and Advanced Theoretical Calculus.

No martyrs allowed.

I don't propose simply disallowing people who haven't read everything from being taken seriously, so long as they don't say anything stupid. It's fine if you haven't read the sequences and don't care about AI or heavy philosophy stuff; I just don't want to read dumb posts about those topics that come from someone not having read the material.

As a matter of fact, I was careful to not propose much of anything. Don't confuse "here's a problem that I would like solved" with "I endorse this stupid solution that you don't like".

4TimS
Fair enough. But I think you threw a wide net over the problem. To the extent you are unhappy that noobs are "spouting garbage that's been discussed to death" and aren't being sufficiently punished for it, you could say that instead. If that's not what you are concerned about, then I have failed to comprehend your message. Exclusivity might solve the problem of noobs rehashing old topics from the beginning (and I certainly agree that needing to tell everyone that beliefs must make predictions about the future gets old very fast). But it would have multiple knock-on effects that you have not even acknowledged. My intuition is that evaporative cooling would be bad for this community, but your sense may differ.
[-]Emile100

I, for one, would like to see discussion of LW topics from the perspective of someone knowledgeable about the history of law; after all law is humanity's main attempt to formalize morality, so I would expect some overlap with FAI.

I don't mind people who haven't read the sequences, as long as they don't start spouting garbage that's already been discussed to death and act all huffy when we tell them so; common failure modes are "Here's an obvious solution to the whole FAI problem!", "Morality all boils down to X", and "You people are a cult, you need to listen to a brave outsider who's willing to go against the herd like me".

8Vladimir_Nesov
If you're interested in concrete feedback, I found your engagement in discussions with hopeless cases a negative contribution, which is a consideration unrelated to the quality of your own contributions (including in those discussions). Basically, a violation of "Don't feed the clueless (just downvote them)" (this post suggests widening the sense of "clueless"), which is one policy that could help with improving the signal/noise ratio. Perhaps this policy should be publicized more.
4Normal_Anomaly
I support not feeding the clueless, but I would like to emphasize that that policy should not bleed into a lack of explaining downvotes of otherwise clueful people. There aren't many things more aggravating than participating in a discussion where most of my comments get upvoted, but one gets downvoted and I never find out what the problem was--or seeing some comment I upvoted be at -2, and not knowing what I'm missing. So I'd like to ask everyone: if you downvote one comment for being wrong, but think the poster isn't hopeless, please explain your downvote. It's the only way to make the person stop being wrong.
3Vladimir_Nesov
Case in point: this discussion currently includes 30 comments, an argument with a certain Clueless, most of whose contributions are downvoted-to-hidden. That discussion shouldn't have taken place, its existence is a Bad Thing. I just went through it and downvoted most of those who participated, except for the Clueless, who was already downvoted Sufficiently. I expect a tradition of discouraging both sides of such discussions would significantly reduce their impact.
6wedrifid
While I usually share a similar sentiment, upon consideration I disagree with your prediction when it comes to the example conversation in question. People explaining things to the Clueless is useful. Both to the person doing the explaining and anyone curious enough to read along. This is conditional on the people in the interaction having the patience to try to decipher the nature of the inferential distance and to break down the ideas into effective explanations of the concepts - including links to relevant resources. (This precludes cases where the conversation degenerates into bickering and excessive expressions of frustration.) Trying to explain what is usually simply assumed - to a listener who is at least willing to communicate in good faith - can be a valuable experience to the one doing the explaining. It can encourage the re-examination of cached thoughts and force the tracing of the ideas back to the reasoning from first principles that caused you to believe them in the first place. There are many conversations where downvoting both sides of a discussion is advisable, yet it isn't conversations with the "Clueless" that are the problem. It is conversations with Trolls, Dickheads and Debaters of Perfect Emptiness that need to go.
5TheOtherDave
Startlingly, Googling "Debaters of Perfect Emptiness" turned up no hits. This is not the best of all possible worlds.
0wedrifid
Think "Lawyer", "Politician" or the bottom line.
8TheOtherDave
Sorry, I wasn't clear. I understood perfectly well what you meant by the phrase and was delighted by it. What I meant to convey was that I was saddened to discover that I lived in a universe where it was not a phrase in common usage, which it most certainly ought to be.
0wedrifid
Oh, gotcha. I'm kind of surprised we don't have a post on it yet. Lax of me!
2TimS
I accept your criticism in the spirit it was intended - but I'm not sure you are stating a local consensus rather than your personal preference. Consider the recent exchange I was involved in. It doesn't appear to me that the more wrong party has been downvoted to oblivion, and he should have been by your rule. (Specifically, the Main post has been downvoted, but not the comment discussion.) Philosophically, I think it is unfortunate that the people who believe that almost all terminal values are socially constructed are the same people who think empiricism is a useless project. I don't agree with the latter point (i.e. I think empiricism is the only true cause of human advancement), but the former point is powerful and has numerous relevant implications for Friendly AI and raising the sanity line generally. So when anti-empiricism social construction people show up, I try to persuade them that empiricism is worthwhile so that their other insights can benefit the community. Whether this persuasion is possible is a distinct question from whether the persuasion is a "good thing." Note that your example is not that pattern, and I haven't responded to Clueless. C is anti-empiricism, but he hasn't shown anything that makes me think that he has anything valuable to contribute to the community - he's 100% confused. So it isn't worth my time to try to persuade him to be less wrong.
-1Vladimir_Nesov
I'm stating an expectation of a policy's effectiveness.
-1gRR
I think Monkeymind is deliberately trying to gather lots of negative karma as fast as possible. Maybe for a bet? If the goal was -100, then writing should stop now (prediction).
-2brilee
I'm not the one who downvoted you, but if I were to hazard a guess, I'd say you were downvoted because when you start off by saying "people like me", it immediately sets off a warning in my head. That warning says that you have not separated personal identity from your judgment process. At the very least, by establishing yourself as a member of "people like me", you signify that you have already given up on trying to be less wrong, and resigned yourself to being more wrong. (I strongly dislike using the terms "less wrong" and "more wrong" to describe elites and peasants of LW, but I'm using them to point out to you the identity you've painted for yourself.) Also, there is /always/ something you can do about a problem. The answer to this particular problem is not, "Noobs will be noobs, let's give up".
7TimS
If by "giving up on trying to be less wrong," you mean I'm never going to be an expert on AI, decision theory, or philosophy of consciousness, then fine. I think that definition is idiosyncratic and unhelpful. Raising the sanity line does not require any of those things.
-2brilee
Don't put up straw men; I never said that to be less wrong, you had to do all those things. "Less wrong" represents an attitude towards the world, not an endpoint.
4TimS
Then I do not understand what you mean when you say I am "giving up on trying to be less wrong".
-2brilee
Could I get an explanation for the downvotes?
-7XiXiDu

I think the barrier of entry is high enough - the signal-to-noise ratio is high, and if you only read high-karma posts and comments you are guaranteed to get substance.

As for forcing people to read the entire Sequences, I'd say rationalwiki's critique is very appropriate (below). I myself have only read ~20% of the Sequences, and by focusing on the core sequences and highlighted articles, have recognized all the ideas/techniques people refer to in the main-page and discussion posts.

The "sequences"[9] are several collated series of Yudkowsky's blog posts, and there are eighteen sequences in all. The indexes for just the four "core sequences"[10] are somewhere north of 10,000 words. Those link to over a hundred and fifty 2,000-3,000-word blog posts. That's about 300,000-450,000 words for those four, and around a million words for the lot.[11] With a few million more words of often-relevant comments. For comparison, the Lord Of The Rings trilogy is 473,000 words.[12] As such, "You should try reading the sequences" is LessWrong for "fuck you."

[-][anonymous]120

You should try reading the other 80% of the sequences.

As far as I can tell (low votes, some in the negative, few comments), the QM sequence is the least read of the sequences, and yet it establishes many of the key points EY later builds on for identity and decision theory. So most LW readers seem not to have read it.

Suggestion: a straw poll on who's read which sequences.

I've seen enough of the QM sequence and know enough QM to see that Eliezer stopped learning quantum mechanics before getting to density matrices. As a result, the conclusions he draws from QM rely on metaphysical assumptions and seem rather arbitrary if one knows more quantum mechanics. In the comments to this post Scott Aaronson tries to explain this to Eliezer without much success.

0Douglas_Knight
Could you be specific about which conclusions seem arbitrarily based on which metaphysical assumptions?
0Eugine_Nier
I just answered a similar question in another thread here. Note: please reply there so we can consolidate discussions.
9Desrtopa
I've read it, but I took away less from it than any of the other sequences. Reading any of the other sequences, I can agree or disagree with the conclusion and articulate why. With the QM sequence, my response is more along the lines of "I can't treat this as very strong evidence of anything because I don't think I'm qualified to tell whether it's correct or not." Eliezer's not a physicist either, although his level of fluency is above mine, and while I consider him a very formidable rationalist as humans go, I'm not sure he really knows enough to draw the conclusions he does with such confidence. I've seen the QM sequence endorsed by at least one person who is a theoretical physicist, but on the other hand, I've read Mitchell Porter's criticisms of Eliezer's interpretation and they sound comparably plausible given my level of knowledge, so I'm not left thinking I have much more grounds to favor any particular quantum interpretation than when I started.
9[anonymous]
A poll would be good. I've read the QM sequence and it really is one of the most important sequences. When I suggest this at meetups and such, people seem to be under the impression that it's just Eliezer going off topic for a while and totally optional. This is not the case; as you said, the QM sequence is used to develop a huge number of later things.

The negative comments from physicists and physics students are sort of a worry (to me as someone who got up to the start of studying this stuff in second-year engineering physics and can't remember one dot of it). Perhaps it could do with a robustified rewrite, if anyone sufficiently knowledgeable can be bothered.

6Paul Crowley
The negative comments I've heard give off a strong scent of being highly motivated - in one case an incredible amount of bark bark bark about how awful they were, and when I pressed for details, a pretty pathetic bite. I'd like to get a physicist who didn't seem motivated to have an opinion one way or the other to comment. It would need to be someone who bought MWI - if the sole problem with them is that they endorse MWI then that's at least academically respectable, and if an expert reading them doesn't buy MWI then they'll be motivated to find problems in a way that won't be as informative as we'd like.

The Quantum Physics Sequence is unusual in that normally, if someone writes 100,000(?) words explaining quantum mechanics for a general audience, they genuinely know the subject first: they have a physics degree, they have had an independent reason to perform a few quantum-mechanical calculations, something like that. It seems to me that Eliezer first got his ideas about quantum mechanics from Penrose's Emperor's New Mind, and then amended his views by adopting many-worlds, which was probably favored among people on the Extropians mailing list in the late 1990s. This would have been supplemented by some incidental study of textbooks, Feynman lectures, expository web pages... but nonetheless, that appears to be the extent of it. The progression from Penrose to Everett would explain why he presents the main interpretive choice as between wavefunction realism with objective collapse, and wavefunction realism with no collapse. His prose is qualitative just about everywhere, indicating that he has studied quantum mechanics just enough to satisfy himself that he has obtained a conceptual understanding, but not to the point of quantitative competence. And then he has undertaken to convey ... (read more)

Excellent idea - done. Thank you!

2Rain
Result from Ron Maimon's review of the QM sequence: (more at the link from ciphergoth's post)
2XiXiDu
You could also ask for an independent evaluation of AI risks here.
8Paul Crowley
That seems less valuable. The QM sequences are largely there to set out what is supposed to be an existing, widespread understanding of QM. No such understanding exists for AI risk.
0whowhowho
So why isn't that pointed out anywhere? EY seems oddly oblivious to his potential -- indeed likely -- limitations as an autodidact.
0Alsadius
This was a big concern I had reading it. Much of it made sense to me, as someone who has had formal education in basic quantum, and some of it felt very illuminating (the waveform-addition stuff in particular was taught far better than my quantum prof ever managed), but I'm always skeptical of people claiming Truth of a controversy in a highly technical field with no actual training in that field. I've always preferred many-worlds, but I would never claim it is the sole truth in the sort of way that EY did.
0XiXiDu
What reason do I have to believe that this risk isn't even stronger when it comes to AI?
1David_Gerard
It's not clear how to compare said risk - "quantum" is far more widely abused - but the creationist AI researcher suggests AI may be severely prone to the problem. Particularly as humans are predisposed to think of minds as ontologically basic, therefore pretty simple, therefore something they can have a meaningful opinion on, regardless of the evidence to the contrary.
-3Alsadius
What, you mean the part where we're discussing a field that's still highly theoretical, with no actual empirical evidence whatsoever, and then determining that it is definitely the biggest threat to humanity imaginable and that anyone who doesn't acknowledge that is a fool?
4Paul Crowley
This is one of the classic straw men, adaptable to any purpose.
-3Alsadius
Mockery is generally rather adaptable, yes.
0David_Gerard
I suspect a lot of it is "oh dear, someone saying 'quantum'" fatigue. But that sounds like a plausible approach.
8amit
Yes. No, as far as I can tell.
-1David_Gerard
Probably not, then. (The decision theory posts were where I finally hit a tl;dr wall.)
6wedrifid
Something I recall noticing at the time I read said posts is that some of the groundwork you mention didn't necessarily need to be in with the QM. Sure, there are a few points that you can make only by reference to QM but many of the points are not specifically dependent on that part of physics. (ie. Modularization fail!)
5David_Gerard
That there are no individual particles is a point of philosophical import that would be difficult to make without bludgeoning it home, since the contrary possibility is such a strong implicit philosophical assumption, and it may be surprising that physics has actually delivered the smackdown. But yeah, even that could be moved elsewhere with effort. But then again, the sequences are indeed being revised and distilled into publishable rather than blog form ...
6wedrifid
Yes, that's the one thing that really relies on it. And the physics smackdown was surprising to me when I read it. The ideal would seem to be having the QM sequence and then later an identity sequence wherein one post does an "import QM;". Of course the whole formal 'sequence' notion is something that was invented years later. These are, after all, just a stream of blog posts that some guy spat out extremely rapidly. At that time they were interlinked as something of a DAG, with a bit of clustering involved for some of the bigger subjects. I actually find the whole 'sequence' focus kind of annoying. In fact I've never read the sequences. What I have read a couple of times is the entire list of blog posts for several years. This includes some of my favorite posts, which are standalone and don't even get a listing on the 'sequences' page.

Yes! I try to get people to read the "sequences" in ebook form, where they are presented in simple chronological order. And the title is "Eliezer Yudkowsky, blog posts 2006-2010".

7[anonymous]
Totally, there are whole sequences of really good posts that get no mention in the wiki.

Working on it.

In all seriousness though, I often find the Sequences pretty cumbersome and roundabout. Eliezer assumes a pretty large inferential gap for each new concept, and a lot of the time the main point of an article would only need a sentence or two for it to click for me. Obviously this makes it more accessible for concepts that people are unfamiliar with, but right now it's a turn-off and definitely is a body of work that will be greatly helped by being compressed into a book.

1Alsadius
Fuck you.
6David_Gerard
Downvoted for linking to that site. ... what?
-2Alsadius
It's both funny and basically accurate. I'd say it's a perfectly good link.
7[anonymous]
David is making a joke, because he wrote most of the content of that article.

Tetronian started the article, so it's his fault actually, even if he's pretty much moved here.

I have noted before that taking something seriously because it pays attention to you is not in fact a good idea. Every second that LW pays a blind bit of notice to RW is a second wasted.

See also this comment on the effects of lack of outside world feedback, and a comparison to Wikipedia (which basically didn't get any outside attention for four or five years and is now part of the infrastructure of society, at which I still boggle).

And LW may or may not be pleased that even on RW, when someone fails logic really badly the response is often couched in LW terms. So, memetic infections ahoy! Think of RW as part of the Unpleasable Fanbase.

2thomblake
Memetic hazard warning!
3David_Gerard
ITYM superstimulus ;-)
0wedrifid
Not really. There is content there that is not completely useless. Especially if the 'seconds wasted' come out of time that would have otherwise been spent on lesswrong itself.
0Alsadius
Ahhhh. Well, that flips my downvote.
0wedrifid
Oh, that explains a lot!
4drethelin
It's not a barrier to entry if no one actually HAS to surmount it.
-1Alsadius
Yeah, but if we make a policy of abusing and hounding out anyone who hasn't, it's not much better.
-3faul_sname
Kahneman and Tversky's Thinking Fast and Slow is basically the sequences + some statistics - AI and metaethics in (shorter) book form (well actually, the other way around, as the book was there first). So perhaps we should say "read the sequences, or that book, or otherwise learn the common mistakes".
6Paul Crowley
Strongly disagree; I think there is fairly limited overlap between the two.
5endoself
Your comment describes (or at least intends to describe as per the people disagreeing with you) Judgment under Uncertainty: Heuristics and Biases, not Thinking Fast and Slow.
2wedrifid
Can someone verify this for me? I've heard good things about the authors but my prior for that book containing everything in the (or most of the) sequences is rather low.

I disagree with the grandparent. I read the book a while ago having already read most of the Sequences -- I think that the book gives a fairly good overview of heuristics and biases but doesn't do as good a job at turning the information into helpful intuitions. I think that the Sequences cover most (but not quite all) of what's covered in the book, while the reverse is not true.

Lukeprog reviewed the book here: his estimate is that it contains about 30% of the Core Sequences.

2David_Gerard
The reasoning for downvote on this suggestion is not clear. What does the downvoter actually want less of?
4Dorikka
As the suggestion stands, it's at -2. I'm not downvoting it because I don't think it's so bad as to be invisible, but saying that the book is a good substitute for the sequences seems inaccurate enough to downvote. My other comment here contains (slightly) more of an explanation.
[-]brilee130

From Shirky's essay on online groups: "The Wikipedia right now, the group collaborated online encyclopedia is the most interesting conversational artifact I know of, where product is a result of process. Rather than 'We're specifically going to get together and create this presentation' it's just 'What's left is a record of what we said.'"

When somebody goes to a wiki, they are not going there to discuss elementary questions that have already been answered; they are going there to read the results of that discussion. Isn't this basically what the OP wants?

Why aren't we using the wiki more? We have two modes of discussion here: discussion board, and wiki. The wiki serves more as an archive of the posts that make it to main-page level, meaning that all the hard work of the commenters in the discussion boards is often lost to the winds of time. (Yes, some people have exceptionally good memory and link back to them. But this is obviously not sustainable.) If somebody has a visionary idea on how to lubricate the process of collating high-quality comments and incorporating them into a wiki-like entity, then I suspect our problem could be solved.

[-][anonymous]110

Why aren't we using the wiki more?

This is a really good question.

I don't use the wiki because my LW account is not valid there. You need to make a separate account for the wiki.

That seems like an utterly stupid reason in retrospect, but I imagine that's a big reason why no one is wikiing.

0eurg
It is explicitly mentioned (somewhere) that the wiki is only for referencing ideas and terms that have been used/discussed/explained in LW posts. So, yes, inconvenience, but not solely.
[-]XiXiDu120

The best way to become more exclusive, without giving the impression of a cult or banning people, is by raising your standards and being more technical. This is exemplified by all the math communities like the n-Category Café or various computer science blogs (or, most of all, the technical posts on lesswrong).

[-]Rain120

Stop using that word.

7wedrifid
In fact, edit your post now please Nyan. Apart from that it's an excellent point. "Community", "website" or just about anything else. "You're a ...." is already used as a fully general counterargument. Don't encourage it!
[-][anonymous]170

I want to keep the use of the word, but to hide it from google I have replaced it with its rot13: phyg

And now we can all relax and have a truly uninhibited debate about whether LW is a phyg. Who would have guessed that rot13 has SEO applications?
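(For anyone unfamiliar with rot13: it shifts each letter 13 places along the alphabet, so applying it twice gives back the original, and encoding and decoding are the same operation. A minimal Python illustration, deliberately avoiding printing the word itself:)

```python
import codecs

# rot13 is its own inverse: encoding twice round-trips to the original text.
assert codecs.encode(codecs.encode("phyg", "rot13"), "rot13") == "phyg"
print(codecs.encode("lesswrong", "rot13"))  # -> "yrffjebat"
```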

4radical_negative_one
Just to be clear, we're all reading it as-is and pronouncing it like "fig", right? Because that's how I read it in my head.
2pedanterrific
I hope so, or this would make even less sense than it should.
1Alicorn
I've been pronouncing it to rhyme with the first syllable in "tiger".
0[anonymous]
No; stop. This 'fix' is ineffective and arguably worse.
-1David_Gerard
The C-word is still there in the post URL!
6David_Gerard
That's much better! (I hadn't realised the post titles were redundant in Reddit code ...)
6[anonymous]
Upvoted for agreeing and for reminding me to re-read a certain part of the sequences. I loathe fully general counterarguments, especially that one. That being said, would it be appropriate for you to edit your own comment to remove said word? I don't know (to any significant degree) how Google's search algorithms work, but I suspect that having that word in your comment also negatively affects the suggested searches.
7wedrifid
Oh, yeah, done.
6[anonymous]
You mean the one that shouldn't be associated with us in google's search results? I'll think about it.
6pedanterrific
Suggestion: "Our Ult-Cay Is Not Exclusive Enough"

I feel pain just looking at that sentence.

I sure as hell hope self-censorship or encryption for the sake of google results isn't going to become the expected norm here. It's embarrassingly silly, and, paradoxically, likely to provide ammunition for anyone who might want to say that we are this thing-that-apparently-must-not-be-named. Wouldn't be overly surprised if these guys ended up mocking it.

The original title of the post had a nice impact; the point of the rhetorical technique was to boldly use a negatively connotated word. Now it looks weird and anything but bold.

Also, reading the same rot13-ed word multiple times caused me to learn a small portion of rot13 despite my not wanting to. Annoying.

3pedanterrific
Yes, well... I don't give a phyg.
3David_Gerard
Your comment would have been ridiculously enhanced by this link.
-2CronoDAS
What word?
[-][anonymous]100

The only word that shouldn't be used for reasons that extend to not even identifying it. (google makes no use/mention distinction).

4Nisan
"In a riddle whose answer is chess, what is the only prohibited word?"
-24TwistingFingers

Reading the comments, it feels like the biggest concern is not chasing away the initiates to our phyg. Perhaps tiered sections, where demonstrable knowledge in the last section gains you access to higher levels of signal to noise ratio? Certainly would make our phyg resemble another well known phyg.

[-][anonymous]110

Maybe we should charge thousands of dollars for access to the sequences as well? And hire some lawyers...

More seriously, I wonder what people's reaction would be to a newbie section that wouldn't be as harsh as the now-much-harsher normal discussion. This seems to go over well on the rest of the internet.

Sort of like raising the price and then having a sale...

5Bugmaster
This sounds like a good idea, but I think it might be too difficult to implement in practice, as determined users will bend their efforts toward guessing the password in order to gain access to the coveted Inner Circle. This isn't a problem for that other phyg, because their access is gated by money, not understanding.
2thescoundrel
I think the freemasons have this one solved for us: instead of passwords, we use interview systems, where people of the level above have to agree that you are ready before you are invited to the next level. Likewise, we make it known that helpful input on the lower levels is one of the prerequisites to gaining a higher level - we incentivise constructive input on the lower tiers, and effectively gate access to the higher tiers.
9Bugmaster
Why does this solution need to be so global? Why don't we simply allow users to blacklist/whitelist other users as they see fit, on an individual basis? This way, if someone wants to form an ultra-elite cabal, they can do that without disturbing the rest of the site for anyone else.
4Alsadius
So, who is going to sit on the interview committee to control access to a webforum? You're asking more of the community than it will ever give you, because what you advocate is an absurd waste of time for any actual person.
4hesperidia
The SCP Foundation creepypasta wiki used to use a very complex application system, designed to weed out those with insufficient writing skill. It turned away a fairly significant number of potential writers due to its sheer size. It was also maintained through Google Docs by one dedicated admin for several years. I'm not sure anyone here would give up their free time to maintaining bureaucracy rather than winning, and it seems counterproductive to me, but it's theoretically possible that it can be kept to a part-time job.
0thescoundrel
That's possible - it may be that the cost of doing this effectively is not worth the gain, or that there is a less intensive way to solve this issue. However, I think there could be benefits to a tiered structure - perhaps even have the levels be read-only for those not there yet - so everyone can read the high signal-to-noise content, but we still make sure to protect it. I do know there is much evidence to suggest that prestige among even small groups is enough to motivate people to do things that normally would be considered an absurd waste of time.
2Percent_Carbon
You're not proposing a different system, you're just proposing additional qualifiers.
1TrE
Sounds like a good idea; it would give many people an incentive to read and understand the sequences and could raise the quality level in the higher 'levels' considerably. There are also downsides: we might look more phyg-ish to newbies, discussion quality at the lower levels could fall rapidly (honestly, who wants to debate 'free will' with newbies when they could be having discussions about more interesting and challenging topics?), and, well, if an intelligent and well-informed outsider has something important to say about a topic, they won't be able to. For this to be implemented, we'd need a user rights system with the respective discussion sections as well as a way to determine the 'level' of members. Quizzes with questions randomly drawn from a large pool, with a limited number of tries per time period, could do well, especially if you don't give any feedback about the scoring other than 'You leveled up!' and 'Your score wasn't good enough; re-read these sequences:__ and try again later.' And, of course, we'd need the consent of many members and our phyg-leaders, as well as someone to actually implement it.
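A minimal sketch of that quiz mechanism; the question pool, thresholds, and time window are hypothetical placeholders:

```python
import random
import time

QUESTION_POOL = {"Q1": "A", "Q2": "B", "Q3": "C", "Q4": "A", "Q5": "D"}  # placeholder content
ATTEMPT_WINDOW = 7 * 24 * 3600   # one attempt per week (hypothetical)
last_attempt = {}                # user -> timestamp of their last attempt

def take_quiz(user, answer_fn, n_questions=3, pass_fraction=0.8):
    """Draw random questions, rate-limit attempts, and report only pass/fail."""
    now = time.time()
    if now - last_attempt.get(user, 0) < ATTEMPT_WINDOW:
        return "Try again later."
    last_attempt[user] = now
    questions = random.sample(list(QUESTION_POOL), n_questions)
    correct = sum(answer_fn(q) == QUESTION_POOL[q] for q in questions)
    if correct / n_questions >= pass_fraction:
        return "You leveled up!"
    return "Your score wasn't good enough; re-read the relevant sequences and try again later."
```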
0buybuydandavis
Instead of setting up gatekeepers, why not let people sort themselves first? No one wants to be a bozo. We have different interests and aptitudes. Set up separate forums to talk about the major sequences, so there's some subset of the sequences you could read to get started. I'd suggest too that as wonderful as EY is, he is not the fount of all wisdom. Instead of focusing on getting people to shut up, how about focusing on getting people to add good ideas that aren't already here?
0Viliam_Bur
Depending on other factors, it could also resemble a school system.
0[anonymous]
Rationology? Edit: I apologize.
[-]tut70

What you want is an exclusive club. Not a cult or phyg or whatever.

4gwern
There's only one letter's difference between 'club' and 'phyg'!
2tut
And there is only one letter's difference between paid and pain. The meaning of an English word is generally not determined by the letters it contains.

I personally come to Less Wrong specifically for the debates (well, that, and HP:MoR Wild Mass Guessing). Therefore, raising the barrier to entry would be exactly the opposite of what I want, since it would eliminate many fresh voices, and limit the conversation to those who'd already read all of the sequences (a category that would exclude myself, now that I think about it), and agree with everything said therein. You can quibble about whether such a community would constitute a "phyg" or not, but it definitely wouldn't be a place where any prod... (read more)

[-][anonymous]140

I don't see why having the debate at a higher level of knowledge would be a bad thing. Just because everyone is familiar with a large bit of useful common knowledge doesn't mean no one disagrees with it, or that there is nothing left to talk about. There are some LW people who have read everything and bring up interesting critiques.

Imagine watching a debate between some uneducated folks about whether a tree falling in a forest makes a sound or not. Not very interesting. Having read the sequences, it's the same sort of boring to hear someone explain for the millionth time that "no, technological progress or happiness is not a sufficient goal to produce a valuable future, and yes, an AI coded with that goal would kill us all, and it would suck".

Not being an ultra-exclusive "phyg" is one of such strategies.

The point of my post was that that is not an acceptable solution.

-1Bugmaster
Firstly, a large proportion of the Sequences do not constitute "knowledge", but opinion. It's well-reasoned, well-presented opinion, but opinion nonetheless -- which is great, IMO, because it gives us something to debate about. And, of course, we could still talk about things that aren't in the sequences, that's fun too. Secondly: No, it's not very interesting to you and me, but to the "uneducated folks" whom you dismiss so readily, it might be interesting indeed. Ignorance is not the same as stupidity, and, unlike stupidity, it's easily correctable. However, kicking people out for being ignorant does not facilitate such correction. What's your solution, then? You say, To me, "more exclusive LW" sounds exactly like the kind of solution that doesn't work, especially coupled with "enforcing a little more strongly that people read the sequences" (in some unspecified yet vaguely menacing way).
2Zetetic
Whether the sequences constitute knowledge is beside the point - they constitute a baseline for debate. People should be familiar with at least some previously stated well-reasoned, well-presented opinions before they try to debate a topic, especially when we have people going through the trouble of maintaining a wiki that catalogs relevant ideas and opinions that have already been expressed here. If people aren't willing or able to pick up the basic opinions already out there, they will almost never be able to bring anything of value to the conversation. Especially on topics discussed here that lack sufficient public exposure to ensure that at least the worst ideas have been weeded out of the minds of most reasonably intelligent people. I've participated in a lot of forums (mostly freethought/rationality forums), and by far the most common cause of poor discussion quality among all of them was a lack of basic familiarity with the topic and the rehashing of tired, old, wrong arguments that pop into nearly everyone's head (at least for a moment) upon considering a topic for the first time. This community is much better than any other I've been a part of in this respect, but I have noticed a slow decline in this department. All of that said, I'm not sure if LW is really the place for heavily moderated, high-level technical discussions. It isn't sl4, and outreach and community building really outweigh the more technical topics, and (at least as long as I've been here) this has steadily become more and more the case. However, I would really like to see the sort of site the OP describes (something more like sl4) as a sister site (or if one already exists I'd like a link). The more technical discussions and posts, when they are done well, are by far what I like most about LW.
3Bugmaster
I agree with pretty much everything you said (except for the sl4 stuff, because I haven't been a part of that community and thus have no opinion about it one way or another). However, I do believe that LW can be the place for both types of discussions -- outreach as well as technical. I'm not proposing that we set the barrier to entry at zero; I merely think that the guideline, "you must have read and understood all of the Sequences before posting anything" sets the barrier too high. I also think that we should be tolerant of people who disagree with some of the Sequences; they are just blog posts, not holy gospels. But it's possible that I'm biased in this regard, since I myself do not agree with everything Eliezer says in those posts.
4Zetetic
Disagreement is perfectly fine by me. I don't agree with the entirety of the sequences either. It's disagreement without looking at the arguments first that bothers me.
1[anonymous]
What is the difference between knowledge and opinion? Are the points in the sequences true or not? Read map and territory, and understand the way of Bayes. The thing is, there are other places on the internet where you can talk to people who have not read the sequences. I want somewhere where I can talk to people who have read the LW material, so that I can have a worthwhile discussion without getting bogged down by having to explain that there's no qualitative difference between opinion and fact. I don't have any really good ideas about how we might be able to have an enlightened discussion and still be friendly to newcomers. Identifying a problem and identifying myself among people who don't want a particular type of solution (relaxing LW's phygish standards) doesn't mean I support any particular straw-solution.
4Bugmaster
Some proportion of them (between 0 and 100%) are true, others are false or neither. Not being omniscient, I can't tell you which ones are which; I can only tell you which ones I believe are likely to be true with some probability. The proportion of those is far smaller than 100%, IMO. See, it's exactly this kind of ponderous verbiage that leads to the necessity for rot13-ing certain words. I believe that there is a significant difference between opinion and fact, though arguably not a qualitative one. For example, "rocks tend to fall down" is a fact, but "the Singularity is imminent" is an opinion -- in my opinion -- and so is "we should kick out anyone who hadn't read the entirety of the Sequences". When you said "we should make LW more exclusive", what did you mean, then? In any case, I do have a solution for you: why don't you just code up a Greasemonkey scriptlet (or something similar) to hide the comments of anyone with less than, say, 5000 karma? This way you can browse the site in peace, without getting distracted by our pedestrian mutterings. Better yet, you could have your scriptlet simply blacklist everyone by default, except for certain specific usernames whom you personally approve of. Then you can create your own "phyg" and make it as exclusive as you want.
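A real scriptlet would of course be JavaScript running against the page; purely to illustrate the rule it would apply, here is a hypothetical sketch in Python (usernames, karma numbers, and field names are made up):

```python
KARMA_THRESHOLD = 5000              # hypothetical cutoff
WHITELIST = {"user_a", "user_b"}    # explicit whitelist mode: approve by name

def visible(comment, mode="threshold"):
    """Decide whether to show a comment under either filtering mode."""
    if mode == "threshold":
        return comment["author_karma"] >= KARMA_THRESHOLD
    # "whitelist" mode: hide everyone except explicitly approved usernames
    return comment["author"] in WHITELIST

comments = [
    {"author": "user_a", "author_karma": 12000, "text": "..."},
    {"author": "newbie", "author_karma": 150, "text": "..."},
]
print([c["author"] for c in comments if visible(c)])               # karma filter
print([c["author"] for c in comments if visible(c, "whitelist")])  # whitelist only
```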
6Viliam_Bur
This would disrupt the flow of discussion. I tried this on one site. The script did hide the offending comments from my eyes, but other people still saw those comments and responded to them. So I did not have to read bad comments, but I had to read the reactions to them. I could have improved my script to filter out those reactions too, but... Humans react to the environment. We cannot consciously decide to filter out something and refuse to be influenced. If I come to a discussion with 9 stupid comments and 1 smart comment, my reaction will be different than if there was only the 1 smart comment. I can't filter those 9 comments out. Reading them wastes my time and changes my emotions. So even if you filter those 9 comments out by software but I don't, the discussion between the two of us will be indirectly influenced by those comments. Most probably, if I see 9 stupid comments, I will stop reading the article, so I will skip the 1 smart one too. People have evolved some communication strategies that don't work on the internet, because a necessary infrastructure is missing. If the two of us were speaking in the real world, and a third person tried to join our discussion but I considered them rather stupid, you would see it in my body language even if I didn't tell the person openly to buzz off. But when we speak online, and I ignore someone's comments, you don't see it; this communication channel is missing. Karma does something like this; it just represents the collective emotion instead of an individual one. (Perhaps a better approximation would be if the software allowed you to select people you consider smart, and then you would see karma based only on their clicks.) Creating a good virtual discussion is difficult, because our instincts are based on different assumptions.
0Bugmaster
I see, so you felt that the comments of "smart" (as per your filtering criteria) people were still irrevocably tainted by the fact that they were replying to "stupid" (as per your filtering criteria) people. In this case, I think you could build upon my other solution. You could blacklist everyone by default, then personally contact individual "smart" people and invite them to your darknet. The price of admission is to blacklist everyone but yourself and the people you personally approve of. When someone breaks this policy, you could just blacklist them again. Slashdot has something like this (though not exactly). I think it's a neat idea. If you implemented this, I'd even be interested in trying it out, provided that I could see the two scores (smart-only as well as all-inclusive) side-by-side. And everyone's assumptions are different, which is why I'm very much against global solutions such as "ban everyone who hadn't read the Sequences", or something to that extent. Personally, though, I would prefer to err on the side of experiencing negative emotions now and then. I do not want to fall into a death spiral that leads me to forming a cabal of people where everyone agrees with each other, and we spend all day talking about how awesome we are -- which is what nearly always happens when people decide to shut out dissenting voices. That's just my personal choice, though; anyone else should be able to form whichever cabal they desire, based on their own preferences.
0Viliam_Bur
The first step (blacklisting everyone except me and people I approve of) is easy. Expanding the network depends on other people joining the same system, or at least being willing to send me a list of people they approve of. I think that most people use default settings, so this system would work best on a site where this was the default setting. It would be interesting to find a good algorithm, which would have the following data as input: each user can put other users on their whitelist or blacklist, and can upvote or downvote comments by other users. It could somehow calculate the similarity of opinions and then show everyone the content they want to see (extrapolated volition). (The explicit blacklists exist only to override the recommendations of the algorithm. By default, an unknown and unconnected person is invisible, except for their comments upvoted by my friends.) If the site is visible to anonymous readers, a global karma is necessary, though it could be calculated somehow from the customized karmas. I also wouldn't like to be shielded from disagreeing opinions. I want to be shielded from stupidity and offensiveness, to protect my emotions. Also, because my time is limited, I want to be shielded from noise. No algorithm will be perfect in filtering out the noise and not filtering out the disagreement. I think a reasonable approach is to calculate the probability of "reasonable disagreement" based on the previous comments. This is something that we approximately do in real life -- based on our previous experience we take some people's opinion more seriously, so when someone disagrees with us, we react differently based on who it is. If I agree with someone about many things, then I will consider their opinion more seriously when we disagree. However, if someone disagrees about almost everything, I simply consider them crazy.
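One very simple version of that similarity weighting, sketched under hypothetical data structures (a production version would need something smarter): weight each voter by how often they have agreed with the viewer on comments both voted on, then score a comment as the weighted sum of its votes.

```python
# Hypothetical sketch: personalize a comment's score by weighting each voter
# according to how often they agreed with the viewer on commonly voted comments.
# votes maps user -> {comment_id: +1 or -1}.

def similarity(viewer_votes, other_votes):
    shared = set(viewer_votes) & set(other_votes)
    if not shared:
        return 0.0
    agreements = sum(viewer_votes[c] == other_votes[c] for c in shared)
    return agreements / len(shared)   # 1.0 = always agree, 0.0 = never agree

def personalized_score(comment_id, viewer, votes):
    score = 0.0
    for user, user_votes in votes.items():
        if user == viewer or comment_id not in user_votes:
            continue
        score += similarity(votes[viewer], user_votes) * user_votes[comment_id]
    return score

votes = {
    "me":    {"c1": +1, "c2": -1},
    "alice": {"c1": +1, "c2": -1, "c3": +1},   # agrees with "me" -> weight 1.0
    "bob":   {"c1": -1, "c2": +1, "c3": +1},   # disagrees -> weight 0.0
}
print(personalized_score("c3", "me", votes))   # 1.0: only alice's vote counts
```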
0Bugmaster
I think this is a minor convenience at best; when you choose to form your darknet, you could simply inform the other candidates of your plan: via email, PM, or some other out-of-band channel. This sounds pretty similar to Google's PageRank, only for comments instead of pages. Should be doable. Yes, of course. The goal is not to turn the entire site exclusively into your darknet, but to allow you to run your darknet in parallel with the normal site as seen by everyone else. Agreed; if you could figure out a perfect filtering algorithm, you would end up implementing an Oracle-grade AI, and then we'd have a whole lot of other problems to worry about :-) That said, I personally tend to distrust my emotions. I'd rather take an emotional hit, than risk missing some important point just because it makes me feel bad; thus, I wouldn't want to join a darknet such as yours. That's just me though, your experience is probably different.
6[anonymous]
I mean that I'd like to be able to participate in discussion with better (possibly phygish) standards. Lesswrong has a lot of potential and I don't think we are doing as well as we could on the quality-of-discussion front. And I think making Lesswrong purely more open and welcoming without doing something to keep a high level of quality somewhere is a bad idea. And I'm not afraid of being a phyg. That's all, nothing revolutionary.
2Bugmaster
It seems like my proposed solution would work for you, then. With it, you can ignore anyone who isn't enlightened enough, while keeping the site itself as welcoming and newbie-friendly as it currently is. I'm not afraid of it either, I just don't think that power-sliding down a death spiral is a good idea. I don't need people to tell me how awesome I am, I want them to show me how wrong I am so that I can update my beliefs.
4wedrifid
Specifically 'the way of'. Would you have the same objection with 'and understand how bayesian updating works'? (Objection to presumptuousness aside.)
8Bugmaster
Probably. The same sentiment could be expressed as something like this: This phrasing is still a bit condescending, but a) it gives an actual link for me to read and educate my ignorant self, and b) it makes the speaker sound merely like a stuck-up long-timer, instead of a creepy phyg-ist.
-2wedrifid
Educating people is like that! What I would have said about the phrasing is that it is wrong.
-3Bugmaster
Merely telling people that they aren't worthy is not very educational; it's much better to tell them why you think they aren't worthy, which is where the links come in. Sure, but I have no problem with people being wrong, that's what updating is for :-)
1wedrifid
Huh? This was your example, one you advocated and one that includes a link. I essentially agreed with one of your points - your retort seems odd. Huh again? You seem to have missed a level of abstraction.
-10Percent_Carbon

I personally come to Less Wrong specifically for the debates (well, that, and HP:MoR Wild Mass Guessing). Therefore, raising the barrier to entry would be exactly the opposite of what I want, since it would eliminate many fresh voices, and limit the conversation to those who'd already read all of the sequences (a category that would exclude myself, now that I think about it), and agree with everything said therein. You can quibble about whether such a community would constitute a "phyg" or not, but it definitely wouldn't be a place where any productive debate could occur. People who wholeheartedly agree with each other tend not to debate.

A 'debate club' mindset is one of the things I would try to avoid. Debates emerge when there are new ideas to be expressed and new outlooks or bodies of knowledge to consider - and the supply of such is practically endless. You don't go around trying to artificially encourage an environment of ignorance just so some people are sufficiently uninformed that they will try to argue trivial matters. That's both counterproductive and distasteful.

I would not be at all disappointed if a side effect of maintaining high standards of communication causes us to lose some participants who "come to Less Wrong specifically for the debates". Frankly, that would be among the best things we could hope for. That sort of mindset is outright toxic to conversations and often similarly deleterious to the social atmosphere.

0Bugmaster
I wasn't suggesting we do that, FWIW. I think there's a difference between flame wars and informed debate. I'm in favor of the latter, not the former. On the other hand, I'm not a big fan of communities where everyone agrees with everyone else. I acknowledge that they can be useful as support groups, but I don't think that LW is a support group, nor should it become one. Rationality is all about changing one's beliefs, after all...
-2Alsadius
Debate is a tool for achieving truth. Why is that such a terrible thing?
-1wedrifid
I didn't say it was. Please read again.
0Alsadius
You said that we should avoid debate because it's bad for the social atmosphere. I'm not seeing much difference.
-1wedrifid
No I didn't. I said we should avoid creating a deliberate environment of ignorance just so that debate is artificially supported. To the extent that debate is a means to an end it is distinctly counterproductive to deliberately sabotage that same end so that more debate is forced. See also: Lost purpose.
0Alsadius
Upon rereading, I think I see what you're getting at, but you seem to be arguing from the principle that creating ignorance is the preferred way to create debate. That seems, ahem, non-obvious to me. There's no shortage of topics where informed debate is possible, and seeking to debate those does not require (and, in fact, generally works against) promoting ignorance. Coming here for debate does not imply wanting to watch an intellectual cripplefight.
1wedrifid
I seem to be coming from a position of making a direct reply to Bugmaster with the specific paragraph I was replying to quoted. That should have made the meaning more obvious to you. Which is what I myself advocated with:
2MarkusRamikin
(italics mine) How did you arrive at that idea? The point isn't to agree with the stuff, but to be familiar with it, with standard arguments that the Sequences establish. If you tried to talk advanced mathematics/philosophy/whatever with people, and didn't know the necessary math/philosophy/whatever, people would tell you some equivalent of "read the sequences". This is not the rest of the Internet, where everyone is entitled to their opinion and the result is that discussions never get anywhere (in reality, nobody is really interested in anyone's mere opinion, and the result is something like this). If you're posting uninformedly and rehashing old stuff or committing errors the core sequences teach you not to commit, you're producing noise. This is what I love about LW. There is an actual signal to noise ratio, rather than a sea of mere opinion.
3Bugmaster
nyan_sandwich said that the Sequences contain not merely arguments, but knowledge. This implies a rather high level of agreement with the material. I agree, but: I am perfectly fine with that, as long as they don't just say, "read all of the Sequences and then report back when you're ready", but rather, "your arguments have already been discussed in depth in the following sequence: $url". The first sentence merely dismisses the reader; the second one provides useful material.
6David_Gerard
Yesss ... the sequences are great stuff, but they do not reach the level of constituting settled science. They are quite definitely settled tropes, but that's a different level of thing. Expecting familiarity with them may (or may not) be reasonable; expecting people to treat them as knowledge is rather another thing.
0MarkusRamikin
Hm, that's a little tricky. I happen to agree that they contain much knowledge - they aren't pure knowledge, there is opinion there, but there is a considerable body of insight and technique useful to a rationalist (that is, useful if you want to be good at arriving at true beliefs or making decisions that achieve your goals). Enough that it makes sense to want debate to continue from that level, rather than from scratch. However, let's keep our eyes on the ball - that being the true expectation around here. The expectation is emphatically NOT that people should agree with the material in the Sequences. Merely that we don't have to re-hash the basics. Besides, if you manage to read a sequence, understand it, and still disagree, that means your reply is likely to be interesting and highly upvoted. Hm. Yeah, I wouldn't want anyone to actually be told "read all the sequences" (and afaik this never happens). It'd be unreasonable to, say, expect people to read the quantum mechanics sequence if they don't intend to discuss QM interpretations. However, problems like what is evidence and how to avoid common reasoning failures are relevant to pretty much everything, so I think an expectation of having read Map and Territory and Mysterious Answers would be useful.
4Bugmaster
Agreed. I emphatically agree with you there, as well; but by making this site more "phygvfu", we risk losing this capability. I agree that these are very useful concepts in general, but I still maintain that it's best to provide the links to these posts in context, as opposed to simply locking out anyone who hadn't read them -- which is what nyan_sandwich seems to be suggesting.
6MarkusRamikin
Trouble is, I'm not really sure what nyan_sandwich is suggesting, in specific and concrete terms, over and above already existing norms and practices. "I wish we had higher quality debate" is not a mechanism.

Upvoted.

I agree pretty much completely and I think if you're interested in Less Wrong-style rationality, you should either read and understand the sequences (yes, all of them), or go somewhere else. Edit, after many replies: This claim is too strong. I should have said instead that people should at least be making an effort to read and understand the sequences if they wish to comment here, not that everyone should read the whole volume before making a single comment.

There are those who think rationality needs to be learned through osmosis or whatever. That...

[anonymous]280

I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general. This is probably one of the reasons why.

This is a pretty hardcore assertion.

I am thinking of lukeprog's and Yvain's stuff as counterexamples.

I think of them (and certain others) as exceptions that prove the rule. If you take away the foundation of the sequences and the small number of awesome people (most of whom, mind you, came here because of Eliezer's sequences), you end up with a place that's indistinguishable from the programmer/atheist/transhumanist/etc. crowd, which is bad if LW is supposed to be making more than nominal progress over time.

Standard disclaimer edit because I have to: The exceptions don't prove the rule in the sense of providing evidence for the rule (indeed, they are technically evidence against it), but they do allow you to notice it. This is what the phrase really means.

5David_Gerard
Considering how it was subculturally seeded, this should not be surprising. Remember that LW has proceeded in a more or less direct subcultural progression from the Extropians list of the late '90s, with many of the same actual participants. It's an online community. As such, it's a subculture and it's going to work like one. So you'll see the behaviour of an internet forum, with a bit of the topical stuff on top. How would you cut down the transhumanist subcultural assumptions in the LW readership? (If I ever describe LW to people these days it's something like "transhumanists talking philosophy." I believe this is an accurate description.)
6[anonymous]
Transhumanism isn't the problem. The problem is that when people don't read the sequences, we are no better than any other forum of that community. Too many people are not reading the sequences, and not enough people are calling them out on it.
4[anonymous]
Your edit updated me towards thinking that I'm confused about this exception-rule business. Can you link me to something?
6Grognor
-Wikipedia (!!!) (I should just avoid this phrase from now on, if it's going to cause communication problems.)
4komponisto
I suspect the main cause of misunderstanding (and subsequent misuse) is omission of the relative pronoun "that". The phrase should always be "[that is] the exception that proves the rule", never "the exception proves the rule".
0thomblake
Probably even better to just include "in cases not so excepted" at the end.
0TheOtherDave
I'd always thought they prove the rule in the sense of testing it.
0[anonymous]
Exceptions don't prove rules. You are mostly right, which is exactly what I was getting at with the "promoted is the only good stuff" comment. I do think there is a lot of interesting, useful stuff outside of promoted, tho; it's just mixed with the usual programmer/atheist/transhumanist/etc-level stuff.

I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general. This [people not reading them] is probably one of the reasons why.

Um, after I read the sequences I ploughed through every LW post from the start of LW to late 2010 (when I started reading regularly). What I saw was that the sequences were revered, but most of the new and interesting stuff from that intervening couple of years was ignored. (Though it's probably just me.)

At this point A Group Is Its Own Worst Enemy is apposite. Note the description of the fundamentalist smackdown as a stage communities go through. Note also that it usually fails when it turns out the oldtimers have differing and incompatible ideas on what the implicit constitution actually was in the good old days.

tl;dr declarations of fundamentalism heuristically strike me as inherently problematic.

edit: So what about this comment rated a downvote?

edit 2: ah - the link to the Shirky essay appears to be giving the essay in the UK, but Viagra spam in the US o_0 I've put a copy up here.

6Eugine_Nier
I suspect that's because it's poorly indexed. This should be fixed.
6[anonymous]
This is very much why I have only read some of it. If the more recent LW stuff was better indexed, that would be sweet.
0wedrifid
Hey, I think "Dominions" should be played but do want to play it and did purchase the particular object at the end of the link. I don't understand why you linked to it though.
2Eugine_Nier
The link text is a quote from the game description.
0wedrifid
Ahh, now I see it. Clever description all around!
3David_Gerard
Yeah, I didn't read it from the wiki index, I read it by going to the end of the chronological list and working forward.
3[anonymous]
Am I in some kind of internet black hole? That link took me to some viagra spam site.
7David_Gerard
It's a talk by Clay Shirky, called "A Group Is Its Own Worst Enemy". I get the essay ... looking in Google, it appears someone's done some scurvy DNS tricks with shirky.com and the Google cache is corrupted too. Eegh. I've put up a copy here and changed the link in my comment.
0TimS
shirky.com/writings/group_enemy.html ???
0TimS
I thought it was great. Very good link.
6David_Gerard
It's a revelatory document. I've seen so many online communities, of varying sizes, go through precisely what's described there. (Mark Dery's Flame Wars (1994) - which I've lost my copy of, annoyingly - has a fair bit of material on similar matters, including one chapter that's a blow-by-blow description of such a crisis on a BBS in the late '80s. This was back when people could still seriously call this stuff "cyberspace." This leads me to suspect the progression is some sort of basic fact of online subcultures. This must have had serious attention from sociologists, considering how rabidly they chase subcultures ...) LW is an online subcultural group and its problems are those of online subcultural groups; these have been faced by many, many groups in the past, and if you think they're reminiscent of things you've seen happen elsewhere, you're likely right.
3TimS
Maybe if you reference Evaporative Cooling, which is the converse of the phenomenon you describe, you'd get a better reception?
0David_Gerard
I'm thinking it's because someone appears to have corrupted DNS for Shirky's site for US readers ... I've put up a copy myself here. I'm not sure it's the same thing as evaporative cooling. At this point I want a clueful sociologist on hand.
TimS100

Evaporative cooling is change to average belief from old members leaving.

Your article is about change to average belief from new members joining.

2David_Gerard
Sounds plausibly related, and well spotted ... but it's not obvious to me how they're functionally converses in practice to the degree that you could talk about one in place of talking about the other. This is why I want someone on hand who's thought about it harder than I have. (And, more appositely, the problem here is specifically a complaint about newbies.)
3TimS
I wasn't suggesting that one replaced the other, but that one was conceptually useful in thinking about the other.
3David_Gerard
Definitely useful, yes. I wonder if anyone's sent Shirky the evaporative cooling essay.

I agree pretty much completely and I think if you're interested in Less Wrong-style rationality, you should either read and understand the sequences (yes, all of them), or go somewhere else.

I don't consider myself a particularly patient person when it comes to tolerating ignorance or stupidity but even so I don't much mind if people here contribute without having done much background reading. What matters is that they don't behave like an obnoxious prat about it and are interested in learning things.

I do support enforcing high standards of discussion. People who come here straight from their high school debate club and Introduction to Philosophy 101 and start throwing around sub-lesswrong-standard rhetoric should be downvoted. Likewise for confident declarations of trivially false things. There should be more correction of errors that would probably be accepted (or even rewarded) in many other contexts. These are the kinds of things that don't actively exclude but do have the side effect of raising the barrier to entry. A necessary sacrifice.

3[anonymous]
The core-sequence fail gets downvoted pretty reliably. I can't say the same for metaethics or AI stuff. We need more people to read those sequences so that they can point out and downvote failure.

The core-sequence fail gets downvoted pretty reliably. I can't say the same for metaethics or AI stuff. We need more people to read those sequences so that they can point out and downvote failure.

Isn't the metaethics sequence not liked very much? I haven't read it in a while, and so I'm not sure that I actually read all of the posts, but I found what I read fairly squishy, and not even on the level of, say, Nietzsche's moral thought.

Downvoting people for not understanding that beliefs constrain expectation I'm okay with. Downvoting people for not agreeing with EY's moral intuitions seems... mistaken.

Downvoting people for not understanding that beliefs constrain expectation I'm okay with.

Beliefs are only sometimes about anticipation. LessWrong repeatedly makes huge errors when they interpret "belief" in such a naive fashion;—giving LessWrong a semi-Bayesian justification for this collective failure of hermeneutics is unwise. Maybe beliefs "should" be about anticipation, but LessWrong, like everybody else, can't reliably separate descriptive and normative claims, which is exactly why this "beliefs constrain anticipation" thing is misleading. ...There's a neat level-crossing thingy in there.

Downvoting people for not agreeing with EY's moral intuitions seems... mistaken.

EY thinking of meta-ethics as a "solved problem" is one of the most obvious signs that he's very spotty when it comes to philosophy and can't really be trusted to do AI theory.

(Apologies if I come across as curmudgeonly.)

2wedrifid
He does? I know he doesn't take it as seriously as other knowledge required for AI but I didn't think he actually thought it was a 'solved problem'.
8Will_Newsome
From my favorite post and comments section on Less Wrong thus far:
5wedrifid
Yes, it looks like Eliezer is mistaken there (or speaking hyperbolically). I agree with: ... but would weaken the claim drastically to "Take metaethics, a clearly reducible problem with many technical details to be ironed out". I suspect you would disagree with even that, given that you advocate meta-ethical sentiments that I would negatively label "Deeply Mysterious". This places me approximately equidistant from your respective positions.
4Will_Newsome
I only weakly advocate certain (not formally justified) ideas about meta-ethics, and remain deeply confused about certain meta-ethical questions that I wouldn't characterize as mere technical details. One simple example: Eliezer equates reflective consistency (a la CEV) with alignment with the big blob of computation he calls "right"; I still don't know what argument, technical or non-technical, could justify such an intuition, and I don't know how Eliezer would make tradeoffs if the two did in fact have different referents. This strikes me as a significant problem in itself, and there are many more problems like it. (Mildly inebriated, apologies for errors.)
5gjm
Are you sure Eliezer does equate reflective consistency with alignment with what-he-calls-"right"? Because my recollection is that he doesn't claim either (1) that a reflectively consistent alien mind need have values at all like what he calls right, or (2) that any individual human being, if made reflectively consistent, would necessarily end up with values much like what he calls right. (Unless I'm awfully confused, denial of (1) is an important element in his thinking.) I think he is defining "right" to mean something along the lines of "in line with the CEV of present-day humanity". Maybe that's a sensible way to use the word, maybe not (for what it's worth, I incline towards "not") but it isn't the same thing as identifying "right" with "reflectively consistent", and it doesn't lead to a risk of confusion if the two turn out to have different referents (because they can't).
1thomblake
He most certainly does not.
4steven0461
Relevant quote from Morality as Fixed Computation:
0thomblake
Thanks - I hope you're providing that as evidence for my point.
0steven0461
Sort of. It certainly means he doesn't define morality as extrapolated volition. (But maybe "equate" meant something looser than that?)
1Will_Newsome
Aghhhh this is so confusing. Now I'm left thinking both you and Wei Dai have furnished quotes supporting my position, User:thomblake has interpreted your quote as supporting his position, and neither User:thomblake nor User:gjm has replied to Wei Dai's quote so I don't know if they'd interpret it as evidence of their position too! I guess I'll just assume I'm wrong in the meantime.
0Will_Newsome
Now two people have said the exact opposite things both of which disagree with me. :( Now I don't know how to update. I plan on re-reading the relevant stuff anyway.
0gjm
If you mean me and thomblake, I don't see how we're saying exact opposite things, or even slightly opposite things. We do both disagree with you, though.
1Will_Newsome
I guess I can interpret User:thomblake two ways, but apparently my preferred way isn't correct. Let me rephrase what you said from memory. It was like, "right is defined as the output of something like CEV, but that doesn't mean that individuals won't upon reflection differ substantially". User:thomblake seemed to be saying "Eliezer doesn't try to equate those two or define one as the other", not "Eliezer defines right as CEV, he doesn't equate it with CEV". But you think User:thomblake intended the latter? Also, have I fairly characterized your position?
2gjm
I don't know whether thomblake intended the latter, but he certainly didn't say the former. I think you said "Eliezer said A and B", thomblake said "No he didn't", and you are now saying he meant "Eliezer said neither A nor B". I suggest that he said, or at least implied, something rather like A, and would fiercely repudiate B.
1thomblake
I definitely meant the latter, and I might be persuaded of the former. Though "define" still seems like the wrong word. More like, " 'right' is defined as *point at big blob of poetry*, and I expect it will be correctly found via the process of CEV." - but that's still off-the-cuff.
1Will_Newsome
Thanks much; I'll keep your opinion in mind while re-reading the meta-ethics sequence/CEV/CFAI. I might be being unduly uncharitable to Eliezer as a reaction to noticing that I was unduly (objectively-unjustifiably) trusting him. (This would have been a year or two ago.) (I notice that many people seem to unjustifiably disparage Eliezer's ideas, but then again I notice that many people seem to unjustifiably anti-disparage (praise, re-confirm, spread) Eliezer's ideas;—so I might be biased.) (Really freaking drunk, apologies for errors, e.g. politically unmotivated adulation/anti-adulation, or excessive self-divulgation. (E.g., I suspect "divulgation" isn't a word.))
1thomblake
Not to worry, it means "The act of divulging" or else "public awareness of science" (oddly).
0[anonymous]
I mean, it's not so odd. di-vulgar-tion; the result of making public (something).
0thomblake
Well: divulge, divulgate, divulgation. But yeah, I just find it odd that it's a couple of steps removed from the obvious usage. I ask myself, "Why science specifically?" and "Why public awareness rather than making the public aware?"
1wedrifid
If I understand you correctly, then I don't think I have a problem with this particular example, at least not when I assume the kind of disclaimers and limitations of scope that I would include if I were to attempt to formally specify such a thing. I suspect I agree with some of your objections to various degrees.

Part of my concern about Eliezer trying to build FAI also stems from his treatment of metaethics. Here's a caricature of how his solution looks to me:

Alice: Hey, what is the value of X?

Bob: Hmm, I don't know. Actually I'm not even sure what it means to answer that question. What's the definition of X?

Alice: I don't know how to define it either.

Bob: Ok... I don't know how to answer your question, but what if we simulate a bunch of really smart people and ask them what the value of X is?

Alice: Great idea! But what about the definition of X? I feel like we ought to be able to at least answer that now...

Bob: Oh that's easy. Let's just define it as the output of that computation I just mentioned.

2amit
I thought the upshot of Eliezer's metaethics sequence was just that "right" is a fixed abstract computation, not that it's (the output of) some particular computation that involves simulating really smart people. CEV is not even mentioned in the sequence (EDIT: whoops it is.). (Indeed just saying that it's a fixed abstract computation is at the right level of abstraction to qualify as metaethics; saying that it's some particular computation would be more like just plain ethics. The upshot does feel kind of underwhelming and obvious. This might be because I just don't remember how confusing the issue looked before I read those posts. It could also mean that Eliezer claiming that metaethics is a solved problem is not as questionable as it might seem. And it could also mean that metaethics being solved doesn't constitute as massive progress as it might seem.)

The upshot does feel kind of underwhelming and obvious. This might be because I just don't remember how confusing the issue looked before I read those posts.

BTW, I've had numerous "wow" moments with philosophical insights, some of which made me spend years considering their implications. For example:

  • Bayesian interpretation of probability
  • AI / intelligence explosion
  • Tegmark's mathematical universe
  • anthropic principle / anthropic reasoning
  • free will as the ability to decide logical facts

I expect that a correct solution to metaethics would produce a similar "wow" reaction. That is, it would be obvious in retrospect, but in an overwhelming instead of underwhelming way.

2stoat
Is the insight about free will and logical facts part of the sequences? Or is it something you or others discuss in a post somewhere? I'd like to learn about it, but my searches failed.
3Wei Dai
I never wrote a post on it specifically, but it's sort of implicit in my UDT post (see also this comment). Eliezer also has a free will sequence which is somewhat similar/related but I'm not sure if he would agree with my formulation.
-1XiXiDu
What is "you"? And what is "deciding"? Personally I haven't been able to come to any redefinition of free will that makes more sense than this one. I haven't read the free will sequence. And I haven't read up on decision theory because I wasn't sure if my math education is good enough yet. But I doubt that if I was going to read it I would learn that you can salvage the notion of "deciding" from causality and logical facts. The best you can do is look at an agent and treat it is as a transformation. But then you'd still be left with the problem of identity.
-6Will_Newsome
0Wei Dai
It's mentioned here: ETA: Just in case you're right and Eliezer somehow meant for that paragraph not to be part of his metaethics, and that his actual metaethics is just "morality is a fixed abstract computation", then I'd ask, "If morality is a fixed abstract computation, then it seems that rationality must also be a fixed abstract computation. But don't you think a complete "solved" metaethics should explain how morality differs from rationality?"
0gRR
Rationality computation outputs statements about the world; morality evaluates them. Rationality is universal and objective, so it is unique as an abstract computation, not just fixed. Morality is arbitrary.
3Eugine_Nier
How so? Every argument I've heard for why morality is arbitrary applies just as well to rationality.
3gRR
If we assume some kind of mathematical realism (which seems to be necessary for "abstract computation" and "uniqueness" to have any meaning) then there exist objectively true statements and computations that generate them. At some point there are Goedelian problems, but at least all of the computations agree on the primitive-recursive truths, which are therefore universal, objective, unique, and true. Any rational agent (optimization process) in any world with some regularities would exploit these regularities, which means use math. A reflective self-optimizing rational agent would arrive at the same math as us, because the math is unique. Of course, all these points are made by a fallible human brain and so may be wrong. But there is nothing even like that for morality. In fact, when a moral statement seems universal under sufficient reflection, it stops being a moral statement and becomes simply rational, like cooperating in the Prisoner's Dilemma when playing against the right opponents.
1Will_Newsome
What is the distinction you are making between rationality and morality, then? What makes you think the former won't be swallowed up by the latter (or vice versa!) in the limit of infinite reflection? (Sorta drunk, apologies for conflating conflation of rationality and morality with lack of conflation of rationality and morality, probabilistically-shouldly.) ETA: I don't understand how my comments can be so awesome when I'm obviously so freakin' drunk. ;P . Maybe I should get drunk all the freakin' time. Or study Latin all the freakin' time, or read the Bible all the freakin' time, or ponder how often people are obviously wrong when they use the phrase "all the freakin' time" (let alone "freakin[']") (especially when they use the phrase "all the freakin' time" all the freakin' time, naturally-because-reflexively)....
0gRR
That was the distinction - one is universal, another arbitrary, in the limit of infinite reflection. I suppose, "there is nothing arbitrary" is a valid (consistent) position, but I don't see any evidence for it.
1Will_Newsome
Interesting! You seem to be a moral realist (cognitivist, whatever) and an a-theist. (I suspect this is the typical LessWrong position, even if the typical LessWronger isn't as coherent as you.) I'll take note that I should pester you and/or take care to pay attention to your opinions (comments) more in the future. Also, I thank you for showing me what the reasoning process would be that would lead one to that position. (And I think that position has a very good chance of being correct—in the absence of justifiably-ignorable inside-view (non-communicable) evidence I myself hold.) (It's probably obvious that I'm pretty damn drunk. (Interesting that alcohol can be just as effective as LSD or cannabis. (Still not as effective as nitrous oxide or DMT.)))
1gRR
Cognitivist yes, moral realist, no. IIUC, it's EY's position ("morality is a computation"), so naturally it's the typical LessWrong position. Universally valid statements must have universally-available evidence, no? Really nothing like LSD, which makes it impossible to write anything at all, at least for me.
0Eugine_Nier
Assuming it started with the same laws of inference and axioms. Also I was mostly thinking of statements about the world, e.g., physics.
0gRR
Or equivalent ones. But no matter where it started, it won't arrive at different primitive-recursive truths, at least according to my brain's current understanding. Is there significant difference? Wherever there are regularities in physics, there's math (=study of regularities). Where no regularities exist, there's no rationality.
0Eugine_Nier
What about the poor beings with an anti-inductive prior? More generally read this post by Eliezer.
0gRR
I think the poor things are already dead. More generally, I am aware of that post, but is it relevant? The possible mind design space is of course huge and contains lots of irrational minds, but here I am arguing about universality of rationality.
0Eugine_Nier
My point, as I stated above, is that every argument I've heard against universality of morality applies just as well to rationality. I agree with your statement: I would also agree with the following: The possible mind design space is of course huge and contains lots of immoral minds, but here I am arguing about universality of morality.
0gRR
But rationality is defined by external criteria - it's about how to win (=achieve intended goals). Morality doesn't have any such criteria. Thus, "rational minds" is a natural category. "Moral minds" is not.
0David_Gerard
Yeah: CEV appears to just move the hard bit. Adding another layer of indirection.
2Eugine_Nier
To take Eliezer's statement one meta-level down:
1XiXiDu
What did he mean by "I tried that..."?
1Will_Newsome
I'm not at all sure, but I think he means CFAI.
1Mitchell_Porter
Possibly he means this.
0whowhowho
He may have solved it, but if only he or someone else could say what the solution was.
1Vaniver
Can you give examples of beliefs that aren't about anticipation?
8wedrifid
Beliefs about things that are outside our future light cone possibly qualify, to the extent that the beliefs don't relate to things that leave historical footprints. If you'll pardon an extreme and trite case, I would have a belief that the guy who flew the relativistic rocket out of my light cone did not cease to exist as he passed out of that cone and also did not get eaten by a giant space monster ten minutes after. My anticipations are not constrained by beliefs about either of those possibilities. In both cases my inability to constrain my anticipated experiences speaks to my limited ability to experience and not a limitation of the universe. The same principles of 'belief' apply even though it has incidentally fallen out of the scope which I am able to influence or verify even in principle.
4Will_Newsome
Beliefs that aren't easily testable also tend to be the kind of beliefs that have a lot of political associations, and thus tend not to act like beliefs as such so much as policies. Also, even falsified beliefs tend to be summarily replaced with new untested/not-intended-to-be-tested beliefs, e.g. "communism is good" with "correctly implemented communism is good", or "whites and blacks have equal average IQ" with "whites and blacks would have equal average IQ if they'd had the same cultural privileges/disadvantages". (Apologies for the necessary political examples. Please don't use this as an opportunity to talk about communism or race.) Many "beliefs" that aren't politically relevant—which excludes most scientific "knowledge" and much knowledge of your self, the people you know, what you want to do with your life, et cetera—are better characterized as knowledge, and not beliefs as such. The answers to questions like "do I have one hand, two hands, or three hands?" or "how do I get back to my house from my workplace?" aren't generally beliefs so much as knowledge, and in my opinion "knowledge" is not only epistemologically but cognitively-neurologically a more accurate description, though I don't really know enough about memory encoding to really back up that claim (though the difference is introspectively apparent). Either way, I still think that given our knowledge of the non-fundamental-ness of Bayes, we shouldn't try too hard to stretch Bayes-ness to fit decision problems or cognitive algorithms that Bayes wasn't meant to describe or solve, even if it's technically possible to do so.
0Eugine_Nier
I believe the common term for that mistake is "no true Scotsman".
2Vaniver
What do we lose by saying that doesn't count as a belief? Some consistency when we describe how our minds manipulate anticipations (because we don't separate out ones we can measure and ones we can't, but reality does separate those, and our terminology fits reality)? Something else?
1Eugine_Nier
So if someone you cared about is leaving your future light cone, you wouldn't care if he gets horribly tortured as soon as he's outside of it?
2Vaniver
I'm not clear on the relevance of caring to beliefs. I would prefer that those I care about not be tortured, but once they're out of my future light cone whatever happens to them is a sunk cost- I don't see what I (or they) get from my preferring or believing things about them.
0Eugine_Nier
Yes, but you can affect what happens to them before they leave.
2Vaniver
Before they leave, their torture would be in my future light cone, right?
0Eugine_Nier
Oops, I just realized that in my hypothetical scenario, by someone being tortured outside your light cone I meant someone being tortured somewhere your two future light cones don't intersect.
2Vaniver
Indeed; being outside of my future light cone just means whatever I do has no impact on them. But now not only can I not impact them, but they're also dead to me (as they, or any information they emit, won't exist in my future). I still don't see what impact caring about them has.
0Eugine_Nier
Ok, my scenario involves your actions having an effect on them before your two light cones become disjoint.
2Vaniver
Right, but for my actions to have an effect on them, they have to be in my future light cone at the time of action. It sounds like you're interested in events that are in my future light cone but will not be in any of the past light cones centered at my future intervals - like, for example, things that I can set in motion now which will not come to fruition until after I'm dead, or the person I care about pondering whether or not to jump into a black hole. Those things are worth caring about so long as they're in my future light cone, and it's meaningful to have beliefs about them to the degree that they could be in my past light cone in the future.
5Will_Newsome
The best illustration I've seen thus far is this one. (Side note: I desire few things more than a community where people automatically and regularly engage in analyses like the one linked to. Such a community would actually be significantly less wrong than any community thus far seen on Earth. When LessWrong tries to engage in causal analyses of why others believe what they believe it's usually really bad: proffered explanations are variations on "memetic selection pressures", "confirmation bias", or other fully general "explanations"/rationalizations. I think this in itself is a damning critique of LessWrong, and I think some of the attitude that promotes such ignorance of the causes of others' beliefs is apparent in posts like "Our Phyg Is Not Exclusive Enough".)
6Vaniver
I agree that that post is the sort of thing that I want more of on LW. It seems to me like Steve_Rayhawk's comment is all about anticipation - I hold position X because I anticipate it will have Y impact on the future. But I think I see the disconnect you're talking about - the position one takes on global warming is based on anticipations one has about politics, not the climate, but it's necessary (and/or reduces cognitive dissonance) to state the political position in terms of anticipations one has about the climate. I don't think public stated beliefs have to be about anticipation - but I do think that private beliefs have to be (should be?) about anticipation. I also think I'm much more sympathetic to the view that rationalizations can use the "beliefs are anticipation" argument as a weapon without finding the true anticipations in question (like Steve_Rayhawk did), but I don't think that implies that "beliefs are anticipation" is naive or incorrect. Separating out positions, identities, and beliefs seems more helpful than overloading the word "beliefs".
2Will_Newsome
You seem to be modeling the AGW disputant's decision policy as if he is internally representing, in a way that would be introspectively clear to him, his belief about AGW and his public stance about AGW as explicitly distinguished nodes;—as opposed to having "actual belief about AGW" as a latent node that isn't introspectively accessible. That's surely the case sometimes, but I don't think that's usually the case. Given the non-distinguishability of beliefs and preferences (and the theoretical non-unique-decomposability (is there a standard economic term for that?) of decision policies) I'm not sure it's wise to use "belief" to refer to only the (in many cases unidentifiable) "actual anticipation" part of decision policies, either for others or ourselves, especially when we don't have enough time to be abnormally reflective about the causes and purposes of others'/our "beliefs". (Areas where such caution isn't as necessary are e.g. decision science modeling of simple rational agents, or largescale economic models. But if you want to model actual people's policies in complex situations then the naive Bayesian approach (e.g. with influence diagrams) doesn't work or is way too cumbersome. Does your experience differ from mine? You have a lot more modeling experience than I do. Also I get the impression that Steve disagrees with me at least a little bit, and his opinion is worth a lot more than mine.) Another more theoretical reason I encourage caution about the "belief as anticipation" idea is that I don't think it correctly characterizes the nature of belief in light of recent ideas in decision theory. To me, beliefs seem to be about coordination, where your choice of belief (e.g. expecting a squared rather than a cubed modulus Born rule) is determined by the innate preference (drilled into you by ecological contingencies and natural selection) to coordinate your actions with the actions and decision policies of the agents around you, and where your utility function
1Vaniver
I'm describing it that way but I don't think the introspection is necessary- it's just easier to talk about as if he had full access to his mind. (Private beliefs don't have to be beliefs that the mind's narrator has access to, and oftentimes are kept out of its reach for security purposes!) I don't think I've seen any Bayesian modeling of that sort of thing, but I haven't gone looking for it. Bayes nets in general are difficult for people, rather than computers, to manipulate, and so it's hard to decide what makes them too cumbersome. (Bayes nets in industrial use, like for fault diagnostics, tend to have hundreds if not thousands of nodes, but you wouldn't have a person traverse them unaided.) If you wanted to code a narrow AI that determined someone's mood by, say, webcam footage of them, I think putting your perception data into a Bayes net would be a common approach. Political positions / psychology seem tough. I could see someone do belief-mapping and correlation in a useful way, but I don't see analysis on the level of Steve_Rayhawk's post coming out of a computer-run Bayes net anytime soon, and I don't think drawing out a Bayes net would help significantly with that sort of analysis. Possible but unlikely- we've got pretty sophisticated dedicated hardware for very similar things. Hmm. I'm going to need to sleep on this, but this sort of coordination still smells to me like anticipation. (A general comment: this conversation has moved me towards thinking that it's useful for the LW norm to be tabooing "belief" and using "anticipation" instead when appropriate, rather than trying to equate the two terms. I don't know if you're advocating for tabooing "belief", though.)
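(For readers who haven't worked with them: below is a minimal sketch of the kind of small Bayes net Vaniver mentions, with a hidden "mood" node inferred from two webcam-derived features. The structure, variable names, and probabilities are purely illustrative assumptions, not anything proposed in this thread.)

```python
# Toy Bayes net: hidden Mood node, two observed features from a webcam classifier.
# All numbers are made up for illustration.

P_mood = {"happy": 0.6, "unhappy": 0.4}           # prior over the hidden node
P_smiling = {"happy": 0.8, "unhappy": 0.2}        # P(smiling=True | mood)
P_furrowed = {"happy": 0.1, "unhappy": 0.7}       # P(brow_furrowed=True | mood)

def posterior_mood(smiling: bool, furrowed: bool) -> dict:
    """Compute P(mood | evidence) by enumerating the single hidden node."""
    joint = {}
    for mood, prior in P_mood.items():
        p_s = P_smiling[mood] if smiling else 1 - P_smiling[mood]
        p_f = P_furrowed[mood] if furrowed else 1 - P_furrowed[mood]
        joint[mood] = prior * p_s * p_f
    total = sum(joint.values())
    return {mood: p / total for mood, p in joint.items()}

# Evidence: the perception layer reports "smiling, no furrowed brow".
print(posterior_mood(smiling=True, furrowed=False))
# -> roughly {'happy': 0.95, 'unhappy': 0.05} under these made-up numbers
```

A real perception system would have many more nodes and learned parameters, but the inference step has the same shape: beliefs about the hidden state constrain which observations you anticipate, and the observations update those beliefs.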
0Will_Newsome
(Complement to my other reply: You might not have seen this comment, where I suggest "knowledge" as a better descriptor than "belief" in most mundane settings. (Also I suspect that people's uses of the words "think" versus "believe" are correlated with introspectively distinct kinds of uncertainty.))
0[anonymous]
Beliefs about primordial cows, etc. Most people's beliefs. He's talking descriptively, not normatively.
2Vaniver
Don't my beliefs about primordial cows constrain my anticipation of the fossil record and development of contemporary species? I think "most people's beliefs" fit the anticipation framework- so long as you express them in a compartmentalized fashion, and my understanding of the point of the 'belief=anticipation' approach is that it helps resist compartmentalization, which is generally positive.
9[anonymous]
Metaethics sequence is a bit of a mess, but the point it made is important, and it doesn't seem like it's just some weird opinion of Eliezer's. After I read it I was like, "Oh, ok. Morality is easy. Just do the right thing. Where 'right' is some incredibly complex set of preferences that are only represented implicitly in physical human brains. And it's OK that it's not supernatural or 'objective', and we don't have to 'justify' it to an ideal philosophy student of perfect emptiness". Fake utility functions, and Recursive justification stuff helped. Maybe there's something wrong with Eliezer's metaethics, but I haven't seen anyone point it out, and have no reason to suspect it. Most of the material that contradicts it is obvious mistakes from just not having read and understood the sequences, not an enlightened counter-analysis.

Hm. I think I'll put on my project list "reread the metaethics sequence and create an intelligent reply." If that happens, it'll be at least two months out.

0[anonymous]
I look forward to that.

Metaethics sequence is a bit of a mess, but the point it made is important, and it doesn't seem like it's just some weird opinion of Eliezer's.

Has it ever been demonstrated that there is a consensus on what point he was trying to make, and that he in fact demonstrated it?

He seems to reach a conclusion, but I don't believe he demonstrated it, and I never got the sense that he carried the day in the peanut gallery.

5Eugine_Nier
Try actually applying it to some real life situations and you'll quickly discover the problems with it.
2orthonormal
There's a difference between a metaethics and an ethical theory. The metaethics sequence is supposed to help dissolve the false dichotomy "either there's a metaphysical, human-independent Source Of Morality, or else the nihilists/moral relativists are right". It's not immediately supposed to solve "So, should we push a fat man off the bridge to stop a runaway trolley before it runs over five people?" For the second question, we'd want to add an Ethics Sequence (in my opinion, Yvain's Consequentialism FAQ lays some good groundwork for one).
1[anonymous]
such as?

Well, for starters, determining whether something is a preference or a bias is rather arbitrary in practice.

3[anonymous]
I struggled with that myself, but then figured out a rather nice quantitative solution. Eliezer's stuff doesn't say much about that topic, but that doesn't mean it fails at it.
3Eugine_Nier
I don't think your solution actually resolves things since you still need to figure out what weights to assign to each of your biases/values.
2[anonymous]
You mean that it's not something that I could use to write an explicit utility function? Of course. Beyond that, whatever weight all my various concerns have is handled by built-in algorithms. I just have to do the right thing.
1wedrifid
The main problem I have is that it is grossly incomplete. There are a few foundational posts but it cuts off without covering what I would like to be covered.
3[anonymous]
What would you like covered? Or is it just that vague "this isn't enough" feeling?
6wedrifid
I can't fully remember - it's been a while since I considered the topic so I mostly have the cached conclusion. More on preference aggregation is one thing. A 'preferences are subjectively objective' post. A post that explains more completely what he means by 'should' (he has discussed and argued about this in comments).
0whowhowho
It's much worse than that. Nobody on LW seems to be able to understand it at all. Nah. Subjectivism. Euthyphro.
3wedrifid
Random factoid: The post by Eliezer that I find most useful for describing (a particular aspect of) moral philosophy is actually a post about probability.
2thomblake
That is an excellent point.
-3Will_Newsome
(In general I use most of the same intuitions for values as I do for probability; they share a lot of the same structure, and given the oft-remarked-on non-unique-decomposability of decision policies they seem to be special cases of some more fundamental thing that we don't yet have a satisfactory language for talking about. You might like this post and similar posts by Wei Dai that highlight the similarities between beliefs and values. (BTW, that post alone gets you half the way to my variant of theism.) Also check out this post by Nesov. (One question that intrigues me: is there a nonlinearity that results in non-boring outputs if you have an agent who calculates the expected utility of an action by dividing the universal prior probability of A by the universal prior probability of A (i.e., unity)? (The reason you might expect nonlinearities is that some actions depend on the output of the agent program itself, which is encoded by the universal prior but is undetermined until the agent fills in the blank. Seems to be a decent illustration of the more general timeful/timeless problem.)))
0gjm
I think you mean that it would get you halfway there. Do you have good reason to think it would do the same for others who aren't already convinced? (It seems like there could be non-question-begging reasons to think that -- e.g., it might turn out that people who've read and understood it quite commonly end up agreeing with you about God.)
-5Will_Newsome
1wedrifid
Point taken. There is certainly a lack along those lines.
8[anonymous]
Thanks. I was going to include something along those lines, but then I didn't. But really, if you haven't read the sequences, and don't care to, the only thing that seperates LW from r/atheism, rationalwiki, whatever that place is you linked to, and so on is that a lot of people here have read the sequences, which isn't a fair reason to hang out here.
4Manfred
My recent post explains how to get true beliefs in situations like the anthropic trilemma, which post begins with the words "speaking of problems I don't know how to solve." However, there is a bit of a remaining problem, since I don't know how to model the wrong way of doing things (naive application of Bayes' rule to questionable interpretations) well enough to tell whether it's fixable or not, so although the problem is solved, it is not dissolved.
0Grognor
I quietly downvoted your post when you made it for its annoying style and because I didn't think it really solved any problems, just asserted that it did.
6Manfred
What could I do to improve the style of my writing?
4Bugmaster
How do you measure "progress", exactly ? I'm not sure what the word means in this context.
0David_Gerard
Yes, this needs clarification. Is it "I like it better/I don't like it better" or something a third party can at least see?
3faul_sname
Where, specifically, do you not see progress? I see much better recognition of, say, regression to the mean here than in the general population, despite it never being covered in the sequences.
-1Grognor
This is a very interesting question. I cannot cite a lack of something. But maybe what I'm saying here will be obvious-in-retrospect if I put it like this: This post is terrible. But some of the comments pointing out its mistakes are great. On the other hand, it's easier to point out the mistakes in other people's posts than to be right yourself. Where are the new posts saying thunderously correct things, rather than mediocre posts with great comments pointing out what's wrong with them?
8David_Gerard
That terrible post is hardly an example of a newbie problem - it's clearly a one-off by someone who read one post and isn't interested in anything else about the site, but was sufficiently angry to create a login and post. That is, it's genuine feedback from the outside world. As such, trying really hard to eliminate this sort of post strikes me as something you should be cautious about. Also, such posts are rare.
2Multiheaded
I'm insulted (not in an emotional way! I just want to state my strong personal objection!). Many of us challenge the notion of "progress" being possible or even desirable on topics like Torture vs Specks. And while I've still much to learn, there are people like Konkvistador, who's IMO quite adept at resisting the lure of naive utilitarianism and can put a "small-c conservative" (meaning not ideologically conservative, but technically so) approach to metaethics to good use.
0[anonymous]
Oh, I would agree progress here is questionable, but I agree with Grognor in the sense that LessWrong, at least in top-level posts, isn't as intellectually productive as it could be. Worse, it seems to be a rather closed thing, unwilling to update on information from outside.
-1Alsadius
Demanding that people read tomes of text before you're willing to talk to them seems about the easiest way imaginable to silence any possible dissent. Anyone who disagrees with you won't bother to read your holy books, and anyone who hasn't will be peremptorily ignored. You're engaging in a pretty basic logical fallacy in an attempt to preserve rationality. Engage the argument, not the arguer.
[anonymous]110

Expecting your interlocutors to have a passing familiarity with the subject under discussion is not a logical fallacy.

-3Alsadius
There are ways to have a passing familiarity with rational debate that don't involve reading a million words of Eliezer Yudkowsky's writings.
1[anonymous]
That has nothing to do with whether or not the thing you believe to be a logical fallacy actually is one.
-7Alsadius
-10Alsadius
Shmi60

I think you want it more tiered/topic'ed, not more exclusive, which I would certainly support. Unfortunately, the site design is not a priority.

1[anonymous]
Yeah. Having separate "elite rationality club" and "casual rationality discussion" areas is probably my preferred solution. Too bad everyone who cares doesn't care enough to hack the code. How hard would it be for someone to create and send in a patch for something like exclusive back room discussion? It would be just adding another subreddit, no?

/r/evenlesswrong

I've seen this suggested before, and while it would have positive aspects, from a PR perspective, it would be an utter nightmare. I've been here for slightly less than a year, after being referred to HPMOR. I am very unlikely (prior p = 0.02; given that EY started it and I was obsessed with HPMOR, probably closer to p = 0.07) to have ever followed a forum/blog that had an "exclusive members" section. Insofar as LW is interested in recruiting potential rationalists, this is a horrible, horrible idea.

4Randaly
A more realistic idea (what I think the grandparent was suggesting) is just to try to filter off discussions not strictly related to rationality (HPMOR; fiction threads; the AGI/SIAI discussions; etc.) into the discussion forum, and to stick stuff strictly related to rationality (or relevant paper-sharing, or whatever) in another subreddit.
1John_Maxwell
The obvious alternative is to create a tier below discussion, which would attract some of the lower-quality discussion posts and thereby improve the signal in the main discussion section. Or topical discussion boards...
1[anonymous]
Good point. Would you prefer that we be a bit more hardcore about subscribing to the sequences, and a bit more explicit that people should read them? Or maybe we should have little markers next to people's names saying "this guy has read everything"? Or maybe we should do nothing because all moves are bad? Maybe a more loosely affiliated site that has the strict standard instead of just a "non-noobs" section?
5RobertLumley
In short, no. See my other comment for details. I think the barriers to entry are high enough, and raising them further filters out people we might want. This introduces status problems, and it's impossible (or at least inefficient) to enforce well. I won't claim that the current design we've located in LessWrongspace is optimal, but I'm quite happy with it, and I don't see any way to immediately improve it in the regard you want. Actually, I'll take that back. I would like to see the community encourage the use of tags a lot more. I think if everyone was very dedicated in trying to use the tagging system it might help the problem you're referring to. But in some way, I think those tags also need to be incorporated into titles of discussion posts. I really like the custom of using [META] or [LINK] in the title, and I'd like to see that expand. Again no, really for the same reasons as above.
5Emile
If that happened, unless it was very carefully presented, I would expect a drop in quality in the discussion section because "hey cool down man we aren't in the restricted section here", and because many old-timers might stop spending time there altogether. I would rather see a solution that didn't include such a neat division; where lower standards were treated as the exception (like in HPMoR threads and open threads), not the rule.
2David_Gerard
The approach of seeding another forum or few may help, cf. the Center for Modern Rationality.
1Alsadius
If someone's going to alter this forum software, how about they put a priority on giving me a list of replies to my posts, so I don't have to re-scan long comment threads regularly in order to carry on a discussion? Also, making long threads actually load when I hit "Show all" would be nice too.
1pedanterrific
I'm not sure what you mean here- how would this differ from the inbox? Definitely.
5TheOtherDave
Is there a way to get the inbox to show responses to a post, as well as to comments? That's not the default behavior.
4komponisto
This would be an excellent improvement to the site, and would solve the problem of people (cough lukeprog cough) not reading the comments on their posts.
0TheOtherDave
Just to be clear, I'm not requesting it, I just thought pedanterrific was indicating that it was current behavior.
3Vladimir_Nesov
It doesn't work with LW's inbox, but you can subscribe to threads' RSS feeds.
0pedanterrific
Oh. (Have never made a post.)
2Alsadius
That feature exists? Where?
2pedanterrific
Um. I was referring to the little envelope symbol underneath your karma score.
0Alsadius
I hadn't noticed that before. Thank you. Edit: Though it's seriously annoying to not have the ability to reply from there, or even link directly to it. I'm actually pining for Disqus now, and that's a first.

I know you say that you don't want to end up with "ignore any discussion that contains contradictions of the lesswrong scriptures", but it sounds a bit like that. (In particular, referring to stuff like "properly sequenced LWers" suggests to me that you not only think that the sequences are interesting, but that they are actually right about everything.) The sequences are not scripture, and I think (hope!) there are a lot of LWers who disagree to a greater or lesser degree with them.

For example, I think the metaethics sequence is pretty hopeless (WA...

0somervta
Eliezer considers the metaethics sequence to be a failed explanation, something most people who have read it agree with, so you're not alone.

What if users were expected to have a passing familiarity with the topics the sequences covered, but not necessarily to have read them? That way, if they were going to post about one of the topics covered in the sequences, they could be sure to brush up on the state of the debate first.

4[anonymous]
If you've found some substantially easier way to become reasonably competent -- i.e., possessing a saving throw vs. failing at thinking about thinking -- in a way that doesn't require reading a substantial fraction of the sequences, you're remiss for not describing such a path publicly.

I would guess that hanging out with friends who are aspiring rationalists is a faster way to become rational than reading the sequences.

In any case, it seems pretty clear to me that the sequences do not have a monopoly on rationality. Eliezer isn't the only person in the world who's good at thinking about his thinking.

FWIW, I was thinking along the lines of only requesting passing familiarity with non-core sequences.

3jsteinhardt
I read A Human's Guide to Words and Reductionism, and a little bit of the rest. I at least feel like I have pretty good familiarity with the rest of the topics covered as a result of having a strong technical background. The path is pretty clear, though perhaps harder to take --- just take college-level classes in mathematics, econ, and physics, and think a lot about the material. And talk to other smart people.

Lest anyone get the idea that no-one thinks LW should be more phygish or more exclusive, let me hereby register that I for one would like us all to enforce a little more strongly that people read the sequences and even agree with them in a horrifying manner.

I haven't read most of the sequences yet, and I agree with most of what those LW members you'd like to see more of are saying.

Most of the criticisms I voice are actually rephrased and forwarded arguments and ideas from people much smarter and more impressive than me. Including big names like Doug...

5Wei Dai
UDT can be seen as just this. It was partly inspired/influenced by AIXI anyway, if not exactly an extension of it. Edit: It doesn't incorporate a notion of friendliness yet, but is structured so that, unlike AIXI, such a notion could at least in principle be incorporated. See the last paragraph of Towards a New Decision Theory for some idea of how to do this.
5[anonymous]
That post is part of the reason I made this post. Shit like this from the OP there: !!! I don't expect that, if everyone made more of an effort to be more deeply familiar with the LW materials, there would be no disagreement with them. There is and would be much more interesting disagreement, and a lot less of the default mistakes.

Um, you seem to me to be saying that someone (davidad) who is in fact familiar with the sequences, and who left AI to achieve things well past most of LW's participants, is a perfect example of who you don't want here. Is that really what you meant to put across?

0XiXiDu
Can you provide some examples of interesting disagreement with the LW materials that was acknowledged as such by those who wrote the content or believe that it is correct?
1Rain
My default stance has always been that people disagree, even when informed; otherwise I'd have a lot more organizations and communities to choose from, and there'd be no way I could make it into SI's top donor list.

When EY was writing the sequences, what percentage of the population was he hoping to influence? I suppose a lot. Now some people are bothered because the message began to spread and, in the meantime, the quality of posts is not the same. Well, if the discussion becomes poor, go somewhere else. Highly technical people simply don't get involved in something they see as hopeless or uninteresting, like trying to make people more rational or reduce x-risks.

First they came for the professional philosophers,

and I didn't speak out because I wasn't a professional philosopher.

Then they came for the frequentists,

and I didn't speak out because I wasn't a frequentist.

Then they came for the AI skeptics,

and I didn't speak out because I wasn't skeptical of AI.

and then there was no one left to talk to.

[anonymous]00

"These guys are cultish and they know it, as evidenced by the fact that they're censoring the word 'cult' on their site"

[This comment is no longer endorsed by its author]