Meta-meta point: the fact that you used your own post as an example, rather than a post where you weren't involved, made a very large difference in how I engaged with this Moderating LessWrong article. It prevented me from finishing it. Specifically, when I got to the point where you quoted Benquo, I felt honor-bound to stop reading and go check the source material, which turned out to be voluminous and ambiguous enough that I ran out of time before I came back. This is because, long ago, I made it a bright-line rule to never read someone quoting someone they disagree with in a potentially-hostile manner without checking the original context. That wouldn't have felt necessary if the examples were drawn from discussions where you weren't involved.
(Meta, in case it's relevant to anyone: it's felt to me like LW has been deluged in the last few weeks by people saying things that seem very clearly wrong to me (this is not at all specific to this post / discussion), but in such a way that it would take a lot of effort on my part to explain clearly what seems wrong to me in most of the cases. I'm not willing to spend the time necessary to address every such thing or even most of them, but I also don't have a clear sense of how to prioritize, so I approximately haven't been commenting at all as a result because it just feels exhausting. I'm making an exception for this post more or less on a whim.)
There's a lot I like about this post, and I agree that a lot of benquo's comments were problematic. That said, when I put on my circling hat, it seems pretty clear to me that benquo was speaking from a place of being triggered, and I would have confidently predicted that the conversation would not have improved unless this was addressed in some way. I have some sense of how I would address this in person but less of a sense of how to reasonably address it on LW.
There's something in Duncan...
Endorsed as well, although I think I might have a disagreement re: how reasonable it is for the site to expect people to conform to certain very concrete and grokkable standards even while triggered. I would claim, for instance, that even in the Dragon Army posts, where I was definitely triggered, you can't find any examples of egregious or outright violations of the principles laid out in this essay, and that fewer than 15% of my comments would contain even moderate, marginal violations of them.
I note that this is a prediction that sticks its neck out, if anybody wants to try. I have PDFs of both threads, including all comments, that I share with anyone who requests them.
I have PDFs of both threads, including all comments, that I share with anyone who requests them.
I am curious about the trivial inconvenience, here; why not just share the PDFs with a link, instead of requiring people to ask you for them first?
Highly endorse this. And this in fact might be the entirety of my crux of disagreement.
(expressing wariness about delving into the details here. I am not willing to delve into details that focus on the recent Benquo thread until I've had a chance to talk to Ben in more detail. Interested in diving into details re: past discussions I've had with Duncan, but would probably prefer to do that in a non-public setting because the nature-of-the-thing makes it harder)
[edit: I think I have cruxes separate from this, but they might be similar/entangled]
There has to be some way to add the hypothesis that someone is triggered into the conversation. Like, sure, maybe the given example doesn't cut it, and maybe it's hard/tricky/subtle/we won't get it on the first five tries/no one solution will fit all situations. And maybe you're pointing at something like, this isn't really a hypothesis, but is an assertion clothed so as to be sneakily defensible.
But people do get triggered, and LessWrong has got to be the kind of place where that, itself, can be taken as object—if not by the person who's currently in the middle of a triggered state, then at least by the people around them.
There has to be some way to add the hypothesis that someone is triggered into the conversation. … But people do get triggered, and LessWrong has got to be the kind of place where that, itself, can be taken as object—if not by the person who’s currently in the middle of a triggered state, then at least by the people around them.
I don’t agree with this at all. In fact, I’d say precisely the opposite: Less Wrong has got to be exactly the kind of place where “someone is triggered” should not be added into the conversation—neither by the “triggered” person, nor by others.
My emotional state is my own business. If we are having a conversation on Less Wrong, and I do something which violates some norm, by all means confront me about that violation. I will not use being “triggered” as an excuse, a justification, or even a thing that you have any obligation at all to consider; in return, you will not psychoanalyze me and ask things like whether I am “triggered”. That is—or, rather, that absolutely should be—the social contract.
On Less Wrong (or any similar forum), the interface that we implement is “person who does not, in the context of a conversation/debate/discussion/argument, have any
...Maybe this "social contract" is a fine thing for LessWrong to uphold.
But rationalists should not uphold it, in all places, at all times. In fact, in places where active truth-seeking occurs, this contract should be deliberately (consensually) dropped.
Double Cruxing often involves showing each other the implementation details. I open up my compartments and show you their inner workings. This means sharing emotional states and reactions. My cruxes are here, here, and here, and they're not straightforward, System-2-based propositions. They are fuzzy and emotionally-laden expectations, movies in my head, urges in my body, visceral taste reactions.
The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction. What tree grows without roots? What tree nourishes us without fruit? If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.” Though they argue, one saying “Yes”, and one saying “No”, the two do not anticipate any different experience of the forest. Do not ask which beli...
Double Cruxing often involves showing each other the implementation details.
Then chalk up another reason to disdain Double Cruxing.
If you don’t want to open up your implementation details to me, that is cool. But we’re not going to go to the depths of truth-seeking together without it.
This (“go to the depths of truth-seeking together”) is certainly an attitude that I would not like to see become prevalent on Less Wrong.
(Noting disagreement with your position here, and willingness to expand on that at some other point in time when there are not 5 discussions going on on LW that I feel like I want to participate in more. Rough model outline is something like: "I think close friendships and other environments with high barriers to entry can indeed strongly benefit from people modeling each other's implementation details. Most of the time when environments with a lower standard of selection or trust try to do this, it ends badly. It's not obvious to me, though, that they always have to go badly, or whether there are hypothetical procedures or norms that allow this conversation to go well even in lower-trust environments; I haven't yet seen such a set of norms robustly in action.")
Something that I guess I've never quite gotten is, in your view Said, what is Less Wrong for? In 20 years if everything on LW went exactly the way you think is ideal, what are the good things that would have happened along the way, and how would we know that we made the right call?
(I have my own answers to this, which I think I've explained before but if I haven't done a clear enough job, I can try to spell them out)
That’s a good question, but a tricky one to answer directly. I hope you’ll forgive me if (à la Turing) I substitute a different one, which I think sheds light on the one you asked:
In the time Less Wrong has existed, what has come out of it, what has happened as a result, that is good and positive; and, contrariwise, what has happened that is unfortunate or undesirable?
Here’s my list, which I expect does not match anyone else’s in every detail, but the broad outlines of which seem to me to be clearly reasonable. These lists are in no particular order, and include great and small things alike:
Pros
4. The recruitment of talented mathematicians/etc. to MIRI, and their resulting work on the alignment problem and related topics
5. The elevation of the AI alignment problem into mainstream consciousness
Cons
1. 2. rationalist communities
8. Almost everything that CFAR does and has done
9. The Effective Altruism movement
Just want to note that I think you may be underestimating the extent to which these things on your Cons list have contributed to these things on your Pros list.
For example:
If you found out some of those cons (or some close version of them) were necessary in order to achieve those pros, would anything shift for you?
For instance, if you see people acting to work on/improve/increase the cons... would you see those people as acting badly/negatively if you knew it was the only realistic way to achieve the pros?
(This is just in the hypothetical world where this is true. I do not know if it is.)
Like, what if we just live in a "tragic world" where you can't achieve things like your pros list without... basically feeding people's desire for community and connection? And what if people's desire for connection often ends up taking the form of wanting to live/work/interact together? Would anything shift for you?
(If my hypothetical does nothing, then could you come up with a hypothetical that does?)
Confused and curious about why you put Kocherga in positives and all other rationalist social/community/meatspace things in negatives. I don't think the difference between the two is that large. (I'm a Bay Area rationalist-type person who has been to a couple of things at Kocherga)
FWIW, I appreciated Said giving a response that was a succinct but comprehensive answer – I think further details might make sense as a top-level post but would probably take this thread in too many different directions. I think there's something useful for people with really different worldviews being able to do a quick exchange of the high level stuff without immediately diving into the weeds.
“If it’s posting like a LWer, and replying like a LWer, and acting like a LWer, then it’s a LWer; doesn’t matter what its internal state is.” I’d be willing to give up a small swath of conversational space, to purchase that.
Indeed.
I still don’t like the idea of having a particular set of hypotheses being taboo; I can buy an instrumental argument that we might want to make an exception around triggeredness that’s similar to the exceptions around positing that someone might have a lot of unacknowledged racist biases—
Exactly. We can make it even more stark:
“Have you considered that maybe you only think that because you’re just really stupid? What’s your IQ?”
“Have you considered that maybe you’re a really terrible person and a sociopath or maybe just evil?”
[to a woman] “You seem angry, is it that time of the month for you?”
etc.
We don’t say these sorts of things. Any of them might be true. But we don’t say them, because even if they are true, it’s none of our business. Really, the only hypothesis that needs to be examined for “why person X is saying thing Y” is “they think that it’s a good idea to say thing Y”.
Note that this is a very broad class of hypotheses. It’s much broader,...
Well, empirically, we also say the stuff about being triggered. I’m saying that we shouldn’t say either sort of thing.
A norm (even a temporary one) in which you can do that, but I can't ask for evidence, seems like it ends up allowing whichever of us is more interested in the exercise to snipe at the other unchallenged pretty much indefinitely.
To be clear on my view (as a mod), it is fine for you to ask for evidence (note that habryka did as well, earlier), and also fine for Duncan to disengage. I suspect that the world where he disengages is better than the one where he responds, primarily because it seems to me like handling things in a de-escalatory way often requires not settling smaller issues until more fundamental ones are addressed.
I do note some unpleasantness here around the question of who gets "the last word" before things are handled a different way, where any call to change methods while a particular person is "up" is like that person attempting to score a point, and I frown on people making attempts to score points if they expect the type of conversation to change shortly.
As a last point, the word "indefinitely" stuck out to me because of the combination with "temporary" earlier, and I note that the party who is more interested in repeatedly doing the 'disengage until facilitated conversation' move is also opening themselves up to sniping in this way.
There's a set of moderation-challenges that the post doesn't delve into, which are the ones I struggle most with – I don't have a clear model of what it'd mean to solve these, whereas the challenges pointed to in the OP seem comprehensible, just hard. I'm interested in thoughts on this.
1. Difficulty with moderating just-over-the-line comments, or non-legibly-over-the-line comments
The most common pattern I run into, where I'm not sure what to do, is patterns of comments from a given user that are either just barely over the line, or where each given comment is under the line, but so close to a line that repetition of it adds up to serious damage – making LW either not fun, or not safe feeling.
The two underlying generators I'm pointing at here seem to be:
The most common pattern I run into, where I’m not sure what to do, is patterns of comments from a given user that are either just barely over the line, or where each given comment is under the line, but so close to a line that repetition of it adds up to serious damage – making LW either not fun, or not safe feeling.
What I used to do on the #lesswrong IRC was note every comment like this in a journal, and then once I found myself really annoyed with someone, I'd open the journal to help establish the pattern. I'd also look at people's individual chat history to see if there was a consistent pattern of them doing the thing routinely, or if it was something they just sometimes happened to do.
I definitely agree this is one of the hardest challenges of moderation, and I pretty much always see folks fail it. IMO, it's actually more important than dealing with the egregious violations, since those are usually fairly legible and just require having a spine.
My most important advice would be don't ignore it. Do not just shrug it off and say "well nothing I can do, it's not like I can tell someone off for being annoying". You most certainly can and should for many kinds of 'annoying'. The alternative is that the vigor of a space slowly gets sucked out by not-quite-bad-actors.
Not actually approaching a discussion collaboratively.
Not being up-to-speed enough to contribute to a discussion.
Yeah, these are two of the things that have been turning me off from trying to keep up with comments the most. I don't really have any ideas short of incredibly aggressive moderation under a much higher bar for comments and users than has been set so far.
This article gave me a bunch of food for thought. I don't think it addresses my main cruxes re: previous disagreements I've had with Duncan, but it definitely gave me some new ideas and new vantage points to view old ones.
(Note 1: I won't be commenting on Duncan's comments on Benquo's comments because I'm still in the process of chatting with Benquo about it. I have a number of relevant disagreements with both Ben and Duncan, and hope to resolve those disagreements at some point, but meanwhile I don't have the bandwidth I'd require to engage with both of them at once.)
Some thoughts so far:
1. Hierarchy of Goals
The hierarchy of "purposes of LessWrong" that Duncan describes is roughly the same one I'd describe. A concern or difference in framing I have here is that several of the stages reinforce each other in a cyclical fashion.
I'm not quite sure you can cleanly prioritize truth over truthseeking culture.
If our culture isn't outputting useful accumulation of knowledge, then it's failing at our core mission. Definitely. But in the situations where truthseeking-culture vs truth seem to be in conflict, I think it...
To me it seems there is a certain tension between In Defense of Punch Bug and this post.
As I understand it, while "In Defense of Punch Bug" in some parts argues that people should not spend a huge amount of attention on basically random noise, this post calls for very high attention to detail on the part of moderators, like
"be attentive enough to be the one to catch the slipped-in hidden assumption in the middle of the giant paragraph, and to point out the uncharitable summary even when it’s carefully and politely phrased, and to follow the subthread all the way down to its twentieth reply".
This sounds surprisingly similar to a call for people to diligently watch for microaggressions, so I would just point to In Defense of Punch Bug for a reasonable counterargument.
Hm. I am in favor of high standards of discourse but I am remarkably resistant to Duncan imposing his high standards of discourse because he has remarkably different social/discourse intuitions from me.
I think it's helpful to have some sort of system to make sure that every comment gets read, but I think the ownership checkbox is potentially a bad way to do it. I'm mostly thinking of the incentives for moderators, here; it seems highly plausible that someone comes across a comment that feels off but they don't really know how to handle it; this means they don't want to say "yep, this one's mine" (because they don't want to handle it), but also feel that not checking it is wrong somehow.
One of the things that I had considered when proposing the Sunshine Regiment was a 'report' button on every comment, available to all users, which was basically a "something about this should be handled by someone with more time and energy"--not necessarily "this post should get removed" but "oh man, I really want to see how Vaniver would respond to this comment," or something.
I also suspect there's something like Stack Exchange's edit queue that could be good, where several of the important pieces are 1) multiple eyes on any particular thing and 2) tracking how much people have eyes on things and 3) tracking when people's judgments disagree.
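To make that shape concrete, here is a minimal sketch of what such a report queue might look like, purely as an illustration of the "multiple eyes, track who looked, track disagreements" idea; all of the names here (Report, ReportQueue, file_report, record_verdict) are hypothetical and don't come from the actual LessWrong or Stack Exchange codebases, and the real mechanism could look quite different.

```python
# Hypothetical sketch of a report queue: users flag comments, moderators record
# verdicts, and the system surfaces comments where moderators disagree.
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class Report:
    comment_id: str
    reporter: str
    note: str = ""  # e.g. "I really want to see how Vaniver would respond to this"


@dataclass
class ReportQueue:
    # comment_id -> list of reports, so multiple flags on one comment are visible
    reports: dict = field(default_factory=lambda: defaultdict(list))
    # comment_id -> {moderator: verdict}, so disagreements between mods are tracked
    verdicts: dict = field(default_factory=lambda: defaultdict(dict))

    def file_report(self, comment_id: str, reporter: str, note: str = "") -> None:
        self.reports[comment_id].append(Report(comment_id, reporter, note))

    def record_verdict(self, comment_id: str, moderator: str, verdict: str) -> None:
        self.verdicts[comment_id][moderator] = verdict

    def disagreements(self):
        """Comments where moderators reached different verdicts."""
        return [
            cid for cid, vs in self.verdicts.items()
            if len(set(vs.values())) > 1
        ]


# Usage: a user flags a comment, two moderators weigh in, and the disagreement surfaces.
queue = ReportQueue()
queue.file_report("c123", "some_user", "feels off, not sure how to handle")
queue.record_verdict("c123", "mod_a", "fine as is")
queue.record_verdict("c123", "mod_b", "needs a warning")
print(queue.disagreements())  # ['c123']
```

The point of the sketch is just that the three pieces named above fall out naturally: reports give you multiple eyes, the per-moderator verdict map tracks who has looked at what, and the disagreement query tells you where judgments diverge.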
I think one of the main things I updated on is that a model I had of discussions with users (that it's important to hug the most important part of the query, i.e. chasing and focusing on cruxes) is not applicable to mod actions (where it's important to catch each of the norm violations, instead of simply the most serious violation).
There are a handful of reasons for this, the most serious of which is "that which is not punished is allowed"--Raemon made a comment on Benquo's top comment that pointed out what Raemon saw as a defect in Benquo's presentation, and separated that from a criticism of Benquo's argument. But that meant that later, when I criticized the argument, Benquo responded with 'but Raemon wasn't criticizing my argument.' If I only point out what seems to me like the most serious error in a post, then there's a (reasonable!) argument that all the things that went unsaid weren't at the threshold of correction, and if I only point out the error that's easier to respond to, the same sort of thing goes through.
I really like when people put effort into providing alternatives-to/critiques-of how I work, so thanks for this.
It doesn’t mean I can always reply promptly, alas. Right now I’m in the process of taking all my finals (so I can get a visa and move to the Bay), and this will continue for a few weeks :(. Oli’s just coming back from the same, while Ray’s holding the fort.
It’s also regularly the case that doing detailed public write-ups of considerations around a decision/approach isn’t the right use of effort relative to just *building the product*, and that ap...
This seems like it's pointing at a good thing. As a data point, the proposed responses to my comments would all have seemed friendly and helpful to me, and I'd have had an easy time engaging with the criticism.
His draft response to my VW comment would probably have motivated me to add a note to my initial comment deprecating that reference, or edit it out entirely (with a note clarifying that an edit had been made).
The asymmetry comment he pointed to as helpful (thanks, Vaniver!) actually would have been helpful, if SilentCal hadn't already taken the initiative (thanks, SilentCal!) and clarified what he took the term to mean.
I'm concerned that the described examples of holding individual comments to high epistemic standards don't seem to necessarily apply to top-level posts or linked content. One reason I think this is bad is that it is hard to precisely critique something which is not in itself precise, or which contains metaphor, or which contains example-but-actually-pointing-at-a-class writing where the class can be construed in various different ways.
Critique of fuzzy intuitions and impressions and feelings often involves fuzzy intuitions and impressions and fe...
There is a potential discussion to be had someday about whether upvotes and downvotes themselves can be considered to be "in error" given the specific milieu of LW, or whether upvotes and downvotes are sacred à la free speech. Looking at votes on this and other threads, I have frequently had the sense that people were Doing It (Objectively) Wrong, not fundamentally as in wrong-opinions, but culturally as in participation-in-this-culture-means-precommitting-to-supporting-X-and-othering-Y.
I'm aware that there is a strongly-felt libertarian ar...
In short, what makes LessWrong different from everywhere else on the internet is that it’s a place where truth comes first,
Because all those science, logic, maths and philosophy forums are just full of lies.
FYI, AFAICT I am the only person who downvoted it (bringing it from 3 to -3). As long as we're having the meta-est of conversations, I wanted to talk about why.
The single most common criticism I hear from good writers on LW is that the comments feel nitpicky in a way that isn't actually helpful, and that you have to defend your points against a hostile-seeming crowd rather than collaboratively building something.
This comment seemed to be doing two things in that direction: a) it was a bit aggressive as you noted, b) the point it was making just... didn't seem very relevant. Yes, there are places on the internet aspiring to similar things as LW. But a reasonable read on your statement was "most of the internet isn't trying to do this much at all, LW is different." While some humility is nice, I really don't understand your point better now that you've rephrased it to the 98th percentile thing.
So the main impact of this sort of comment is both to spend time on things that don't matter much and to increase the latent hostility of the thread, and I think people don't appreciate enough how bad that is for discourse. Both of those seem to me like things that are better silently downvoted than engaged with.
Not only are there places on the internet aspiring to find the truth, there are, in fact, very few places that are not aspiring to find it.
?? If you look at the Alexa top 50 sites for the US, how many of them are about aspiring to find the truth? I count between 3 and 4 (Google, Wikipedia, and Bing for sure, Wikia maybe).
Another strong post from Duncan Sabien (aka Conor_Moreton).