I find it unpleasant that you always bring your hobbyhorse in, but in an "abstract" way that doesn't allow discussing the actual object-level question. It makes me feel attacked in a way that allows for no legal recourse to defend myself.
[Written as an admin]
First and foremost, LW is a space for intellectual progress about rationality and related topics. Currently, we don't ban people for being fixated on a topic, or 'darkly hinting,' or posts they make off-site, and I don't think we should. We do keep a careful eye on such people, and interpret behavior in 'grey areas' accordingly, in a way that I think reflects both good Bayesianism and good moderation practice.
In my favorite world, people who disagree on object-level questions (both political and non-political) can nevertheless civilly discuss abstract issues. This favors asymmetric weapons and is a core component of truth-seeking. So, while hurt feelings and finding things unpleasant are legitimate and it's worth spending effort optimizing to prevent them, we can't give them that much weight unless they differentiate the true and the untrue.
That said, there are ways to bring up true things that as a whole move people away from the truth, and you might be worried about agreements on abstractions being twisted to force agreement on object-level issues. These are hard to fight, and frustrating if you see them and others don...
That is very reasonable and fair. I think that in practice I won't write such a compilation post any time soon, because (i) I already created too much drama, (ii) I don't enjoy writing call-out posts and (iii) my time is much better spent working on AI alignment.
Upon reflection, my strong reaction was probably because my System 1 is designed to deal with Dunbar-number-size groups. In such a tribe, one voice with an agenda which, if implemented, would put me in physical danger is already a notable risk. However, in a civilization of millions the significance of one such voice is microscopic (unless it's very exceptional in its charisma or otherwise). On the other hand, AGI is a serious risk, and it's one that I'm much better equipped to affect.
Sorry for causing all this trouble! Hopefully putting this analysis here in public will help me to stay focused in the future :)
one voice with an agenda which, if implemented, would put me in physical danger
Okay, I think I have a right to respond to this.
People being in physical danger is a bad thing. I don't think of myself as having a lot of strong political beliefs, but I'm going to take a definite stand here: I am against people being in physical danger.
If someone were to present me with a persuasive argument that my writing elsewhere is increasing the number of physical-danger observer-moments in the multiverse on net, then I would seriously consider revising or retracting some of it! But I'm not aware of any such argument.
For what it’s worth, it seems to me that the argument “this writing puts me in physical danger” is absurd as applied to this particular case.
However, as far as I can tell (having re-read her comment several times), that’s not quite the argument Vanessa was making. What she seems to be saying is not “this writing puts me in physical danger” but “this writing expresses a viewpoint whose preferred agenda, if implemented, puts me in physical danger”.
Now, it’s a somewhat subtle distinction. Nevertheless, there does seem to be a difference; and insofar as there is a difference, the latter argument is (it seems to me) rather worse and more dangerous.
Why? Well, firstly, because upon a casual reading it can easily appear to be the first argument—as we just saw. This, of course, sets up a perfect motte-and-bailey situation: if people read the argument and come away convinced of the former point, but then someone challenges the argument’s author, the author can protest that what they actually wrote was the latter (and of course that’ll be true).
This in itself is not blameworthy per se (though it suggests that anyone making the latter point ought to be quite careful and explicit in distinguishing…
...I think that what you’re saying here is mostly right, but I feel like it leaves out an important facet of the problem.
Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks. This is a kind of meta-attack or threat, like concentrating troops on a country’s border.
The situation is often asymmetrical in particular contexts - given existing power structures & official narratives, some such meta-attacks are easier to perform than others - and in particular, proposals to alter the official narrative can look more “political” than moves in the opposite direction, even when the official narrative is obviously not a reasonable prior.
This problem is aggravated by a norm of avoiding “political” discourse - if one side of an argument is construed as political and the other isn’t, we get a biased result that favors & intensifies existing power arrangements. It’s also aggravated by norms of calm, impersonal discourse, since that’s easier to perform if you feel safe.
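To make the "message length" framing concrete, here is a minimal sketch in terms of the standard Shannon code-length identity (an illustrative gloss, not necessarily what the comment above has in mind): under a shared prior $P$ encoding the existing narratives, the cost of stating a proposal $m$ is roughly

$$\ell(m) \approx -\log_2 P(m \mid \text{shared context}),$$

so speech that raises the background probability of a class of proposals lowers the number of bits needed to advance them, and speech that lowers that probability raises it. The asymmetry described above is then just the observation that the shared prior is not neutral.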
Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks.
This is true; indeed, it’s difficult to see how it can fail to be true, even in the absence of any awareness or intention on anyone’s part. Yet it seems an exceedingly abstract basis on which to consider even censuring or discouraging certain sorts of speech, much less punishing or banning it.
I agree. I think this makes discouraging political or heated speech hard to do without introducing substantively harmful bias. That’s the context in which Zack’s speech can create a problem for Vanessa (and in which others’ speech created a structurally similar problem for Zack!).
Well, as for “heated” speech, I think discouraging that is easy enough. But where “political” is concerned, my point is exactly that the perspective you take makes it difficult to see where “political” ends, and “non-political” begins—indeed, it does not seem to me to be difficult to start from that view, and construct an argument that all speech is “political”! (And if I understand Zack’s point correctly, he seems to be saying that this has, in essence, already happened, on one particular topic.)
That's understandable, but I hope it's also understandable that I find it unpleasant that our standard Bayesian philosophy-of-language somehow got politicized (!?), such that my attempts to do correct epistemology are perceived as attacking people?!
Like, imagine an alternate universe where posts about the minimum description length principle were perceived as an attack on Christians (because atheists often argue that Occam's razor implies that theories about God are unnecessarily complex), and therefore somewhat unseemly (because politics is the mind-killer, and criticizing a popular religion has inextricable political consequences).
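For reference, a standard statement of the minimum description length principle (included only as background for the analogy): prefer the hypothesis $H$ minimizing the total code length

$$L(H) + L(D \mid H),$$

which is why it gets read as a formalization of Occam's razor: a hypothesis that postulates more structure pays a larger $L(H)$ and has to earn it back by compressing the data $D$ better.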
I can see how it would be really annoying if someone on your favorite rationality forum wrote a post about minimum description length, if you knew that their work about MDL was partially derived from other work (on a separate website, under a pseudonym) about atheism, and you happened to think that Occam's razor actually doesn't favor atheism.
Or maybe that analogy is going to be perceived as unfair because we live in a subculture that pattern-matches religion as "the bad guys" and atheism as "the good guys"? (I could try to protest, "But, but, you could…
...I do take exception to the assumption in this post that decoupling norms are the obvious and only correct way to deal with things.
Zack didn't say this. What he said was:
Like, maybe statistics is part of the common interest of many causes, such that, as a matter of local validity, you should assess arguments about statistics on their own merits in the context that those arguments are presented, without worrying about how those arguments might or might not be applied in other contexts?
Which is compatible with thinking more details should be taken into account when the statistical arguments are applied in other contexts (in fact, I'm pretty sure this is what Zack thinks).
Discussion of abstract epistemology principles, which generalize across different contexts, is perhaps most of the point of this website...
Your points 1, 2, and 3 have nothing to do with the epistemic problem of decoupling vs contextualizing; they have to do with political tradeoffs in moderating a forum, and they apply to people doing contextualization in their analysis, too. I hate that the phrase "contextualizing norms" is being used to conflate "all sufficiently relevant information should be used" with "everything should be about politics".
I sometimes round this off in my head to something like “pure decouplers think arguments should be considered only on their epistemic merits, and pure contextualizers think arguments should be considered only on their instrumental merits”.
The proper words for that aren't decoupling vs contextualizing; they're denotative vs enactive language. That's an axis orthogonal to how many relevant contextual factors are supposed to be taken into account. You can require lots of contextual factors to be taken into account in epistemic analysis, or require certain enactments to be made independent of context.
Note, the original post makes the conflation I'm complaining about here too!
My basic read of Zack’s entire post was him saying over and over “Well there might be really bad instrumental effects of these arguments, but you have to ignore that if their epistemics are good.” And my immediate reaction to that was “No I don’t, and that’s a bad norm.”
It really looks like you’re defending the “appeal to consequences” as a reasonable way to think, and a respectable approach to public epistemology. But that seems so plainly absurd that I have to assume that I’ve misunderstood. What am I missing?
It's important to distinguish the question of whether, in your own personal decisionmaking, you should ever do things that aren't maximally epistemically good (obviously, yes); from the question of whether the discourse norms of this website should tolerate appeals to consequences (obviously, no).
It might be morally right, in some circumstances, to pass off a false mathematical proof as a true one (e.g. in a situation where it is useful to obscure some mathematical facts related to engineering weapons of mass destruction). It's still a violation of the norms of mathematics, with good reason. And it would be very wrong to argue that the norms of mathematics should change to accommodate people making this (by assumption, morally right) choice.
To summarize: you're destroying the substrate. Stop it.
I think my actual concern with this line of argumentation is: if you have a norm of "If 'X' and 'X implies Y' then 'Y', EXCEPT when it's net bad to have concluded 'Y'", then the werewolves win.
The question of whether it's net bad to have concluded 'Y' is much, much more complicated than the question of whether, logically, 'Y' is true under these assumptions (of course, it is). There are many, many more opportunities for werewolves to gum up the works of this process, making the calculation come out wrong.
If we're having a discussion about X and Y, someone moves to propose 'Y' (because, as it has already been agreed, 'X' and 'X implies Y'), and then someone else says "no, we can't do that, that has negative consequences!", that second person is probably playing a werewolf strategy, gumming up the works of the epistemic substrate.
If we are going to have the exception to the norm at all, then there has to be a pretty high standard of evidence to prove that adding 'Y' to the discourse, in fact, has bad consequences. And, to get the right answer, that discussion itself is going to have to be up to high epistemic standards. To be trustworthy, it's going to have to make logical inferences…
...If we are going to have the exception to the norm at all, then there has to be a pretty high standard of evidence to prove that adding 'Y' to the discourse, in fact, has bad consequences.
I want to note that LW definitely has exceptions to this norm, if only because of the boring, normal exceptions. (If we would get in trouble with law enforcement for hosting something you might put on LW, don't put it on LW.) We've had in the works (for quite some time) a post explaining our position on less boring cases more clearly, but it runs into difficulty with the sort of issues that you discuss here; generally these questions are answered in private in a way that connects to the judgment calls being made and the particulars of the case, as opposed to through transparent principles that can be clearly understood and predicted in advance (in part because, to extend the analogy, this empowers the werewolves as well).
What specifically do you mean by "werewolf" here & how do you think it relates to the way Jessica was using it? I'm worried that we're getting close to just redefining it as a generic term for "enemies of the community."
That's not what I meant. I meant specifically someone who is trying to prevent common knowledge from being created (and more generally, to gum up the works of "social decisionmaking based on correct information"), as in the Werewolf party game.
Worth noting: "werewolf" as a jargon term strikes me as something that is inevitably going to get collapsed into "generic bad actor" over time, if it gets used a lot. I'm assuming that you're thinking of it sort of as in the "preformal" stage, where it doesn't make sense to over-optimize the terminology. But if you're going to keep using it I think it'd make sense to come up with a term that's somewhat more robust against getting interpreted that way.
(random default suggestion: "obfuscator". Other options I came up with required multiple words to get the point across and ended up too convoluted. There might be a fun shorthand for a type of animal or mythological figure that is a) a predator or parasite, b) relies on making things cloudy. So far I could just come up with "squid" due to ink jets, but it didn't really have the right connotations)
And when you see someone explicitly pushing the gray area by trying to get you to accept harmful situations by appealing to that sacred value
Um, in context, this sounds to me like you're arguing that by writing "Where to Draw the Boundaries?" and my secret ("secret") blog, I'm trying to get people to accept harmful situations? Am I interpreting you correctly? If so, can you explain in detail what specific harm you think is being done?
Thanks, these are some great points on some of the costs of decoupling norms! (As you've observed, I'm generally pretty strongly in favor of decoupling norms, but policy debates should not appear one-sided.)
someone brings it up all the time
I would want to distinguish "brings it up all the time" in the sense of "this user posts about this topic when it's not relevant" (which I agree is bad and warrants moderator action) versus the sense of "this user posts about this topic a lot, and not on other topics" (which I think is generally OK).
If someone is obsessively focused on their narrow special interest—let's say, algebraic topology—and occasionally comments specifically when they happen to think of an application of algebraic topology to the forum topic, I think that's fine, because people reading that particular thread get the benefit of a relevant algebraic topology application—even if looking at that user's posting history leaves one with an unsettling sense of, "Wow, this person is creepily obsessed with their hobbyhorse."
tries to twist other people's posts towards a discussion of their thing
I agree that this would be bad, but I think it's usually possible to distinguish…
...Regarding "Kolmogorov complicity", I just want to make clear that I don't want to censor your opinion on the political question. Such censorship would only serve to justify your notion that "we only refuse to believe X because it's heresy, while any systematic truthseeker would believe X", which is something I very much disagree with. I might be interested in discussing the political question if we were allowed to do it. It is the double bind of, not being able to allowed to argue with you on the political quesiton while having to listen to you constantly hinting at it, is what bugging me. Then again, I don't really have a good solution.
I had a hard time tracking down the referent of the abuse mentioned in the parent post.
It does seem that the concept was employed in a political context. To my brain, politicizing is a particular kind of use. I get that if you effectively employ any kind of argument towards a political end, it becomes politically relevant. However, it would be weird if any tool so employed automatically became part of politics.
If beliefs are to pay rent, and this particular point is established / marketed in order to establish another, specific point, I could get on board with an expectation to disclose such "financial ties". Up to this point I know that this belief is sponsored by another belief, but I do not know which belief, and I don't fully get why it would be troublesome to reveal this belief.
… a “sinister context”?!
I am, frankly, appalled to read this sort of thing on Less Wrong. You are, in all seriousness, attacking someone’s writings about abstract epistemology and Bayesian inference, on Less Wrong, of all places (!!), not because there is anything at all mistaken about them, but because of some alleged “sinister context” that you are bringing in from somewhere else. To call this “not a fair objection” would be a gross understatement. It is shameful.
If we are supposed to be a community, then it should be normal for us to consider each other’s feelings, even when there was no norm violation per se involved, not so?
Absolutely not.
This sort of attitude is tremendously corrosive to productive discussion and genuine truth-seeking. We have discussed this before… and I am genuinely disappointed that this sort of thing is happening again.
Ugh, because productive discussion happens between perfectly dispassionate robots in a vacuum, and if I’m not one then it is my fault and I should be ashamed?
As discussed in the linked thread—it is none of my business, nor the business of any of your interlocutors, whether you are, or are not, a “perfectly dispassionate robot in a vacuum”, when it comes to discussions on subjects like the OP. That is not something which should enter into the discussion at all; it is simply off-topic.
If we permit the introduction of such questions as whether you feel uncomfortable (about the topic, or any on-topic claims) into discussions of abstract epistemology, or Bayesian inference, or logic, etc., when that discomfort in no way bears on the truth or falsity of the claims under discussion, then we might as well close up shop, because at that point, we have bid good-bye even to the pretense of “rationality”, much less the fact of it.
And if the “predominant opinion” disagrees—so much the worse for predominant opinion; and so much the sadder for Less Wrong.
Edit: And all this is, of course, not even mentioning your conflation of “I am uncomfortable” with insinuating comments about “sinister context”, and implications of wrongdoing on Zack’s part!
Alright, let's suppose it's off-topic in this thread, or even on this forum. But is there another place within the community's "discussion space" where it is on-topic? Or do you think such a place shouldn't exist at all?
I just want to chime in quickly to say that I disagree with Said here pretty heavily, but also don't know that I agree with any other single person in the conversation, and articulating what I actually believe would require more time than I have right now.
Generally, if you want to talk about how LW is moderated or unpleasant behavior happening here, you should talk to me. [If you think I'm making mistakes, the person to talk to is probably Habryka.] We don't have an official ombudsman, and perhaps it's worth putting some effort into finding one.
Firstly, I have always said (and this incident has once again reinforced my view of this) that “we”, which is to say “rationalists”, should not be a “community”.
But, of course, things are what they are. Still, it is hardly any of my business, as a participant of Less Wrong, what discussions you have elsewhere, on some other forum. Why should it be?
Of course, it would be quite beyond the pale if the outcomes of those discussions were used in deciding (by those who have the authority to decide these things—basically, I mean the admins of Less Wrong) how to treat someone here!
In short, I am saying: in other places, discuss whatever you want to discuss (assuming your discussions are appropriate thereto… but, in any case—not my business). None of that should affect any discussions here. “I propose to treat <Less Wrong participant X> in such-and-such a way—why? because he said or did so-and-so, in another place entirely”—this ought not be acceptable or tolerated.
It sounds more like a defense of discussing a political specific by means of abstraction.
Zack said:
Like, maybe statistics is part of the common interest of many causes, such that, as a matter of local validity, you should assess arguments about statistics on their own merits in the context that those arguments are presented, without worrying about how those arguments might or might not be applied in other contexts?
What, realistically, do you expect the atheist—or the racist, or me—to do? Am I supposed to just passively accept that all of my thoughts about epistemology are tainted and unfit for this forum, because I happen to be interested in applying epistemology to other topics (on a separate website, under a pseudonym)?
Which isn't saying specifics should be discussed by discussing abstracts; it says abstracts should be discussed, even when part of the motivation for discussing the abstract is specific. Like, people should be able to collaborate on statistics textbooks even if they don't agree with their co-authors' specific applications of statistics to their non-statistical domains. (It would be pretty useless to discuss abstracts if there were no specific motivations, after all...)
Other abstract topics should be avoided, if the relevant examples are politically-charged and the abstraction doesn't easily encompass other points of view.
Why?
Choosing to primarily discuss abstracts which happen to support a specific position, without disclosing that tie, is not OK.
How exactly does this differ from, "if the truth is on the wrong side politically, so much the worse for the truth"? Should we limit ourselves to abstract discussions that don't constrain our anticipations on things we care about?
I've noted in at least some of your posts that I don't find your abstractions very compelling without examples, and that I don't much care for the examples I can think of to reify your abstractions.
"Where to Draw the Boundaries?" includes examples about dolphins, geographic and political maps, poison, heaps of sand, and job titles. In the comment section, I gave more examples about Scott Alexander's critique of neoreactionary authors, Müllerian mimickry in snakes, chronic fatigue syndrome, and accent recognition.
I agree that it's reasonable for readers to expect authors to provide examples, which is why I do in fact provide examples. What do you want from me, exactly??
Blocking Zack isn't an appropriate response if, as Vanessa thinks, Zack is attacking her and others in a way that makes these attacks hard to challenge directly. Then he'd still be attacking people even after being blocked, by saying the things he says in a way that influences general opinion.
Feelings are information, not numbers to maximize.
It's possible that your actual concern is with "I feel" language being used for communication.
You're right that "feelings are information, not numbers to maximize" and that hiding a user's posts is often not a good solution because of this.
I don't think Christian is making this mistake though.
When someone is suffering from an injury they cannot heal, there are two problems, not one. The first is the injury itself — the broken leg, the loss of a relationship, whatever it may be. The second is that incessant alarm saying “THIS IS BAD THIS IS BAD THIS IS BAD” even when there’s nothing you can do.
If you want to help someone in this situation, it’s important to distinguish (and help them distinguish) between the two problems and come to an agreement about which one it is that you should be trying to solve: are we trying to fix the injury here, or are we just trying to become more comfortable with the fact that we’re injured? Even asking this question can literally transform the sensation of pain, if the resulting reflection concludes “yeah, there’s nothing else to do about this injury” and “yeah, actually the sensation of pain itself isn’t a problem”.
Earlier in this discussion, Vanessa said “I feel X”, and the response she got was taking the problem to be ab...
Quick clarification: That is not what that feature does. It currently only prevents users from commenting on any of your blogposts. I feel quite hesitant to make content-blocking too easy on LessWrong for a variety of reasons, though I am not fundamentally opposed to it. Will see whether I can write my full thoughts up sometime soon.
Actually, I would like clarification from the LW admins on this. As I understood it, the “banned user” feature prevents the given user from commenting on your posts (and… responding to your comments, maybe? I’m not clear on this part either). I am not aware of it doing anything to prevent you from seeing the “banned” user’s posts/comments which they post elsewhere.
That having been said, GreaterWrong does have an “ignore user” feature (which automatically collapses comments from a given user). (Being GW-specific, of course, it does nothing for you if you prefer to use the official site to browse LW content.)
Note: you can generally talk about political stuff on your personal blog section. Part of the point of the frontpage/personal-blog distinction is so that there can be a bit of soft-pressure there without actually preventing people from talking about things.
There are certain areas that we might need to make individual judgement calls about (see Vaniver's comment elsethread). And in general when discussing hot-button political issues I'd suggest you reflect on your goals and life choices (since I think it's an easy domain to think you're discussing something important when you're mostly not). But that's different from a ban.
the meta level is too vague. That is, the error is in the way the abstract reasoning is applied to case X (it's just not the right model), rather than in the abstract reasoning itself
Why not write a meta-level post about the general class of problem for which the abstract reasoning doesn't apply? That could be an interesting post!
I'm guessing you might be thinking something along the lines of, "The 'draw category boundaries around clusters of high density in configuration space' moral doesn't apply straightforwardly to things that are socially constructed by collective agreement"? (Examples: money, or Christmas. These things exist, but only because everyone agrees that they exist.)
I personally want to do more thinking about how social construction works (I have some preliminary thoughts on the matter that I haven't finished fleshing out yet), and might write such a post myself eventually!
Said comments fairly reliably and prolifically on this topic and it feels important to note that no, this isn’t the default culture people should be expecting to be consensus on LW
Could you expand a bit on what you’re referring to when you say “this”? I’ve said a few different things in my comments on this topic; it seems important to clarify which things you don’t agree with, or don’t judge to be the consensus (or the intended consensus), etc.
I agree that it's possible for feelings to be relevant (or for factual beliefs to be relevant). But discouragement of discussion shouldn't be enacted through feelings; feelings should just be info that prompts further activity, which might have nothing to do with discouragement of discussion. So there is no issue with Vanessa's first comment and parts of the rest of the discussion that clarified the situation. A lot of the rest of it, though, wasn't constructive in building any sort of valid argument that rests on the foundation of info about feelings.
(Posting this as a top-level comment because there’s no obvious place downthread to put it, and also so that it doesn’t get lost in the shuffle.)
Everyone who is talking about whether it is desirable to consider the feelings of your interlocutors, and what to do about those feelings, etc.—on all sides of the discussion—would do well to read carefully the comments section of this old Yvain post. Pay special attention to the comments by Vladimir_M.
Wow, it's neat that the LW 2 codebase gives you tools to move a derailed thread to its own post! Good job, whoever wrote that feature!
Right, I agree that it doesn't sound difficult from a web-development perspective, but I also think that only praising difficult-to-implement features would create the wrong incentives.
In the move, an internal link in the thread broke. There is also no hint in the original thread that a meta-level discussion was spawned.
I moved the big meta-level comment thread from "Yes Requires the Possibility of No" over to here, since it seemed mostly unrelated to that top-level post. This not being on frontpage also makes it easier for people to just directly discuss the moderation and meta-level norms.