Zack_M_Davis

Thanks, I see it now. I think the insult has more to do with the relationship-terminating aspect (the part where Alice sets a lifetime birthday budget, or the moderator says he's changing his stance towards allegedly low-interpretive-effort comments going forward) than the mere tracking of time costs. When my busy friend says she can only talk on the phone for twenty minutes (and that affects what I want to talk about with our limited time), it doesn't feel insulting because the budget is just for that call, not our entire relationship.

Effort is a cost, not a benefit.

I'm not convinced Ben was making that mistake (which I expect him to also be attuned to noticing, because he wrote about it two years earlier): I read it as, given unusual but not praiseworthy-in-itself effort expenditure, it makes sense to flag it.

I'd rather judge on the standard of whether the outcome is good, rather than exclusively on the rules of behavior.

A key reason to favor behavioral rules over trying to directly optimize outcomes (even granting that enforcement can't be completely mechanized and there will always be some nonzero element of human judgement) is that act consequentialism doesn't interact well with game theory, particularly when one of the consequences involved is people's feelings.

If the popular kids in the cool kids' club don't like Goldstein and your only goal is to make sure that the popular kids feel comfortable, then clearly your optimal policy is to kick Goldstein out of the club. But if you have some other goal that you're trying to pursue with the club that the popular kids and Goldstein both have a stake in, then I think you do have to try to evaluate whether Goldstein "did anything wrong", rather than just checking that everyone feels comfortable. Just ensuring that everyone feels comfortable at all costs, without regard to the reasons why people feel uncomfortable or any notion that some reasons aren't legitimate grounds for intervention, amounts to relinquishing all control to anyone who feels uncomfortable when someone else doesn't behave exactly how they want.

Something I appreciate about the existing user ban functionality is that it is a rule-based mechanism. I have been persuaded by Achmiz and Dai's arguments that it's bad for our collective understanding that user bans prevent criticism, but at least it's a procedurally "fair" kind of badness that I can tolerate, not completely arbitrary tyranny. The impartiality really helps. Do you really want to throw away that scrap of legitimacy in the name of optimizing outcomes even harder? Why?

I think just trying to 'follow the rules' might not succeed at making everyone feel comfortable interacting with you

But I'm not trying to make everyone feel comfortable interacting with me. I'm trying to achieve shared maps that reflect the territory.

A big part of the reason some of my recent comments in this thread appeal to an inability or justified disinclination to convincingly pretend to not be judgmental is because your boss seems to disregard with prejudice Achmiz's denials that his comments are "intended to make people feel judged". In response to that, I'm "biting the bullet": saying, okay, let's grant that a commenter is judging someone; to what lengths must they go to conceal that, in order to prevent others from predictably feeling judged, given that people aren't idiots and can read subtext?

I think there's something much more fundamental at stake here, which is that an intellectual forum that's being held hostage to people's feelings is intrinsically hampered and can't be at the forefront of advancing the art of human rationality. If my post claims X, and a commenter says, "No, that's wrong, actually not-X because Y", it would be a non-sequitur for me to reply, "I'd prefer you engage with what I wrote with more curiosity and kindness." Curiosity and kindness are just not logically relevant to the claim! (If I think the commenter has misconstrued what I wrote, I could just say that.) It needs to be possible to discuss ideas without getting tone-policed to death. Once you start playing this game of litigating feelings and feelings about other people's feelings, there's no end to it. The only stable Schelling point that doesn't immediately dissolve into endless total war is to have rules and for everyone to take responsibility for their own feelings within the rules.

I don't think this is an unrealistically, superhumanly high standard. As you've noticed, I am myself a pretty emotional person and tend to wear my heart on my sleeve. There are definitely times as recently as, um, yesterday, when I procrastinate checking this website because I'm scared that someone will have said something that will make me upset. In that sense, I think I do have some empathy for people who say that bad comments make them less likely to use the website. It's just that, ultimately, I think that my sensitivity and vulnerability are my problem. Censoring voices that other people are interested in hearing would be making it everyone else's problem.

Was definitely not going to make an argument from authority, just trying to understand your world view.

Right. Sorry, I think I uncharitably interpreted "Do you think others agree?" as an implied "Who are you to disagree with others?", but you've earned more charity than that. (Or if it's odd to speak of "earning" charity, say that I unjustly misinterpreted it.)

the argument that persuaded me initially especially doesn't need to be good

Right. I tried to cover this earlier when I said "(a cleaned-up refinement of) my thought process" (emphasis added). When I wrote about eschewing "line[s] of reasoning other than the one that persuades me", it's "persuades" in the present tense because what matters is the justificatory structure of the belief, not the humdrum causal history.

you said 'thought', which maybe I should have criticized because it's not a thought. How annoyed you are by something isn't an intellectual position, it's a feeling. It's influenced by beliefs about the thing, but also by unrelated things

There's probably a crux somewhere near here. Your formulation of #4 seems bad because, indeed, my emotions shouldn't be directly relevant to an intellectual discussion of some topic. But I don't think that gives you license to say, "Ah, if emotions aren't relevant, therefore no harm is done by rewriting your comments to be nicer," because, as I've said, I think the nicewashing does end up distorting the content. The feelings are downstream of the beliefs and can't be changed arbitrarily.

Excellent question, thanks for commenting! (And for your patience.) The part of the original post that the Tweet is summarizing is the paragraphs after "Suppose I'm selling you some number of gold and silver bars [...]". As you observe, whether it's "lying" to use a category label depends on what the label is construed to "canonically" mean. The idea here is that, as far as signal processing goes, there's an isomorphism between "lying p% of the time" with respect to the maximally-informative categories, and choosing different categories that conceal information. So if the new categories aren't deceptive because the receiver knows about them, is lying therefore not deceptive if the receiver already has it "priced in" that the sender lies this-and-such proportion of the time? I discuss this problem further in "Maybe Lying Can't Exist?!" and "Comment on 'Deception is Cooperation'".
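To make the claimed isomorphism concrete, here is a toy numerical sketch (my own illustration, not the setup from the original post): a uniform four-valued state can be reported either with maximally fine labels that are randomized ("lies") some fraction p of the time, or with honest coarse labels that merge states in pairs. There is a lying rate p* at which the two channels convey exactly the same mutual information, i.e., probabilistic lying over fine categories and honest use of coarse categories conceal the same amount.

```python
import math


def mi_noisy_fine(p, n=4):
    """Mutual information (bits) between a uniform state v in {1..n} and a
    report that names v exactly with probability 1 - p, or gives a uniform
    random label with probability p ("lying p of the time" over fine labels)."""
    diag = (1 - p) + p / n  # probability of reporting the true label
    off = p / n             # probability of each specific wrong label

    def plog(x):
        return x * math.log2(x) if x > 0 else 0.0

    h_r_given_v = -(plog(diag) + (n - 1) * plog(off))
    return math.log2(n) - h_r_given_v  # H(R) = log2(n) by symmetry


def mi_coarse(n=4, k=2):
    """MI for always-truthful reports using k equal-sized coarse categories."""
    return math.log2(k)


# Bisect for the lying rate p* at which the noisy fine-grained channel
# conveys exactly as much information as the honest coarse channel (1 bit).
# mi_noisy_fine is decreasing in p: 2 bits at p = 0, 0 bits at p = 1.
lo, hi = 0.0, 1.0
target = mi_coarse()
for _ in range(60):
    mid = (lo + hi) / 2
    if mi_noisy_fine(mid) > target:
        lo = mid
    else:
        hi = mid
p_star = (lo + hi) / 2
print(f"lying rate matching the coarse channel's information: {p_star:.3f}")
```

The receiver who knows the lying rate and the receiver who knows the coarse category scheme end up with the same amount of information about the state, which is the sense in which neither channel is "more deceptive" than the other.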

Sorry, I still don't think I get it. (I could guess, but I probably wouldn't get it right.) In the footnote of the comment you link, Pace said he was allocating 2 hours to the thread, and in a later comment, he said he'd spent 30 minutes so far. A little unconventional, but seems OK to me? (Everyone faces the problem of how to budget their limited time and attention; being transparent about how you're doing it shouldn't make it worse.)

I don't get it. What's insulting about someone disclosing how much time they spent writing something?

and there should not reliably be retribution or counter-punishment by other commenters for them moderating in that way.

Great, so all you need to do is make a rule specifying what speech constitutes "retribution" or "counter-punishment" that you want to censor on those grounds.

Maybe the rule could be something like, "No complaining about being banned by a specific user (but commenting on your own shortform strictly about the substance of a post that you've been banned from does not itself constitute complaining about the ban)" or "No arguing against the existence of the user ban feature except in designated moderation threads (which get algorithmically deprioritized in the new Feed)."

It's your website! You have all the hard power! You can use the hard power to make the rules you want, and then the users of the website have a clear choice to either obey the rules or be banned from the site. Fine.

What I find hard to understand is why the mod team seems to think it's good for them to try to shape culture by means other than clear and explicit rules that could be neutrally enforced. Telling people to "stop optimizing in a fairly deep way" is not a rule because of how vague and potentially all-encompassing it is. Telling people to avoid "mak[ing] people feel judged or not" is not a rule because I don't have control over how other people feel.

"Don't tell people 'I'm judging you about X'" is a rule. I can do that.

What I can't do is convincingly pretend to be a person with a completely different personality such that people who are smart about subtext can't even guess from subtle details of my writing style that I might privately be judging them.

I mean, maybe I could if I tried very hard? But I have too much self-respect to try. If the mod team wants to force temperamentally judgemental people to convincingly pretend to be non-judgemental, that seems really crazy.

I know, the mods didn't say "We want temperamentally judgemental people to convincingly pretend to have a completely different personality" in those words; rather, Habryka said he wanted to "avoid a passive aggressive culture tak[ing] hold". I just don't see what the difference is supposed to be in practice.

Rudeness and offensiveness are, in the general case, two-place functions: text can be offensive to some particular reader, but short of unambiguous blatant insults, there's not going to be a consensus about what is "offensive", because people vary widely (both by personal disposition and vagarious incentives) in how easy they are to offend.

When it is denied that Achmiz's comments are offensive, the claim isn't that no one is offended. (That would be silly. We have public testimony from people who are offended!) The claim is that the text isn't rude in a "one-place" sense (no personal insults, &c.).

The reason that "one-place" rudeness is the relevant standard is because it would be bad if a fraction of easily-offended readers (even a substantial fraction—I don't think you can defend the adjective "overwhelming") could weaponize their emotions to censor expressions of ideas that they don't like.

For example, take Achmiz's January 2020 comment claiming that, "There is always an obligation by any author to respond to anyone’s comment along these lines. If no response is provided to (what ought rightly to be) simple requests for clarification [...] the author should be interpreted as ignorant."

The comment is expressing an opinion about discourse norms ("There is always an obligation") and a belief about what Bayesian inferences are warranted by the absence of replies to a question ("the author should be interpreted as ignorant"). It makes sense that many people disagree with that opinion and that belief (say, because they think that some of the questions that Achmiz thinks are good, are actually bad, and that ignoring bad questions is good). Fine.

But beyond mere disagreement, to characterize such a comment as offensive (because it criticizes people who don't respond to questions), is something I find offensive. (If you're thinking of allegedly worse behavior from Achmiz than this January 2020 comment, you're going to need to provide the example.) Sometimes people who use the same website as you have opinions or beliefs that imply that they disapprove of your behavior! So what? I think grown-ups should be able to shrug this off without calling for draconian and deranged censorship policies. The mod team should not be pandering to such pathetic cry-bullying.

Why do you have this position? (i.e., that comments aren't about impact).

Because naïvely optimizing for impact requires concealing or distorting information that people could have used to make better (more impactful) decisions in ways that can't realistically be anticipated by writers naïvely optimizing for impact.

Here's an example from Ben Hoffman's "The Humility Argument for Honesty". Suppose my neck hurts (coincidentally, after trying a new workout routine), and after some internet research, I decide I have neck cancer. The impact-oriented approach would call for me to do my best to convince my doctor I have neck cancer, to make sure that I get the chemotherapy I'm sure I need. The honesty-oriented approach would call for me to explain to my doctor the evidence and reasoning for why I think I have neck cancer.

Maybe there's something to be said for the impact-oriented approach if my self-diagnoses are never wrong. But if there's a chance I could be wrong, the honesty-oriented approach is much more robust. If I don't really have neck cancer and describe my actual symptoms, the doctor has a chance to help me discover my mistake.

Is your default model of LWians that most of them have this position?

No. But that's OK with me, because I don't regard "other people who use one of the same websites as me" as a generic authority figure.

it still seems like the causal stream here is clearly bad vibes -> people complain to harbyka -> Said gets in trouble?

Yes, that sounds right. As you've gathered, I want to delete the second arrow rather than altering the value of the "vibes" node.

What would be your honest probability assessment that a religious person reads this and actually goes that route

Sorry, phrasing it in terms of "someone focused on harm"/"a potential convert being warned" might have been bad writing on my part, because what matters is the logical structure of the claim, not whether some particular target audience will be persuaded.

Suppose I were to say, "Drug addiction is bad because it destroys the addict's physical health and ability to function in Society." I like that sentence and think it is true. But the reason it's a good sentence isn't because I'm a consequentialist agent whose only goal is to minimize drug addiction, and I've computed that that's the optimal sentence to persuade people to not take drugs. I'm not, and it isn't. (An addict isn't going to magically summon the will to quit as a result of reading that sentence, and someone considering taking drugs has already heard it and might feel offended.) Rather, it's a good sentence because it clearly explains why I think drug addiction is bad, and it would be dishonest to try to persuade some particular target audience with a line of reasoning other than the one that persuades me.

Deliberately inserting unhelpful vibes into your comment is like uploading a post with formatting that you know will break the editor and then being like "well the editor only breaks because this part here is poorly programmed, if it were programmed better then it would do fine". In any other context this would pattern-match to obviously foolish behavior. ("I don't look before crossing the sidewalk because cars should stop.")

I don't think those are good metaphors, because the function of a markup language or traffic laws is very different from the function of blog comments. We want documents to conform to the spec of the markup language so that our browsers know how to render them. We want cars and pedestrians to follow the traffic law in order to avoid dangerous accidents. In these cases, coordination is paramount: we want everyone to follow the same right-of-way convention, rather than just going into the road whenever they individually feel like it.

In contrast, if everyone writes the blog comment they individually feel like writing, that seems good, because then everyone gets to read what everyone else individually felt like writing, rather than having to read something else, which would probably be less informative. We don't need to coordinate the vibes. (We probably do want to coordinate the language; it would be confusing if you wrote your comments in English, but I wrote all my replies in French.)

the thing that was actually causally upstream of the details in Said's message [...] was that he thinks religion is dumb and bad, which influenced a parameter sent to the language-generation module that output the message, which made it choose language that sounded more harsh. [...] The vibe isn't an accidental by-product

Right, exactly. He thinks religion is dumb and bad, and he wrote a comment that expresses what he thinks, which ends up having harsh vibes. If the comment were edited to make the vibes less harsh, then it would be less clear exactly how dumb and bad the author thinks religion is. But it would be bad to make comments less clearly express the author's thoughts, because the function of a comment is to express the author's thoughts.

whatever you want to improve, more awareness of what's actually going on is going to be good

Absolutely. For example, if everyone around me is obfuscating their actual thoughts because they're trying to coordinate vibes, that distortion is definitely something I want to be tracking.

to just give a sense of my actual views on this, the whole thing just seems ridiculously backwards

The feeling is mutual?!
