Duncan Sabien (Inactive)

But I would definitely consider him top-decile rude and, idk, bruising in conversation within those communities; to me, and I think to others, he stands out as notably likely to offend or be damagingly socially oblivious.

I'm not going to carry on checking this thread; I mostly just wanted to drop my one top-level response.  But in response to this, my main trigger is something like "okay, how could I assess a question like this in contact with how I think about social dark matter and DifferentWorlds™?"

Mark Rosewater, head designer of Magic: the Gathering, is constantly fielding questions on his blog from players who are like "literally no one is in favor of [extremely popular thing], I've been to eight different game shops and five tournaments and talked about it with, no exaggeration, over a hundred people."

And I can see the mistake those people are making, because I'm outside of the information bubble they're caught in.  It's trickier to catch the mistake when you're inside the bubble.

Or, to put it another way: most of the people that like Nate's conversational style and benefit greatly from it and find it a breath of fresh air aren't here in the let's-complain-about-it conversation.

I feel bad posting this. It's a bit personal, or something. But he's writing a book, and talking to important people about it, so it matters. 

It does matter.  And by all accounts, it's going very well.  That's evidence upon which someone could choose to update (on, e.g., questions like "am I representative of the kind of people Nate's talking to, who matter with regards to this whole thing going well over the next six months?").

At the very least, I can confidently say that I know of no active critic-of-Nate's-style who's within an order of magnitude of having Nate's positive impact on getting this problem taken seriously.  Like, none of the people who are big mad about this are catching the ears of senators with their supposedly better styles.

This post seems to me like very strong evidence that Nate was absolutely correct to block Alex.

For context, I have a deep and abiding fondness for both Alex and Nate, and have spent the last several years off to the side sort of aghast and dismayed at the deterioration in their relationship.  I've felt helpless to bridge the gap, and have mostly ended up saying very little to either party about it.

But the above feels to me like a particularly grotesque combination of [petty] and [disingenuous], and it's unfortunately in-line with my sense that Alex has been something-like hounding Nate for a while. Actively nursing a grudge, taking every cheap opportunity to grind an axe, deliberately targeting "trash Nate's reputation via maximally uncharitable summaries and characterizations" rather than something like "cause people to accurately understand the history of our disagreement so they can form their own judgments," locating all of the grievance entirely within Nate and taking no responsibility for his own contributions to the dynamic/the results of his consensual interactions and choices, etc. etc. etc.  I've genuinely been trying to cling to neutrality in this feud between two people I respect, but at this point it's no longer possible.

(I'll note that my own sense, looking in from the outside, is that something like a full year of friendly-interactions-with-Nate passed between the conversations Alex represents as having been so awful, and the start of Alex's public vendetta, which was more closely coincident with some romantic drama.  If I had lower epistemic standards, I might find it easy to write a sentence like "Therefore, I conclude that Alex's true grievance is about a girl, and he is only pretending that it's about their AI conversations because that's a more-likely-to-garner-sympathy pretext."  I actually don't conclude that, because concluding that would be irresponsible and insufficiently justified; it's merely my foremost hypothesis among several.)

A small handful of threads in response to the above:

Stuck in the monkey frame

I was once idly complaining about being baffled by people's reactions to some of my own posts and comments, and my spouse replied: "maybe it's because people think you're trying to upset people, when in fact you're doing something-other-than-not-trying-to-upset-people, and few people are capable of imagining motives other than 'try to cause other people to feel a certain way.'"

...which sure does feel like it fits the above.  "If someone who was just a monkey-obsessed monkey did this, it would have been in order to have such-and-such impact on the other monkeys; therefore that's definitely what happened."

In fact (as the above post conspicuously elides), Nate left those comment threads up for a while, and explicitly flagged his intent to delete them, and gave people a window to change his mind or take their screenshots or whatever, which is not at all consistent with the hypothesis "trying to hide stuff that makes him look bad."

Nate's post was making a general point—don't preemptively hamstring your message because you're worried they won't accept your true belief.  He presented evidence of this strategy working surprisingly well in the form of the book that he and Eliezer have written, which is intentionally not-cringing.

Saying "ah, but when you talk to me in person I find it unpleasant, and so did these five other people" is, as Nate correctly characterized, barely topical.  "Underdeployed Strategy X has powerful upsides; here's evidence of those upsides in a concrete case" is not meaningfully undercut by "your particular version of a thing that might not even be intended as a central example of Strategy X has sometimes had negative side effects."

In other words, on my understanding, Nate didn't delete the comment thread "because it made him look bad"; he deleted it because it wasn't the conversation he was there to have, and as more and more depressing monkeydrama comments piled up, it became a meaningful distraction from that conversation.

(LessWrong does this all the time, which is a huge part of why I find it counterproductive to try to have thoughts, here; the crowd does not have wisdom or virtue and upvotes frequently fail to track the product of [true] and [useful].)

 

Depressingly in line with expectation

I myself have long had a policy of blocking people who display a sufficiently high product of [overconfident] and [uncharitable].  Like, the sort of person who immediately concludes that X is downstream of some specific shittiness, and considers no other hypotheses, and evinces no interest in evidence or argument (despite nominal protestations to the contrary).

Once in a while, a bunch of the people I've blocked will all get together to talk shit about me, and sometimes people will send me screenshots, and guess what sorts of things they have to say?

Separate entirely from questions of cause or blame, (my understanding of) Nate's experience of Alex has been "this guy will come at me, uncharitably, without justification, and will disingenuously misrepresent me, and will twist my words, and will try to cast my actions in the most negative possible light, and will tenaciously derail conversations away from their subject matter and towards useless drama."  (My shoulder model of) Nate, not wanting that, blocked Alex, and lo—this post appears, conspicuously failing to falsify the model.

(Parentheticals because I do not in fact have firsthand knowledge of Nate's thinking here.)

I am sad about it, but by this point I am not surprised.  When you block someone for being vindictive and petty, they predictably frame that self-protective action in vindictive, petty ways.  Some people are caught in a dark world.

 

We used to care about base rates

By my read, Nate speaks to several hundred people a year about AI, and has had ongoing, in-depth relationships (of the tier he had with Alex) with at least twenty people and possibly as many as a hundred.

Lizardman's constant is 4%.  I'll totally grant that the rate of people being grumpy about their conversational interactions with Nate exceeds lizardman; I wouldn't be surprised if it climbs as high as (gasp) 15%.

But idk, "some people don't like this guy's conversational style" is not news.  Even "some people don't like this guy's conversational style enough to be turned away from the entire cause" is not news, if you put it in context along with "btw it's the same conversational style that has drawn literally thousands of people toward the cause, and meaningfully accelerated dozens-if-not-hundreds, and is currently getting kudos from people like Schneier, Bernanke, and Stephen Fry."

I have been directly, personally involved in somewhere between ten and a hundred hours of conversation and debrief and model-sharing and practice in which Nate was taking seriously the claim that he could do better, and making real actual progress in closing down some of his failure modes, and—

I dunno.  I myself had a reputation for getting into too many fights on Facebook, and finally I was like "okay, fine, fuck it," and I (measurably, objectively) cut back my Facebook fights by (literally) 95%, and kept it up for multiple years.

Do you think people rewarded me, reputationally, with a 95% improvement in their models of me?  No, because people aren't good at stuff like that.  

The concerns about Nate's conversational style, and the impacts of the way he comports himself, aren't nonsense.  Some people in fact manage to never bruise another person, conversationally, the way Nate has bruised more than one person.

But they're objectively overblown, and they're objectively overblown in exactly the way you'd predict if people were more interested in slurping up interpersonal drama than in a) caring about truth, or b) getting shit done.

If you in fact care about tinkering with the Nate-machine, either on behalf of making it more effective at trying to save the world, or just on behalf of niceness and cooperation and people having a good time, I think you'll find that Nate is 90th-percentile or above willing-and-able to accept critical feedback.

But constructive critical feedback doesn't come from seeds like this.  Strong downvote, with substantial disappointment.

Just noting that I have deleted a comment whose entire content was "I dropped this into an AI and it gave me the following summary."

The text of this essay is public, and the public will do with it what they will; I'm aware of (and somewhere between "resigned to" and "content about") the fact that a certain kind of reader is impatient, and instead of choosing between "read it, and get the value" and "don't read it, and preserve my time/attention for other things" tries to shoot for the fabricated third option of "don't spend the time but somehow get the value anyway" via things like AI summary.  It's fine for individuals to choose to make that mistake.

However, I'm not willing to let the ... tick? ... of an AI summary (that does in fact fail to convey the thing while giving the false impression of conveying the thing) just live parasitically right here, attached to the body of the essay.  It's misleading in the sense that the-thing-that-happens-to-you-when-you-spend-time-in-a-gestalt can't in fact be captured and conveyed by the skeleton outline.

(There are some places where all you need is the skeleton outline; I'm not anti-distillation or anti-summary in a general sense.  And again, individuals are free to consume AI summaries both when it's a good idea and when it's not; I'm not a cop.  I'm just not going to signal-boost those myself, nor allow them to piggyback.)

Tee hee ... allow me to recount the story of the one other person who lives in the same mental bucket in my head as Mitchell (lightly edited from a FB post from late 2021):

When I say things like "other people genuinely feel like a different species to me," it's because it would literally never occur to me to:

- Agree to meet with Person X, who had a grievance with me

- Tell Person X what things I was upset about, because Person X started off the meeting by saying "I think my job here is to listen, first."

- Listen to, accept, and thank Person X, explicitly, for their subsequent apology, which they offered without asking for any sort of symmetrical concession from me

- Maybe even cry a little, because things have been hard, and accept their clumsy attempts to offer a little comfort and empathy

- Years later, after no other substantive interactions of any kind, join in a dogpile against Person X behind their back where I besmirched their apology and insinuated that it was insufficient and rooted in an invalid motivation

- Outright lie in that dogpile, and claim that Person X had insisted that I jump through hoops that they never asked me to jump through

- Give no sign of any of this ongoing resentment in private communication initiated by Person X on that very same day

Like, there's a whole bunch of people out there for whom this is just ... business as usual, nothing to see here.

I think there are indeed enmities that call for that kind of blatantly adversarial two-facedness, but I think they take WAY more in the way of inciting incident than anything I've ever done, had done to me, or seen happen around me with my own two eyes.

Alternate title: This Is The Sort Of Thing That Makes Coordination Hard

...all of which is to say, re: "you can still provide people with data that loudly and inarguably contradicts the dark hypothesis" ... I do want to emphasize just how hard it is to create that data.  The test can't actually be run, but I would bet several hundred dollars to someone's one dollar that a panel of a dozen neutral observers watching the whole interaction described above from start to finish would have agreed that it was a clearly unpressured, sincere, caring, and genuine attempt to rebuild a bridge, but this did not stop the other party (a well-respected member of the social group that calls itself the rationalist community, who, e.g., gave multiple talks at LessOnline this past weekend) from being ... well, shitty.  Really, really, really shitty.

(This is a misread of the seventh guideline.  The seventh guideline doesn't say that you shouldn't hypothesize about what other people believe, it says that you should flag those hypotheses so that they can't possibly be mistaken for assertions of fact.  That's why the above says "my understanding" and "unless I miss him" rather than just saying "Zack doesn't think so either."  I'd be interested in a statement of what Zack-guideline the above "here's what I think he believes?" falls afoul of.)

Yeah I'm going to go back in and add links, partly due to this comment thread and partly at Zack's (reasonable, correct!) request; I should've done that in the first place and lose points for not doing so.  Apologies, Zack.

I think this is a restatement of the thesis (or at least, I intended the "some people are actually surrounded by """worse""" people" to be part of the claims of the above).

See Zack's engagement with Basics of Rationalist Discourse, and multiple subsequent essays.

As an aside, "wow, I support this way less than I otherwise would have, because your (hypothesized) straightforward diagnosis of what was going on in a large conflict over norms seems to me to be kind of petty" is contra both my norms and my understanding of Zack's preferred norms; unless I miss him entirely neither one of us wants LessWrong to be the kind of place where that sort of factor weighs very heavily in people's analysis.

(I already lost the battle, though; the fact that socially-motivated moves like the above rapidly become highly upvoted is a big chunk of why I gave up on trying to be on LessWrong generally, and why all my content goes elsewhere now.)

(DunCon was more in this direction than LessOnline, downstream of me feeling similarly, and DunConII will be substantially further, and also it's not like "wait until then" is the thing I'm saying, but like.  Hi.)

There's now a site and schedule in addition to the flyer above.
