Nothing is quite so annoying as seeing another person do the thing you wish you yourself would not do.
It seems to me that I'm explaining something reasonable, and they're not understanding it because of some obvious bias, which should be apparent to them.
But, in order for them to notice that, from inside the situation, they'd have to run the check of:
TRIGGER: Notice that the other person isn't convinced by my argument
ACTION: Hmm, check if I might be mistaken in some way. If I were deeply confused about this, how would I know?
The fact that the other person isn’t convinced by your argument is only evidence that you’re mistaken to the extent that you’d expect this particular person to be convinced by good arguments. For your friends and people who have earned your respect, this action is a good response, but in the more general case it might be hard to get yourself to apply it faithfully, because, really, when the flat earther isn’t convinced, are you honestly going to consider whether you’re actually the one who’s wrong?
The more general approach is to refuse to engage in false humility/false respect and make yourself choose between being genuinely provocative and inviting (potentially accurate) accusations of arrogance, or else finding some real humility. For the trigger you give, I’d suggest the tentative alternate action of “stick my neck out and offer for it to be chopped off”, and only if that action makes you feel a bit uneasy do you start hedging and wondering “maybe I’m overstepping”.
For example, maybe you’re arguing politics and they scoffed at your assertion that policy X is better than policy Y or whatever, and it strikes you as arrogant for them to just dismiss out of hand ideas which you’ve thought very hard about. You could wonder whether you’re the arrogant one, and whether you really should have thought harder before presenting such scoffable ideas and asked for their expertise before forming an opinion — and in some cases that’ll be the right play. In other cases though, you can be pretty sure that you’re not the arrogant one, and so you can say “you think I’m being arrogant by thinking I can trust that my thinking here is at least worth addressing?” and give them the chance to say “Yes”.
You can ask this question because “I’m not sure if I am being arrogant here, and I want to make sure not to overstep”, but you can also ask because it’s so obvious what the answer is that when you give them an opening and invite their real belief, they’ll have little option but to realize “You’re right, that’s arrogant of me. Sorry”. It can’t be a statement disguised as a question, and you really do have to listen to their answer and take it in, whatever it is, but you don’t have to pretend to be uncertain of what the answer is or what they will believe it to be on reflection. “Hey, so I’m assuming you’re just acting out of habit, and if so that’s fine, but you don’t really think it’s arrogant of me to have an opinion here, do you?” or “Can you honestly tell me that I’m being arrogant here?”. It doesn’t really matter whether you ask because “I want to point out to people when they aren’t behaving consistently with their beliefs”, or because “I want to find out whether they really believe that this behavior is appropriate”, or because “I want to find out whether I’m actually the one in the wrong here”. The important point is conspicuously removing any option you have for weaseling out of noticing when you’re wrong, so that even when you are confident that it’s the other guy who’s in the wrong, should your beliefs make false predictions it will come up and be absolutely unmissable.
Ah. I think this didn't occur to me because I have a different set of habits for "Mostly don't end up in conversations with flat earthers in the first place." This advice was generated in the process of interacting with coworkers, roommates, and friends that are pre-filtered for being people I respect.
There certainly are people I sometimes bump into who aren't filtered for such, and with whom I sometimes have important disagreements. In those cases, the correct approach depends a bit on the situation.
I think I'd stick by this advice for discussions where the outcome actually matters (i.e. where you're not just talking with some internet rando for fun, but having a conversation with actual stakes, like when you're building a product together).
That said, I think I mostly agree with the particular scenarios you outline here, in particular this bit:
The important point is conspicuously removing any option you have for weaseling out of noticing when you’re wrong, so that even when you are confident that it’s the other guy who’s in the wrong, should your beliefs make false predictions it will come up and be absolutely unmissable.
Though this bit here...
The more general approach is to refuse to engage in false humility/false respect and make yourself choose between being genuinely provocative and inviting (potentially accurate) accusations of arrogance
...feels like it's approaching a somewhat different problem than the one I was thinking of when I wrote this post. (To be fair, I did write the post to be pretty general.)
Arrogance / modesty wasn't what I was worried about here. The axis that was most salient to me was more like guardedness/defensiveness. If they seem defensive, or are digging their heels in, my first impulse is usually to push harder to get them to admit their wrongness. But that usually makes things worse, not better.
My experience is that people will mirror whatever cognitive algorithms I'm visibly running – if I'm listening, they're more likely to listen. If I'm confidently asserting a view, they tend to confidently assert their own. Whether I'm being too modest / arrogant doesn't really matter much for this problem.
I used "flat earthers" as an exaggerated example to highlight the dynamics the way a caricature might highlight the shape of a chin, but the dynamics remain and can be important even and especially in relationships which you'd like to be close simply because there's more reason to get things closer to "right".
The reason I brought up "arrogance"/"humility" is that the failure modes you brought up, "not listening" and "having obvious bias without reflecting on it and getting rid of it", are failures of arrogance. A bit more humility makes you more likely to listen and to question whether your reasoning is sound. As you mention, though, there is another dimension to worry about, which is the axis you might label "emotional safety" or "security" (i.e. the thing that drives guarded/defensive behavior when it's not there in sufficient amounts).
When you get defensive behavior (perhaps in the form of "not listening" or whatever), cooperative and productive conversation requires that you back up and get the "emotional safety" requirements fulfilled before continuing on. Your proposed response assumes that the "safety" alarm is caused by an overreach on what I'd call the "respect" dimension. If you simply back down and consider that you might be the one in the wrong, this will often satisfy the "safety" requirement, because expecting more relative respect can be threatening. It can also be epistemically beneficial for you, if and only if it was a genuine overreach.
My point isn't "who cares about emotional safety, let them filter themselves out if they can't handle the truth [as I see it]", but rather that these are two separate dimensions, and while they are coupled, they really do need to be regulated independently for best results. Any time you try to control two dimensions with one lever, you end up with a 1D curve that you can't regulate at all, and which is therefore free to wander without correction.
While people do tend to mirror your cognitive algorithm so long as it is visible to them, it's not always immediately visible, so you can get into situations where you *have been* very careful to make sure that you're not the one making a mistake, and yet, because that care hasn't been perceived, you still get "not listening" and the like anyway. In these kinds of situations it's important to back up and make your algorithm visible, but that doesn't necessarily mean questioning yourself again. Often this means listening to them explain their view, and it ends up looking almost the same, but I think the distinctions are important because of the other possibilities they help to highlight.
The shared cognitive algorithm I'd rather end up with is one where I put my objections aside and listen when people have something they feel confident in, and where they'll do the same when I have something I'm confident in. Things run a lot more smoothly and efficiently when mutual confidence is allowed, rather than treated as something that has to be avoided at all costs, so it's nice to have a shared algorithm that can gracefully handle these kinds of things.
My point isn't "who cares about emotional safety, let them filter themselves out if they can't handle the truth [as I see it]", but rather that these are two separate dimensions, and while they are coupled, they really do need to be regulated independently for best results. Any time you try to control two dimensions with one lever, you end up with a 1D curve that you can't regulate at all, and which is therefore free to wander without correction.
Thanks, this is a neat point that gives me a conceptual handle for thinking about the overall problem.
Note that very often, your mistake is slightly different from the one you're about to accuse them of, in ways that can make it hard to match it to your complaint. In addition to "check if I'm doing that thing", you might expand to "check for my behaviors which might contribute to the thing".
The good news is that the virtuous cycle here also works: I've found that if one person is consistently unusually virtuous in their conversations and arguments, a little bubble of sanity spreads around that person to everyone in the vicinity over time.
I have a much longer guide on how to do this practically, from before I posted on LW, but the caveat is that it's quite long and isn't really written for a LW audience: https://grandunifiedcrazy.com/2019/09/14/success-over-victory-conflict-resolution/
a) I liked reading your guide: you managed to include many important LW-related concepts while still keeping a hands-on feeling. This makes it a nice reference for people who do not enjoy a more technical/analytical approach. Have you considered creating a link-post on LessWrong?
b) You write:
The good news is that the virtuous cycle here also works: I've found that if one person is consistently unusually virtuous in their conversations and arguments, a little bubble of sanity spreads around that person to everyone in the vicinity over time.
This seems like a more deliberate version of what Scott Alexander describes in Different Worlds? (The term used there is 'niceness fields'.)
I would be very interested in approaches to actively create 'bubbles of sanity' or 'niceness fields'.
The points 'aim for success, not victory' and 'assume good faith' from your guide seem important for this. A big part is probably clearly communicating that the other person's status is in no way being questioned and thus need not be defended. In my experience, this part of communication is usually not deliberate (or even conscious) and is hard to change. Of course, even small improvements can be valuable.
Yes, I didn’t have niceness fields intentionally in mind when I commented, but it is definitely the same idea.
Have you considered creating a link-post on LessWrong?
That was the first thing I did when I created an account here. It got no upvotes and did not get promoted to front page, so... maybe it was just too much to digest from somebody with no karma at the time?
(Speaking as a mod) Link posts from new users definitely (unfortunately) have a much higher burden than regular posts. This isn't so much a conscious choice as a result of how much effort they take to evaluate.
There are no full-time mods, just people who do moderation in addition to other LW duties. We don't have time to read each post thoroughly before frontpaging it. I typically skim a post to get a sense of its overall topic, and see whether the author has a track record of writing frontpage material. The default outcome is for things not to get frontpaged.
Link posts face the additional hurdle of "I have to click through to another site to read the post", which is already a trivial inconvenience. Moderators actually see a hover-over preview of the post when they look through their moderation queue, but the content of link posts doesn't appear in the hover-over because it's offsite.
Then, on top of that, link posts get less visibility anyway – the same reasons that make them more effortful for a moderator to evaluate make people less likely to click through and read them. (i.e. when people hover over the post, or see it in Recent Discussion, they don't get to automatically skim it; they have to evaluate an indirect summary.)
I generally recommend fully crossposting things rather than just link-posting them with a small summary; you'll typically get more engagement that way.
For the record, it might be worth turning that old post into a crosspost; I'd be up for frontpaging it this time around. It does look interesting.
If I edit the existing post, will it end up in oblivion anyway because it's old now? Or does the clock restart when it gets promoted to frontpage? I can delete/recreate if that would be more effective.
Another thought: because it's 6k words, it might be worth splitting it across a few posts and creating a sequence out of it. I don't see a way to do that in the editor, so it might require privileges? I'm also not sure whether that would be appropriate here or not.
edit: also (and this is getting quite far afield here) - I'd been blogging on rationality-adjacent topics for quite a while before I started posting here. I imagine a flood of cross-posts of previous work would be frowned upon, but the line also seems kind of blurry given that's exactly what this comment thread is already discussing.
There are tools to give old posts new frontpage life, which I'd be happy to use here (you can send me a PM about it when you're ready). But, if you want to go the sequence route instead:
We deliberately make it less obvious to new users how to create sequences (users with 1000+ karma see an obvious button in the user menu). If you go to the /library page, you'll find a Create Sequence button.
So if you want to go the sequence route, I'd just create new posts from scratch, one at a time, spaced out a couple days apart. (You'll get more engagement this way. I cry a little inside when I see users write magnum opuses that they create nicely formatted sequences for... and then post all at once, which is overwhelming, so people don't read them.)
Relatedly, I'd crosspost the old content at a rate of around one post per two days, and check to see which sort of content gets engagement/upvotes/comments.
That was the first thing I did when I created an account here.
Oops - I didn't notice the 'load more' option for the posts on your profile earlier; I've upvoted your post now.
I have not yet written any posts myself and only skimmed the detailed rules about karma some time ago, but I can easily imagine that the measures against spam sometimes cause good posts from new accounts to be overlooked.
Huh, so I'd never heard this saying before, and when I google it I mostly find people saying "well, you know how the saying goes, if you spot it, you got it!", and I keep being like "I get it but... but I didn't know how the saying goes!"
Frequently, I'll be having an argument with someone. And I'll think "Grr! They are doing Obnoxious Behavior X!" or "Arg, they aren't doing Obviously Good Behavior Y!".
Then I complain at them.
And... sometimes, they make the exact same complaint about me.
And then I think about it, and it turns out to be true.
Another portion of the time, they don't complain back at me, but the argument goes in circles and doesn't resolve, and we both feel frustrated for a while. And later, independently, I realize "Oh, I was also failing to do Good Thing X, or doing Bad Thing Y."
Often, "Good Thing X" and "Bad Thing Y" amount to some kind of "not listening", or "not doing enough interpretive labor." It seems to me that I'm explaining something reasonable, and they're not understanding it because of some obvious bias, which should be apparent to them.
But, in order for them to notice that, from inside the situation, they'd have to run the check of:
TRIGGER: Notice that the other person isn't convinced by my argument
ACTION: Hmm, check if I might be mistaken in some way. If I were deeply confused about this, how would I know?
And, typically, I haven't actually been making that same check for myself. Or, I've done it, but in a kinda superficial way.
Often, adversarial conversational moves beget more adversarial conversational moves. If someone is talking over me, I'm likely to respond by talking over them. If someone seems to be ignoring my arguments, I'm more likely to ignore their arguments. This means, by the time a complaint arises to my conscious thought, there's a decent chance there's been some kind of escalation cycle where I started doing the same thing, even if they started it. (And, maybe, I started it?)
This has led me to a general habit:
TRIGGER: Notice that I'm about to complain about someone's behavior
ACTION: Check if I'm doing that thing myself
I've mentioned this concept before in passing, but it seemed important enough to warrant its own top-level post.