By coincidence, I just finished up my summary of A Social History of Truth for LW. One of its core claims is that the "social graces" of English gentility were a fundamental component of the Royal Society and the beginnings of empirical science. Some key ingredients:
The claim is that the originators of the Royal Society were, among other things, concerned with keeping the conversation going. If experiments over here conflicted with observations over there, rather than trying to immediately settle which was correct, they wanted to relax and observe; maybe there's a difference betw...
The advice this post points to is probably useful for some people, but I think LessWrongers are the last people who need to be told to be less socially graceful in favor of more epistemic virtue. So much basic kindness is already lacking in the way that many rationalists interact, and it's often deeply painful to be around.
Also, I just don't really buy that there's a necessary, direct tradeoff between epistemic virtue and social grace. I am quite blunt, honest, and (I believe) epistemically virtuous, but I still generally interact in a way that endears me to people and makes them feel listened to and not attacked. (If you know me, feel free to comment/agree/disagree on this statement.) I'm not saying that all of my interactions are 100% successful in this regard, but I think I come across as basically kind and socially graceful without sacrificing honesty or epistemics.
I think this post is pointing at an important consideration, but I want to flag that it doesn't acknowledge or address my own primary cruxes, which focus on "what social patterns generate, in humans, the most intellectual progress over time." This feels related to Vaniver's comment.
One sub-crux is "people don't get sick of you and stop talking to you" (or, people get sick of a given discussion area being drama-prone)
Another sub-crux is "phrasing things in a triggery way makes people feel less safe (and then less willing to open up and share vulnerable information), and also makes people more fight-minded and think less rationally (i.e. less able to process information correctly)."
My overall claim is that thick skin, social courage (and/or obliviousness), and tact are all epistemic virtues.
I see you arguing for thick skin and social courage/obliviousness, and I agree, but your arguments prove too much: they don't engage with the actual social question of how to build a truthseeking institution, and they don't do much to explore where tact is actually important.
To be clear: I think it's an important virtue to cultivate thick skin, and the ability to hear unpleasant feedba...
reasons for not having to learn tact
This formulation presupposes that Zack doesn’t know how to phrase things “tactfully”. Is that the case? Or, is it instead the case that he knows how, but doesn’t think that it’s a good idea, or doesn’t think it’s worth the effort, or some other such thing?
Well, it wouldn't be tactful to suggest that I know how to be tactful and am deliberately choosing not to do so.
It's similar (I definitely felt it was a good faith attempt and captured at least some of it).
But I think the type-signature of what I meant was more like "a physiological response" than like "a belief about what will happen". I do think people are more likely to have that physiological response if they feel their interests are threatened, but there's more to it than that.
Here are a few examples worth examining:
The world of The Invention of Lying is simpler, clearer, easier to navigate than our world.
I don’t think this is true.[1] Now, you say, by way of expansion:
There, you don’t have to worry whether people don’t like you and are planning to harm your interests. They’ll tell you.
And that’s true. But does this (and all the other ways in which “radical honesty” manifests) actually translate into “simpler, clearer, easier to navigate”?
It seems to me that one of the things that makes our society fairly simple to navigate most of the time is that you can act as if everyone around you doesn’t care about you one way or the other, and will behave toward you in the ways prescribed by their professional and other formal obligations, and otherwise will neither help nor hinder you. Of course there are many important exceptions, but this is the default state. Its great virtue is that it vastly reduces the amount of “social processing” that we have to do as we go about our daily lives, freeing up our cognitive resources for other things—and enabling our modern technological civilization to exist.
Of course, this default state is accomplished partly by actually having most people mostly not care...
I think a society without lying would have other means of maintaining the social interface layer. For instance, when queried about how they feel about you, people might say things like "I quite dislike you, but I don't have any plans to act on it, so don't worry about it". In our world this would be a worrying thing to hear, but in the hypothetical, you could just go on with your day without thinking about it further.
Note that the Feynman anecdote contrasts Feynman's bluntness against everyone else's "too scared to speak up". There's no one in the story who says "I don't think that will work" instead of "that won't work", or "that seems like a bad idea" instead of "that's a damn fool idea". You assert afterwards that such a person would have been distracted from the thing Bohr wanted, but the anecdote doesn't particularly support or discredit that idea.
Disagree. Social graces are not only about polite lies; they are also social decision procedures for maintaining game-theoretic equilibria that sustain cooperation-favoring payoff structures.
I've observed the thesis posited here before IRL and it appeared to be motivated reasoning about the person's underlying proclivity towards disagreeableness. I can sympathize as I used to test in the 98th percentile on disagreeableness, but realized this was a bad strategy and ameliorated it somewhat.
A slight variation on this, one that's less opinionated about whether the payoff structures are actually "better" (which I think varies; sometimes the equilibrium is bad and it's good to disrupt it), is that at the very least there is some kind of equilibrium, and being radically honest or blunt doesn't just mean "same situation but with more honesty"; it means "pretty different situation in the first place."
Like, I think "the Invention of Lying" example is notably an incoherent world that doesn't make any goddamn sense (and it feels sort of important that the OP doesn't mention this). In the world where everyone was radically honest, you wouldn't end up with "current dating equilibria but people are rude-by-current standards", you'd end up in some entirely different dating equilibria.
This seems to assume that social graces represent cooperative social strategies, rather than adversarial social strategies. I don't think this is always the case.
Consider a couple discussing where to go to dinner. Both keep saying 'oh, I'm fine to go anywhere, where do you want to go?' This definitely sounds very polite! Much more socially-graceful than 'I want to go to this place! We leave at 6!'
Yet I'd assert that most of the time these people are playing social games adversarially against one another.
If you name a place and I agree to go there (especially if I do so in just the right tone of pseudo-suppressed reluctance), it feels like you owe me one.
If you name a place and then something goes wrong - the food is bad, the service is slow, there is a long wait - it feels like I can blame you for that.
What looks like politeness is better thought of as these people fighting one another in deniable and destructive ways for social standing. Opting out of that seems like a good thing: if the Invention Of Lying people say 'I would like to go to this place, but not enough to pay large social costs to do so,' that seems more honest and more cooperative.
I am skeptical of this account, because I’m pretty high on disagreeableness, but have never particularly felt compelled to practice “radical honesty” in social situations (like dating or what have you).
It seems to me (as I describe in my top-level comment thread) that “not being radically honest, and instead behaving more or less as socially prescribed” has its quite sensible and useful role, but also that trying to enforce “social graces” in situations where you’re trying to accomplish some practical task is foolish and detrimental to effectiveness. I don’t see that there’s any contradiction here; and it seems to me that something other than “disagreeableness” is the culprit behind any errors in applying these generally sensible principles.
I think this misses the extent to which a lot of “social grace” doesn't actually decrease the amount of information conveyed; it's purely aesthetic — it's about finding comparatively more pleasant ways to get the point across. You say — well, you say “I think she's a little out of your league” instead of saying “you're ugly”. But you expect the ugly man to recognise the script you're using, and grok that you're telling him he's ugly! The same actual, underlying information is conveyed!
The cliché with masters of etiquette is that they can fight subtle duels of implied insults and deferences, all without a clueless shmoe who wandered into the parlour even realising. The kind of politeness that actually impedes transmission of information is a misfire; a blunder. (Though in some cases it's the person who doesn't get it who would be considered “to blame”.)
Obviously it's not always like this. And rationalists might still say “why are we spending all this brainpower encrypting our conversations just so that the other guy can decrypt them again? it's unnecessary at best”. But I don't grant your premise that social grace is fundamentally about actual obfuscation rather than pretend-obfuscation.
What is the function of pretend-obfuscation, though? I don't think that the brainpower expenditure of encrypting conversations so that other people can decrypt them again is unnecessary at best; I think it's typically serving the specific function of using the same message to communicate to some audiences but not others, like an ambiguous bribe offer that corrupt officeholders know how to interpret, but third parties can't blow the whistle on.
In general, when you find yourself defending against an accusation of deception by saying, "But nobody was really fooled", what that amounts to is the claim that anyone who was fooled, isn't "somebody".
(All this would be unnecessary if everyone wanted everyone else to have maximally accurate beliefs, but that's not what social animals are designed to do.)
I basically expect this style of analysis to apply to "more pleasant ways to get the point across", but in a complicated way that doesn't respect our traditional notions of agency and personhood. If there's some part of my brain that takes offense at hearing overtly negative-valence things about me, "gentle" negative feedback that avoids triggering that part could be said to be "deceiving" it ...
I don't personally think I'd benefit from strongly selecting for conversational partners who are at low risk of being cognitively hijacked, and I think nearly all people who do believe that they'd benefit from this [...] are mistaken.
I find this claim surprising and would be very interested to hear more about why you think this!!
I think the case for benefit is straightforward: if your interlocutors are selected for low risk of getting triggered, there's a wider space of ideas you can explore without worrying about offending them. Do you disagree with that case for benefit? If so, why? If not, presumably you think the benefit is outweighed by other costs—but what are those costs, specifically? (Are non-hijackable people dumber—or more realistically, do they have systematic biases that can only be corrected by hijackable people? What might those biases be, specifically?)
there does not, as far as I'm aware, exist any intellectually generative community which operates on the norms you're advocating for
How large does something need to be in order to be a "community"? Anecdotally, my relationships with my "fighty"/disagreeable friends seem more intellectually generative than the t...
But equally-intelligent equally-polite people are still expected to dance the dance even if they're alone
I think this could be considered to be a sort of "residue" of the sort of deception Zack is talking about. If you imagine agents with different levels of social savviness, the savviest ones might adopt a deceptively polite phrasing, until the less savvy ones catch on, and so on down the line until everybody can interpret the signal correctly. But now the signaling equilibrium has shifted, so all communication uses the polite phrasing even though no one is fooled. I think this is probably the #2 source of deceptive politeness, with #1 being management of people's immediate emotional reactions, and #3 ongoing deceptiveness.
While the framing of treating lack of social grace as a virtue captures something true, it's too incomplete and imo can't support its strong conclusion. The way I would put it is that you have correctly observed that, whatever the benefits of social grace are, it comes at a cost, and sometimes this cost is not worth paying. So in a discussion, if you decline to pay the cost of social grace, you can afford to buy other virtues instead.[1]
For example, it is socially graceful not to tell the Emperor Who Wears No Clothes that he wears no clothes. Whereas someone who lacks social grace is more likely to tell the emperor the truth.
But first of all, I disagree with the frame that lack of social grace is itself a virtue. In the case of the emperor, for example, the virtues are rather legibility and non-deception, traded off against whichever virtues the socially graceful response would've gotten.
And secondly, often the virtues you can buy with social grace are worth far more than whatever you could gain by declining to be socially graceful. For example, when discussing politics with someone of an opposing ideology, you could decline to be socially graceful and tell your interlocutor to the...
By all means, strategically violate social customs. But if you irritate people by doing it, you may be advancing your own epistemics by making them talk to you, but you're actually hurting their epistemics by making them irritated with whatever belief you're trying to pitch. Lack of social grace is very much not an epistemic virtue.
This post captures a fairly common belief in the rationalist community. It's important to understand why it's wrong.
Emotions play a strong role in human reasoning. I finally wrote up at least a little sketch of why that happens. The technical term is motivated reasoning.
Motivated reasoning/confirmation bias as the most important cognitive bias
I kinda like this post, and I think it's pointing at something worth keeping in mind. But I don't think the thesis is very clear or very well argued, and I currently have it at -1 in the 2023 review.
Some concrete things.
Strong downvote. The post looks lousy. The relation between social grace, honesty, and truth-seeking is complicated and multidimensional. You didn't engage with this complexity. You didn't properly argue your point. You made a statement, then vaguely gestured in the direction of two examples.
The first example is not only fictional, but isn't even really relevant. The world without lies is in a way nicer to live in because people reveal more information to you. It doesn't make you a superior truth-seeker. Now, would I prefer to live in such a world? Sure, me and every other autistic person. But this is an axiological issue, not an epistemological one.
The second example is more on point. It shows that it is epistemically useful to be able to talk to someone ignoring status concerns, especially when people need it. This is the point I completely agree with. However, it doesn't generalize to "it's always epistemically better to lack any social grace", because 1) the same tool isn't the best for every job, and 2) social grace isn't just about status concerns.
There is a potentially interesting conversation with lots of nuance to be had here, which a superior version of this post woul...
This is kind of an aside, but does this Feynman story strike anyone else as off? It's kind of too perfect. Not even subtly. It strikes me as "significantly exaggerated", at the very least.
My thinking on this point is that the only proper way to respect a great work is to treat it with the same fire that went into making it. Grovelling at Niels Bohr's feet is not as respectful as contending with his ideas and taking them seriously — and expending great mental effort on an intense, focused interlocution is an act of profound respect.
There's a difference between that and discourtesy like what is displayed in the movie scene. Extending courtesy to a kind and virtuous person is a simple matter of justice. Comparing his face to a frog's is indelicate, whereas admitting plainly that you find him unattractive is just as honest without being as hurtful. If he wants a more specific inventory of his physical flaws, he can ask for elaboration.
Someone who felt uncomfortable with Feynman's bluntness and wanted to believe that there's no conflict between rationality and social graces might argue that Feynman's "simple proposition" is actually wrong insofar as it fails to appreciate the map–territory distinction: in saying, "No, it's not going to work", was not Feynman implicitly asserting that just because he couldn't see a way to make it work, it simply couldn't? ...
While not entirely without merit (it's true that the map is not the territory; it's true that authority is not without evidential weight), attending overmuch to such nuances distracts from worrying about the physics
Here's something I wrote earlier today: "I thought transactions wouldn't cause "wait for lock" unless requested explicitly, and I don't think we request it explicitly. But maybe I'm wrong there?"
I don't fully remember my epistemic state at the time, but I think I was pretty confident on both counts. But as it happens, I was wrong on the first count. This is the crucial piece of information we needed to understand what we were investigating.
I can imagine that I might instead have written "transactions won't cause "wait for lock" unless requested expl...
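As an aside, the underlying database claim here is checkable: ordinary DML inside a transaction takes write locks implicitly, with no explicit lock request. A minimal sketch (using SQLite purely for illustration; the system the commenter was actually debugging isn't specified, and the file path here is hypothetical):

```python
import os
import sqlite3
import tempfile

# Two connections to the same on-disk database (a throwaway demo file).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, timeout=0.1)
other = sqlite3.connect(path, timeout=0.1)

writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

# No explicit lock request anywhere: this INSERT implicitly opens a
# transaction and takes SQLite's write lock until commit/rollback.
writer.execute("INSERT INTO t VALUES (1)")

locked = False
try:
    # The second connection's INSERT waits for the lock, then gives up
    # after the 0.1s timeout.
    other.execute("INSERT INTO t VALUES (2)")
except sqlite3.OperationalError:  # "database is locked"
    locked = True

print(locked)
writer.rollback()
```

So on this setup, the second writer does "wait for lock" even though neither transaction requested a lock explicitly, which is exactly the kind of surprise the hedged phrasing left room for.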
The thesis of the post is that people who are trying to maximize the accuracy of shared maps are going to end up being socially ungraceful sometimes, because sometimes social grace calls for obfuscating shared maps.
Criticism of unjustified confidence for being unjustified increases the accuracy of shared maps. Criticism of unjustified confidence for reasons of social status regulation is predictably not going to be limited to cases where the confidence is unjustified, even if it happens to be unjustified in a particular case.
Accuracy of shared maps is quantitative. A culture that's optimized for social grace isn't going to make people wrong about everything, and could make people less wrong about many things relative to many less graceful alternative cultures. (At minimum, if you're not allowed to be confident, you can't be overconfident; if you're not allowed to talk about what's inside someone's head, you can't be wrong about what's inside someone's head.)
Criticism of unjustified confidence for being unjustified increases the accuracy of shared maps. Criticism of unjustified confidence for reasons of social status regulation is predictably not going to be limited to cases where the confidence is unjustified, even if it happens to be unjustified in a particular case.
This sounds like it's contrasting "criticism for being unjustified" against "criticism for social status regulation". But those aren't the same use of the word "for", much like it would be weird to contrast "locking someone up for murder" against "locking someone up for deterrence". (Though "for deterrence" might be a different "for" again, I'm not sure.)
To unpack, when I said
I think I’m fine with that kind of tone policing being used for social status regulation when the confidence is unjustified.
I didn't intend to support someone being like "I want to do some social status regulation and I'm going to do it by tone policing some unjustified confidence". I meant to support "this is unjustified confidence, I want less of this and to that end I'm going to do some social status regulation through the mechanism of tone policing". I can't tell if you're yay-that or boo-...
"Be like Feynman" is great advice for 0.01% of the population, and horrible for 99% (and irrelevant to the remainder). In order to be valued for bluntness, one must be correct insanely often. Otherwise, you have to share evidence rather than conclusions, and couching it in more pleasant terms makes it much more tolerable (again, for most but not all).
I do want to react to:
There, you don't have to worry whether people don't like you and are planning to harm your interests
Wait, that's if THEY CANNOT lie, not if you choose not to. Unilatera...
To a decision-theoretic agent, the value of information is always nonnegative
This seems false. If I selectively give you information in an adversarial manner, and you don't know that I'm picking the information to harm you, I think it's very clear that the value of the information you gain can be strongly negative.
A lot of "social grace" is strategic deception. The out-of-his-league woman defers telling the guy he's getting nowhere as long as possible, just in case it turns out he's heir to a giant fortune or something.
And of course people suck up to big shots (the Feynman story) because they hope to associate with them and have some of their fame and reputation rub off on themselves.
This is not irrational behavior, given human goals.
The world of The Invention of Lying is simpler, clearer, easier to navigate than our world.
If you only remove lying, you end up with a world that contains a lot more of the negative consequences socially sanctioned lying is intended to avoid -- hurt feelings and so on.
To a decision-theoretic agent, the value of information is always nonnegative.
A boundary around one's mind enforced by a norm of not mind-reading people seems useful. When working on a problem, thoughts on that problem are appropriate to reveal, and counterproductive to drown in social graces, but that says little about value of communicating everything that's feasible to communicate.
For humans from our world, these questions do have answers—complicated answers having to do with things like map–territory confusions that make receiving bad news seem like a bad event (rather than the good event of learning information about how things were already bad, whether or not you knew it), and how it's advantageous for others to have positive-valence false beliefs about oneself.
If you have bad characteristics (e.g. you steal from your acquaintances), isn't it in your best interest to make sure this doesn't become common knowledge? You don't...
Someone once told me that they thought I acted like refusing to employ the bare minimum of social grace was a virtue, and that this was bad. (I'm paraphrasing; they actually used a different word that starts with b.)
I definitely don't want to say that lack of social grace is unambiguously a virtue. Humans are social animals, so the set of human virtues is almost certainly going to involve doing social things gracefully!
Nevertheless, I will bite the bullet on a weaker claim. Politeness is, to a large extent, about concealing or obfuscating information that someone would prefer not to be revealed—that's why we recognize the difference between one's honest opinion, and what one says when one is "just being polite." Idealized honest Bayesian reasoners would not have social graces—and therefore, humans trying to imitate idealized honest Bayesian reasoners will tend to bump up against (or smash right through) the bare minimum of social grace. In this sense, we might say that the lack of social grace is an "epistemic" virtue—even if it's probably not great for normal humans trying to live normal human lives.
Let me illustrate what I mean with one fictional and one real-life example.
The beginning of the film The Invention of Lying (before the eponymous invention of lying) depicts an alternate world in which everyone is radically honest—not just in the narrow sense of not lying, but more broadly saying exactly what's on their mind, without thought of concealment.
In one scene, our everyman protagonist is on a date at a restaurant with an attractive woman.
"I'm very embarrassed I work here," says the waiter. "And you're very pretty," he tells the woman. "That only makes this worse."
"Your sister?" the waiter then asks our protagonist.
"No," says our everyman.
"Daughter?"
"No."
"She's way out of your league."
"... thank you."
The woman's cell phone rings. She explains that it's her mother, probably calling to check on the date.
"Hello?" she answers the phone—still at the table, with our protagonist hearing every word. "Yes, I'm with him right now. ... No, not very attractive. ... No, doesn't make much money. It's alright, though, seems nice, kind of funny. ... A bit fat. ... Has a funny little—snub nose, kind of like a frog in the—facial ... No, I won't be sleeping with him tonight. ... No, probably not even a kiss. ... Okay, you too, 'bye."
The scene is funny because of how it violates the expected social conventions of our own world. In our world, politeness demands that you not say negative-valence things about someone in front of them, because people don't like hearing negative-valence things about themselves. Someone in our world who behaved like the woman in this scene—calling someone ugly and poor and fat right in front of them—could only be acting out of deliberate cruelty.
But the people in the movie aren't like us. Having taken the call, why should she speak any differently just because the man she was talking about could hear? Why would he object? To a decision-theoretic agent, the value of information is always nonnegative. Given that his date thought he was unattractive, how could it be worse for him to know rather than not-know?
For humans from our world, these questions do have answers—complicated answers having to do with things like map–territory confusions that make receiving bad news seem like a bad event (rather than the good event of learning information about how things were already bad, whether or not you knew it), and how it's advantageous for others to have positive-valence false beliefs about oneself.
The world of The Invention of Lying is simpler, clearer, easier to navigate than our world. There, you don't have to worry whether people don't like you and are planning to harm your interests. They'll tell you.
In "Los Alamos From Below", physicist Richard Feynman's account of his work on the Manhattan Project to build the first atomic bomb, Feynman recalls being sought out by a much more senior physicist specifically for his lack of social graces:
Someone who felt uncomfortable with Feynman's bluntness and wanted to believe that there's no conflict between rationality and social graces might argue that Feynman's "simple proposition" is actually wrong insofar as it fails to appreciate the map–territory distinction: in saying, "No, it's not going to work", was not Feynman implicitly asserting that just because he couldn't see a way to make it work, it simply couldn't? And in general, shouldn't you know who you're talking to? Wasn't Bohr, the Nobel prize winner, more likely to be right than Feynman, the fresh young Ph.D. (at the time)?
While not entirely without merit (it's true that the map is not the territory; it's true that authority is not without evidential weight), attending overmuch to such nuances distracts from worrying about the physics, which is what Bohr wanted out of Feynman—and, incidentally, what I want out of my readers. I would not expect readers to confirm interpretations with me before publishing a critique. If the post looks lousy, say it looks lousy. If it looks good, say it looks good. Simple proposition.