Someone once told me that they thought I acted like refusing to employ the bare minimum of social grace was a virtue, and that this was bad. (I'm paraphrasing; they actually used a different word that starts with b.)

I definitely don't want to say that lack of social grace is unambiguously a virtue. Humans are social animals, so the set of human virtues is almost certainly going to involve doing social things gracefully!

Nevertheless, I will bite the bullet on a weaker claim. Politeness is, to a large extent, about concealing or obfuscating information that someone would prefer not to be revealed—that's why we recognize the difference between one's honest opinion, and what one says when one is "just being polite." Idealized honest Bayesian reasoners would not have social graces—and therefore, humans trying to imitate idealized honest Bayesian reasoners will tend to bump up against (or smash right through) the bare minimum of social grace. In this sense, we might say that the lack of social grace is an "epistemic" virtue—even if it's probably not great for normal humans trying to live normal human lives.

Let me illustrate what I mean with one fictional and one real-life example.


The beginning of the film The Invention of Lying (before the eponymous invention of lying) depicts an alternate world in which everyone is radically honest—not just in the narrow sense of not lying, but more broadly saying exactly what's on their mind, without thought of concealment.

In one scene, our everyman protagonist is on a date at a restaurant with an attractive woman.

"I'm very embarrassed I work here," says the waiter. "And you're very pretty," he tells the woman. "That only makes this worse."

"Your sister?" the waiter then asks our protagonist.

"No," says our everyman.

"Daughter?"

"No."

"She's way out of your league."

"... thank you."

The woman's cell phone rings. She explains that it's her mother, probably calling to check on the date.

"Hello?" she answers the phone—still at the table, with our protagonist hearing every word. "Yes, I'm with him right now. ... No, not very attractive. ... No, doesn't make much money. It's alright, though, seems nice, kind of funny. ... A bit fat. ... Has a funny little—snub nose, kind of like a frog in the—facial ... No, I won't be sleeping with him tonight. ... No, probably not even a kiss. ... Okay, you too, 'bye."

The scene is funny because of how it violates the expected social conventions of our own world. In our world, politeness demands that you not say negative-valence things about someone in front of them, because people don't like hearing negative-valence things about themselves. Someone in our world who behaved like the woman in this scene—calling someone ugly and poor and fat right in front of them—could only be acting out of deliberate cruelty.

But the people in the movie aren't like us. Having taken the call, why should she speak any differently just because the man she was talking about could hear? Why would he object? To a decision-theoretic agent, the value of information is always nonnegative. Given that his date thought he was unattractive, how could it be worse for him to know rather than not-know?
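
Formally, here is a minimal sketch of the theorem being invoked, assuming an idealized expected-utility maximizer who can observe a signal $X$ about the unknown state $\theta$ at no cost. Its achievable value without and with the observation is

\[
V_{\text{without}} = \max_a \mathbb{E}_\theta\!\left[U(a, \theta)\right],
\qquad
V_{\text{with}} = \mathbb{E}_X\!\left[\max_a \mathbb{E}_\theta\!\left[U(a, \theta) \mid X\right]\right].
\]

If $a^*$ denotes the action that is optimal without the signal, then for every realization of $X$ the inner maximum is at least the value of just playing $a^*$, so by the law of total expectation

\[
V_{\text{with}} \;\geq\; \mathbb{E}_X\!\left[\mathbb{E}_\theta\!\left[U(a^*, \theta) \mid X\right]\right] \;=\; \mathbb{E}_\theta\!\left[U(a^*, \theta)\right] \;=\; V_{\text{without}}.
\]

The value of information $V_{\text{with}} - V_{\text{without}}$ is therefore nonnegative: at worst, the agent ignores the signal and does exactly as well as before.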

For humans from our world, these questions do have answers—complicated answers having to do with things like map–territory confusions that make receiving bad news seem like a bad event (rather than the good event of learning information about how things were already bad, whether or not you knew it), and how it's advantageous for others to have positive-valence false beliefs about oneself.

The world of The Invention of Lying is simpler, clearer, easier to navigate than our world. There, you don't have to worry whether people don't like you and are planning to harm your interests. They'll tell you.


In "Los Alamos From Below", physicist Richard Feynman's account of his work on the Manhattan Project to build the first atomic bomb, Feynman recalls being sought out by a much more senior physicist specifically for his lack of social graces:

I also met Niels Bohr. His name was Nicholas Baker in those days, and he came to Los Alamos with Jim Baker, his son, whose name is really Aage Bohr. They came from Denmark, and they were very famous physicists, as you know. Even to the big shot guys, Bohr was a great god.

We were at a meeting once, the first time he came, and everybody wanted to see the great Bohr. So there were a lot of people there, and we were discussing the problems of the bomb. I was back in a corner somewhere. He came and went, and all I could see of him was from between people's heads.

In the morning of the day he's due to come next time, I get a telephone call.

"Hello—Feynman?"

"Yes."

"This is Jim Baker." It's his son. "My father and I would like to speak to you."

"Me? I'm Feynman, I'm just a—"

"That's right. Is eight o'clock OK?"

So, at eight o'clock in the morning, before anybody's awake, I go down to the place. We go into an office in the technical area and he says, "We have been thinking how we could make the bomb more efficient and we think of the following idea."

I say, "No, it's not going to work. It's not efficient ... Blah, blah, blah."

So he says, "How about so and so?"

I said, "That sounds a little bit better, but it's got this damn fool idea in it."

This went on for about two hours, going back and forth over lots of ideas, back and forth, arguing. [...]

"Well," [Niels Bohr] said finally, lighting his pipe, "I guess we can call in the big shots now." So then they called all the other guys and had a discussion with them.

Then the son told me what happened. The last time he was there, Bohr said to his son, "Remember the name of that little fellow in the back over there? He's the only guy who's not afraid of me, and will say when I've got a crazy idea. So the next time when we want to discuss ideas, we're not going to be able to do it with these guys who say everything is yes, yes, Dr. Bohr. Get that guy and we'll talk with him first."

I was always dumb in that way. I never knew who I was talking to. I was always worried about the physics. If the idea looked lousy, I said it looked lousy. If it looked good, I said it looked good. Simple proposition.

Someone who felt uncomfortable with Feynman's bluntness and wanted to believe that there's no conflict between rationality and social graces might argue that Feynman's "simple proposition" is actually wrong insofar as it fails to appreciate the map–territory distinction: in saying, "No, it's not going to work", was not Feynman implicitly asserting that just because he couldn't see a way to make it work, it simply couldn't? And in general, shouldn't you know who you're talking to? Wasn't Bohr, the Nobel Prize winner, more likely to be right than Feynman, the fresh young Ph.D. (at the time)?

While not entirely without merit (it's true that the map is not the territory; it's true that authority is not without evidential weight), attending overmuch to such nuances distracts from worrying about the physics, which is what Bohr wanted out of Feynman—and, incidentally, what I want out of my readers. I would not expect readers to confirm interpretations with me before publishing a critique. If the post looks lousy, say it looks lousy. If it looks good, say it looks good. Simple proposition.

Vaniver:

By coincidence, I just finished up my summary of A Social History of Truth for LW. One of its core claims is that the "social graces" of English gentility were a fundamental component of the Royal Society and the beginnings of empirical science. Some key ingredients:

  1. Honor culture that highly valued reputation and honesty, which viewed calling someone a liar as grounds for dueling, which led to cautious statements, careful disagreements, and hypothesizing on how everyone might be right
  2. Idleness culture that valued conversation as an art form / game where the right move is one that allows for a response (like a variant of tennis where the goal is to have the other party return the volley, rather than be unable to return it)
  3. A negative view of scholarly pedantic argumentative culture, which viewed reputation as a zero-sum game and was detached from worldly considerations.

The claim is that the originators of the Royal Society were, among other things, concerned with keeping the conversation going. If experiments over here conflicted with observations over there, rather than trying to immediately settle which was correct, they wanted to relax and observe; maybe there's a difference betw...

Raemon:

I think this post is pointing at an important consideration, but I want to flag that it doesn't acknowledge or address my own primary cruxes, which focus on "what social patterns generate, in humans, the most intellectual progress over time." This feels related to Vaniver's comment.

One sub-crux is "people don't get sick of you and stop talking to you" (or, people get sick of a given discussion area being drama-prone)

Another sub-crux is "phrasing things in a triggery way makes people feel less safe (and then less willing to open up and share vulnerable information), and also makes people more fight-minded and think less rationally (i.e. less able to process information correctly)."

My overall claim is that thick skin, social courage (and/or obliviousness), and tact are all epistemic virtues.

I see you arguing for thick skin and social courage/obliviousness, and I agree, but your arguments prove too much: they don't seem to engage at all with the actual social question of how to build a truthseeking institution, and don't explore much where tact is actually important.

To be clear: I think it's an important virtue to cultivate thick skin, and the ability to hear unpleasant feedba...

Said Achmiz:
This formulation presupposes that Zack doesn’t know how to phrase things “tactfully”. Is that the case? Or, is it instead the case that he knows how, but doesn’t think that it’s a good idea, or doesn’t think it’s worth the effort, or some other such thing?
Zack_M_Davis:
Well, it wouldn't be tactful to suggest that I know how to be tactful and am deliberately choosing not to do so.
Said Achmiz:
It seems to me like this points to some degree of equivocation in the usage of “tact” and related words.

As I’ve seen the words used, to call something “tactless” is to say that it’s noticeably and unusually rude, lacking in politeness, etc. Importantly, one would never describe something as “tactless” which could be described as “appropriate”, “reasonable”, etc. To call an action (including a speech act of any sort) “tactless” is to say that it’s a mistake to have taken that action. It’s the connotations of such usage which are imported and made use of, when one accuses someone of lacking “tact”, and expects third parties to condemn the accused, should they concur with the characterization.

But the way that I see “tact” used in these discussions we’ve been having (including in Raemon’s top-level comment at the top of this comment thread) doesn’t match the above-described usage. Rather, it seems to me to refer to some practice of going beyond what might be called “appropriate” or “reasonable”, and actually, e.g., taking various positive steps to counteract various neuroses of one’s interlocutor. But if that is what we mean by “tact”, then it hardly deserves the connotations that the usual usage comes with!
Zack_M_Davis:
Isn't the whole problem that different people don't seem to agree on what's reasonable or appropriate, and what's normal human behavior rather than a dysfunctional neurosis? I don't think equivocation is the problem here; I think you (we) need to make the empirical case that hugbox cultures are dysfunctional.
Said Achmiz:
No, I don’t think so. That is—it’s true that different people don’t always agree on this, but I don’t think this is the problem. Why? Because when you use words like “tact” (and “tactful”, “tactless”, etc.), you implicitly refer to what’s acceptable in society as a whole (or commonly understood to be acceptable in whatever sort of social context you’re in). (Otherwise, what you’re talking about isn’t “tact” or “social graces”, but something else—perhaps “consideration”, or “solicitousness”, or some such?)

Making that case is good, but that’s a separate matter.

EDIT: Let me clarify something that may perhaps not have been obvious. The reason I said (in the grandparent) that the preceding exchange “points to some degree of equivocation in the usage of ‘tact’ and related words” is the following apparent paradox: On the ordinary meaning of the word “tact” (as it’s used in wider society, beyond Less Wrong), deliberately choosing not to employ tact is usually a bad thing (i.e., not justified by any reasonable personal goal, and detrimental to most plausible collective goals). But as Raemon seems to be using the word “tact”, deliberately choosing not to employ tact seems not just unproblematic, but often actively beneficial, and sometimes (given some plausible personal and/or collective goals) even ethically obligatory! This strongly suggests that these two usages of the word “tact” in fact refer to two very different things.
Zack_M_Davis:
Thanks for articulating a specific way in which you think I'm being systematically dumb! This is super helpful, because it makes it clear how to proceed: I can either bite the bullet ("Yes, and I'd be right to keep generating such reasons, because ...") or try to provide evidence that I'm not being stupid in that particular way. As it happens, I do not want to bite this bullet; I think I'm smarter than your model of me, and I'm eager to prove it by addressing your cruxes. (I wouldn't expect you to take my word for it.)

I agree that this is a real risk![1] You mention Vaniver's comment, which mentions that the Royal Society prioritized keeping the conversation going. I think I also prioritize this: in yet-unpublished work,[2] I talk about how in politically charged Twitter discussions, I sometimes try to use the minimal amount of strategic bad faith needed to keep the discussion going, when I suspect my interlocutor would hang up the phone if they knew what I was really thinking.

All other things being equal, I agree that this is a relevant consideration. Correspondingly, I think I do pay a fair amount of attention to word choice depending on what I'm trying to convey to what audience. I admit that I often end up going with a relatively "fighty" tone when it feels appropriate for what I'm trying to do, but ... I also often don't? If someone wanted to persuade me to change my policy here, I'd need specific examples of things I've written that are allegedly making people feel unsafe. I suspect a crux there is that I'm more likely to interpret feelings of unsafety as a decision-theoretic extortion attempt, that sometimes people feel unsafe because the elephant in their brain can predict that others will offer to distort shared maps as a concession to make them feel safe.

Did you notice how I started this comment by thanking you for expressing a negative opinion of my rationality? That was very deliberate on my part: I'm trying to make it cheap to criticize me. It m...
Said Achmiz:
What is meant by “safe” in this context? EDIT: Same question re: “triggery”.
Zack_M_Davis:
People feel "safe" when their interests aren't being threatened. (Usually the relevant interests are social in nature; we're not talking about safety from physical illness or injury.) This is relevant to the topic of what discourse norms support intellectual progress, because people who feel unsafe are likely to lie, obfuscate, stonewall, &c. as part of attempts to become more safe. If you want people to tell the truth (goes the theory), you need to make them feel safe first. I will illustrate with a hypothetical but realistic example. Sometimes people write a comment that seems to contradict something they said in an earlier comment. Suppose that on Forum A, other commenters who notice this are likely to say something like, "That's not what you said earlier! Were you lying then, or are you lying now, huh?!" but that on Forum B, other commenters are likely to say something like, "This seems in tension with what you said earlier; could you clarify?" The culture of Forum B seems better at making it feel "safe" to change one's mind without one's social interest in not-being-called-a-liar being threatened. I'm sure you can think of reasons why this illustration doesn't address most appeals to "safety" on this website, but you asked a question, and I am answering it as part of my service to the Church of Arbitrarily Large Amounts of Intepretive Labor. (You don't believe in interpretive labor, but Ray doesn't believe in answering all of Said's annoying questions, so it's my job to fill in the gap.)
tailcalled:
In this case Forum B has a better culture than Forum A. People might change their mind, have nuanced opinions, or similar. It is only when people fail to engage with the point of the contradiction or give a nonsensical response that accusations of lying seem appropriate, unless one already has evidence that the person is a liar.
Said Achmiz:
Hmm, I see. That usage makes sense in the context of the hypothetical example. But— … indeed. Thanks! However, I have a follow-up question, if you don’t mind: Are you confident that one or more of the usages of “safe” which you described (of which there were two in your comment, by my count) was the one which Raemon intended…?
Zack_M_Davis:
I think I'll go up to 85% confidence that Raemon will affirm the grandparent as a "close enough" explanation of what he means by safe. ("Close enough" meaning, I don't particularly expect Ray to have thought about how to reduce the meaning of safe and independently come up with the same explanation as me, but I'm predicting that he won't report major disagreement with my account after reading it.)
Raemon:

It's similar (I definitely felt it was a good faith attempt and captured at least some of it).

But I think the type-signature of what I meant was more like "a physiological response" than like "a belief about what will happen". I do think people are more likely to have that physiological response if they feel their interests are threatened, but there's more to it than that.

Here are a few examples worth examining:

  1. On a public webforum, Alice (a medium-high-ish status person, say) makes a comment that A) threatens Bob's interests, B) indicates they don't understand that they have threatened Bob's interests (so they aren't even tracking it as a cost/concern)
     
  2. #1, but Alice does convey they understood Bob's interests, and thinks in this case it's worth sacrificing them for some other purpose
     
  3. Same as #1, but on a private slack channel (where Bob doesn't viscerally feel the thing is likely to immediately spiral out of control)
     
  4. Same as #1, but it's in a cozy cabin with a fireplace, or maybe outdoors near some beautiful trees and a nice stream or something.
     
  5. Same as #4, but the conversation by the fireplace is being broadcast live to the world. 
     
  6. Same as #4 (threaten...
Said Achmiz:
In such cases where these physiological responses are not truth-tracking, then surely the correct remedy is to rectify that mismatch, not to force the people to whose words the responses are responding to speak and write differently…?

In other words, if I say something and you believe that my words somehow put you in some sort of danger (or, threaten your interests), or that my words signal that my actions will have such effects, then that’s perhaps a conflict between us which it may be productive for us to address. On the other hand, if you have some sort of physiological response or feeling (aside: the concept of an alief seems like a good match for what you’re referring to, no?) about my words, but you do not believe that feeling tracks the truth about whether there’s any threat to you or your interests[1]… then what is there to discuss? And what do I have to do with this? This is a bug, in your cognition, for you to fix. What possible justification could you have for involving me in this? (And certainly, to suggest that I am somehow to blame, and that the burden is on me to avoid triggering such bugs—well, that would be quite beyond the pale!)

----------------------------------------

1. The second clause is necessary, because if you have a “physiological response” but you believe it to be truth-tracking—i.e., you also have a belief of threat and not just an alief—then we can (and should) simply discuss the belief, and have no need even to mention the “feeling”.
Raemon:
I think a truth-tracking community should do whatever is cheapest / most effective here. (which I think includes both people learning to deal with their physiological responses on their own, and also learning not to communicate in a way that predictably causes certain physiological responses)
Zack_M_Davis:
What's in it for me? Suppose I've never heard of this—troop-tricking comity?—or whatever it is you said. Sell me on it. If I learn not to communicate in a way that predictably causes certain physiological responses, like your co-mutiny is asking me to do, what concrete, specific membership benefits does the co-mutiny give me in return? It's got to be something really good, right? Because if you couldn't point to any benefits, then there would be no reason for anyone to care about joining your roof-tacking impunity, or even bother remembering its name.
Said Achmiz:
This sort of “naive utilitarianism” is a terrible idea for reasons which we are (or should be!) very well familiar with.
Said Achmiz:
I think that this is very wrong, in multiple ways.

First and most obviously, if such “more tactful”[1] formulations cost more to produce, then that is a way in which using them would not be strictly better, even if it was better on net.

Second, even if the “more tactful” formulations are no more costly to produce, they are definitely more costly to read (or otherwise parse), for at least some (and possibly most) readers (or hearers, etc.). (Simple length is one obvious reason for this, though not the only one by any means; complexity, ambiguity, etc., also contribute.)

Third, if the “more tactful” formulations are less effective (and not merely less efficient!)—for example, by increasing the probability of communication errors—then using them would be directly detrimental, even ignoring any costs that doing so might impose.

Fourth, if “less tactful” formulations act as a filter against people who are more easily “triggered”, who are more likely to become annoyed at lack of “tact”, who are prone to entering a “political frame”, etc., and if, furthermore, having such people is detrimental on net (perhaps because communicating productively with them imposes various costs, or perhaps because they have a tendency to attempt to force changes to local communicative or other practices, which are harmful to the goal or the organization), then it is in fact good to use “less tactful” formulations precisely because they “trigger people”, “make people annoyed enough that they leave”, etc.

It is possible that an intellectual community should expect that people are capable of doing this, but also that said community should expect, not only that people are also capable of not doing this, but in fact that they actually don’t do this.

----------------------------------------

1. I am not sure if this is a short summary label which you’d endorse; you use the word “tact” elsewhere in your comment, so it seemed like a decent guess. If not, feel free to provide a comparably compa...

The advice this post points to is probably useful for some people, but I think LessWrongers are the last people who need to be told to be less socially graceful in favor of more epistemic virtue. So much basic kindness is already lacking in the way that many rationalists interact, and it's often deeply painful to be around.

Also, I just don't really buy that there's a necessary, direct tradeoff between epistemic virtue and social grace. I am quite blunt, honest, and (I believe) epistemically virtuous, but I still generally interact in a way that endears me to people and makes them feel listened to and not attacked. (If you know me, feel free to comment/agree/disagree on this statement.) I'm not saying that all of my interactions are 100% successful in this regard, but I think I come across as basically kind and socially graceful without sacrificing honesty or epistemics.

Said Achmiz:
I would certainly have thought this, but recent experience has shown the diametric opposite to be true. The OP’s advice is sorely needed here more than almost anywhere else. In particular, it is not just that LessWrongers need to be told to be less socially graceful, but—and especially—that they need to be told to demand less “social grace” (if what’s demanded even deserves such a respectful term) from others.

I agree with this. But it’s precisely the “basic kindness” which doesn’t interfere with “epistemic virtues” that rationalists are unusually bad at; and, conversely, precisely the “basic kindness” (though, again, I consider this to be a tendentious description in that case) which does interfere with “epistemic virtues” that’s most commonly demanded. This leaves us with the worst of both worlds.

I do not know you personally, so I certainly can’t dispute nor affirm this claim. But it does seem to me to be an entirely plausible claim… if, and only if, we construe “social grace” in such a way that rules out its interference with epistemics (cf. this comment). Now, I think that this is a reasonable use of the term “social grace” (and for this reason I think that Zack has made a somewhat unfortunate word choice in the post’s title). The trouble is, such a construal makes your claim a question-begging one.

And if what you mean is that, for example, in a scenario like the Feynman story in the OP, you would nevertheless attend to social status, behave with deference, couch your disagreements in qualifications, avoid outright saying to people’s faces that they’re wrong or that their idea is bad, etc., etc., well… then I think that your claim that such “social grace” doesn’t interfere with “epistemic virtue” is just flat-out false.

The world of The Invention of Lying is simpler, clearer, easier to navigate than our world.

I don’t think this is true.[1] Now, you say, by way of expansion:

There, you don’t have to worry whether people don’t like you and are planning to harm your interests. They’ll tell you.

And that’s true. But does this (and all the other ways in which “radical honesty” manifests) actually translate into “simpler, clearer, easier to navigate”?

It seems to me that one of the things that makes our society fairly simple to navigate most of the time is that you can act as if everyone around you doesn’t care about you one way or the other, and will behave toward you in the ways prescribed by their professional and other formal obligations, and otherwise will neither help nor hinder you. Of course there are many important exceptions, but this is the default state. Its great virtue is that it vastly reduces the amount of “social processing” that we have to do as we go about our daily lives, freeing up our cognitive resources for other things—and enabling our modern technological civilization to exist.

Of course, this default state is accomplished partly by actually having most people mostly not care...

I think a society without lying would have other means of maintaining the social interface layer. For instance, when queried about how they feel about you, people might say things like "I quite dislike you, but don't have any plans to act on it, so don't worry about it". In our world this would be a worrying thing to hear, but in the hypothetical, you could just go on with your day without thinking about it further.

cubefox:
We would also be perfectly used to it.
Said Achmiz:
Let me note, as a counterpoint to the above comment, that I agree wholeheartedly with the post’s thesis (as expressed in the last two paragraphs). I just think that the film does not make for a very good illustration of the point. The Feynman anecdote (even if we treat it as semi-fictional itself) is a much better example, because it exhibits the key qualities of a situation where the argument applies most forcefully:

1. There is a clear objective;
2. The objective deals with physical reality, not social reality, so maneuvering in social reality can only hinder it, not help;
3. Everyone involved shares the formal goal of achieving the objective.

In such a case, deploying the objections alluded to in the OP’s second-to-last paragraph is simply a mistake (or else deliberate sabotage, perhaps to further one’s own social aims, to the detriment of the common goal). We might perhaps find plausible justifications (or even good reasons), in everyday life, for considering people’s feelings about true claims, or for behaving in a way that signals recognition of social status, or what have you; but in a case where we’re supposed to be building a working nuclear weapon, or (say) solving AI alignment, it’s radically inappropriate—indeed, quite possibly collectively-suicidal—to carry on such obfuscations.
Richard_Kennaway:
"Good fences make good neighbours." Honesty does not require blurting out everything that passes through one's stream of consciousness (or unconsciousness, as the case may be). To take the scene from The Invention of Lying, I am not interested in a waiter's opinions about anything but the menu, and as the man on the date I would bluntly (but not rudely) tell him so. Is it true? Is it relevant? Is it important? If the answer is no any of these, keep silent.

Disagree. Social graces are not only about polite lies; they are also social decision procedures for maintaining game-theoretic equilibria whose payoff structures favor cooperation.

I've observed the thesis posited here before IRL and it appeared to be motivated reasoning about the person's underlying proclivity towards disagreeableness. I can sympathize as I used to test in the 98th percentile on disagreeableness, but realized this was a bad strategy and ameliorated it somewhat.

Raemon:

A slight variation on this, less opinionated about whether the payoff structures are actually "better" (which I think varies; sometimes the equilibrium is bad and it's good to disrupt it): at the very least, there is some kind of equilibrium, and being radically honest or blunt doesn't just mean "same situation but with more honesty"; it's "pretty different situation in the first place."

Like, I think "the Invention of Lying" example is notably an incoherent world that doesn't make any goddamn sense (and it feels sort of important that the OP doesn't mention this). In the world where everyone was radically honest, you wouldn't end up with "current dating equilibria but people are rude-by-current standards", you'd end up in some entirely different dating equilibria.

aphyer:

This seems to assume that social graces represent cooperative social strategies, rather than adversarial social strategies. I don't think this is always the case.

Consider a couple discussing where to go to dinner. Both keep saying 'oh, I'm fine to go anywhere, where do you want to go?' This definitely sounds very polite! Much more socially-graceful than 'I want to go to this place! We leave at 6!'

Yet I'd assert that most of the time this represents these people playing social games adversarially against one another.

If you name a place and I agree to go there (especially if I do so in just the right tone of pseudo-suppressed reluctance), it feels like you owe me one.

If you name a place and then something goes wrong - the food is bad, the service is slow, there is a long wait - it feels like I can blame you for that.

What looks like politeness is better thought of as these people fighting one another in deniable and destructive ways for social standing. Opting out of that seems like a good thing: if the Invention Of Lying people say 'I would like to go to this place, but not enough to pay large social costs to do so,' that seems more honest and more cooperative.

GuySrinivasan:
I believe the common case of mutual "where do you want to go?" is motivated by not wanting to feel like you're imposing, not some kind of adversarial game. Maybe I'm bubbled though?
Said Achmiz:
That is an adversarial game—the game of avoiding having to expend cognitive effort and/or “social currency”.
GuySrinivasan:
No, that is a cooperative game that both participants are playing poorly.
Said Achmiz:
This seems substantially less likely a priori. What convinced you of this?
GuySrinivasan:
What convinced you that adversarial games between friends are more likely a priori? In my experience the vast majority of interactions between friends are cooperative, attempts at mutual benefit, etc. If a friend needs help, you do not say "how can I extract the most value from this", you say "let me help".* Which I guess is what convinced me. And is also why I wrote "Maybe I'm bubbled though?" Is it really the case for you that you look upon people you think of as friends and say "ah, observe all the adversarial games"?

*Sure, over time, maybe you notice that you're helping more than being helped, and you can evaluate your friendship and decide what you value and set boundaries and things, but the thing going through your head at the time is not "am I gaining more social capital from this than the amount of whatever I lose from helping as opposed to what, otherwise, I would most want to do". Well, my head.
Said Achmiz:
Indeed not. Among my friends, the “mutual ‘where do you want to go?’ scenario” doesn’t happen in the first place. If it did, it would of course be an adversarial game; but it does not, for precisely the reason that adversarial games among friends are rare.
Archimedes:
Adversarial gaming doesn't match my experience much at all, and suggesting options doesn't feel imposing either. For me at least, it's largely about the responsibility and mental exertion of planning.

In my experience, mutual "where do you want to go" is most often when neither party has a strong preference and neither feels like taking on the cognitive burden of weighing options to come to a decision. Making decisions takes effort, especially when there isn't a clearly articulated set of options and tradeoffs to consider.

For practical purposes, one person should provide 2-4 options they're OK with and the other person can pick one option or veto some option(s). If they veto all given options, they must provide their own set of options the first person can choose or veto. Repeat as needed, but rarely is more than one round needed unless participants are picky or disagreeable.

I am skeptical of this account, because I’m pretty high on disagreeableness, but have never particularly felt compelled to practice “radical honesty” in social situations (like dating or what have you).

It seems to me (as I describe in my top-level comment thread) that “not being radically honest, and instead behaving more or less as socially prescribed” has its quite sensible and useful role, but also that trying to enforce “social graces” in situations where you’re trying to accomplish some practical task is foolish and detrimental to effectiveness. I don’t see that there’s any contradiction here; and it seems to me that something other than “disagreeableness” is the culprit behind any errors in applying these generally sensible principles.

Eric Neyman:
This sounds interesting. For the sake of concreteness, could you give a couple of central examples of this?
tailcalled:
Zack gives some examples in the post; do you have any examples to illustrate your point?
interstice:
Do you disagree that lack of social grace is an epistemic virtue, though? Social skills might indeed be useful for maintaining cooperative coalitions, but this doesn't necessarily conflict with the thesis of the post. I guess some social graces don't involve polite lies (like saying "good morning" to people when meeting them), but a lot of them do, and I think those that do can only be explained by ongoing or past deception or short-range emotional management (arguably another sort of deception).

I think this misses the extent to which a lot of “social grace” doesn't actually decrease the amount of information conveyed; it's purely aesthetic — it's about finding comparatively more pleasant ways to get the point across. You say — well, you say “I think she's a little out of your league” instead of saying “you're ugly”. But you expect the ugly man to recognise the script you're using, and grok that you're telling him he's ugly! The same actual, underlying information is conveyed!

The cliché with masters of etiquette is that they can fight subtle duels of implied insults and deferences, all without a clueless shmoe who wandered into the parlour even realising. The kind of politeness that actually impedes transmission of information is a misfire; a blunder. (Though in some cases it's the person who doesn't get it who would be considered “to blame”.)

Obviously it's not always like this. And rationalists might still say “why are we spending all this brainpower encrypting our conversations just so that the other guy can decrypt them again? it's unnecessary at best”. But I don't grant your premise that social grace is fundamentally about actual obfuscation rather than pretend-obfuscation.

What is the function of pretend-obfuscation, though? I don't think that the brainpower expenditure of encrypting conversations so that other people can decrypt them again is unnecessary at best; I think it's typically serving the specific function of using the same message to communicate to some audiences but not others, like an ambiguous bribe offer that corrupt officeholders know how to interpret, but third parties can't blow the whistle on.

In general, when you find yourself defending against an accusation of deception by saying, "But nobody was really fooled", what that amounts to is the claim that anyone who was fooled, isn't "somebody".

(All this would be unnecessary if everyone wanted everyone else to have maximally accurate beliefs, but that's not what social animals are designed to do.)

I basically expect this style of analysis to apply to "more pleasant ways to get the point across", but in a complicated way that doesn't respect our traditional notions of agency and personhood. If there's some part of my brain that takes offense at hearing overtly negative-valence things about me, "gentle" negative feedback that avoids triggering that part could be said to be "deceiving" it ...

RobertM:
As an empirical matter of fact (per my anecdotal observations), it is very easy to derail conversations by "refusing to employ the bare minimum of social grace". This does not require deception, though often it may require more effort to clear some threshold of "social grace" while communicating the same information. People vary widely, but:

* I think that most people (95%+) are at significant risk of being cognitively hijacked if they perceive rudeness, hostility, etc. from their interlocutor.
* I don't personally think I'd benefit from strongly selecting for conversational partners who are at low risk of being cognitively hijacked, and I think nearly all people who do believe that they'd benefit from this (compared to counterfactuals like "they operate unchanged in their current social environment" or "they put in some additional marginal effort to say true things with more social grace") are mistaken.
* Online conversations are one-to-many, not one-to-one. This multiplies the potential cost of that cognitive hijacking.

Obviously there are issues with incentives toward fragility here, but the fact that there does not, as far as I'm aware, exist any intellectually generative community which operates on the norms you're advocating for, is evidence that such a community is (currently) unsustainable.

I don't personally think I'd benefit from strongly selecting for conversational partners who are at low risk of being cognitively hijacked, and I think nearly all people who do believe that they'd benefit from this [...] are mistaken.

I find this claim surprising and would be very interested to hear more about why you think this!!

I think the case for benefit is straightforward: if your interlocutors are selected for low risk of getting triggered, there's a wider space of ideas you can explore without worrying about offending them. Do you disagree with that case for benefit? If so, why? If not, presumably you think the benefit is outweighed by other costs—but what are those costs, specifically? (Are non-hijackable people dumber—or more realistically, do they have systematic biases that can only be corrected by hijackable people? What might those biases be, specifically?)

there does not, as far as I'm aware, exist any intellectually generative community which operates on the norms you're advocating for

How large does something need to be in order to be a "community"? Anecdotally, my relationships with my "fighty"/disagreeable friends seem more intellectually generative than the t...

RobertM:
Some costs:

* Such people seem much more likely to also themselves be fairly disagreeable.
* There are many fewer of them. I think I've probably gotten net-positive value out of my interactions with them to date, but I've definitely gotten a lot of value out of interactions with many people who wouldn't fit the bill, and selecting against them would be a mistake.
* To be clear, if I were to select people to interact with primarily on whatever qualities I expect to result in the most useful intellectual progress, I do expect that those people would both be at lower risk of being cognitively hijacked and more disagreeable than the general population. But the correlation isn't overwhelming, and selecting primarily for "low risk of being cognitively hijacked" would not get me as much of the useful thing I actually want.

As I mentioned in my reply to Said, I did in fact have medium-sized online communities in mind when writing that comment. I agree that stronger social bonds between individuals will usually change the calculus on communication norms. I also suspect that it's positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] for larger communities.

1. I think basically impossible in nearly all cases, but don't have legible justifications for that degree of belief.
Zack_M_Davis:
Sure, but the same arguments go through for, say, mathematical ability, right? The correlation between math-smarts and the kind of intellectual progress we're (ostensibly) trying to achieve on this website isn't overwhelming; selecting primarily for math prowess would get you less advanced rationality when the tails come apart. And yet, I would not take this as a reason not to "structure communities like LessWrong in ways which optimize for participants being further along on this axis" for fear of "driving away [a ...] fraction of an existing community's membership". In my own intellectual history, I studied a lot of math and compsci stuff because the culture of the Overcoming Bias comment section of 2008 made that seem like a noble and high-status thing to do. A website that catered to my youthful ignorance instead of challenging me to remediate it would have made me weaker rather than stronger.
RobertM:
LessWrong is obviously structured in ways which optimize for participants being quite far along that axis relative to the general population; the question is whether further optimization is good or bad on the margin.
Zack_M_Davis:
I think we need an individualist conflict-theoretic rather than a collective mistake-theoretic perspective to make sense of what's going on here. If the community were being optimized by the God-Empress, who is responsible for the whole community and everything in it, then She would decide whether more or less math is good on the margin for Her purposes. But actually, there's no such thing as the God-Empress; there are individual men and women, and there are families. That's the context in which Said's plea to "keep your thumb off the scales, as much as possible" can even be coherent. (If there were a God-Empress determining the whole community and everything in it as definitely as an author determines the words in a novel, then you couldn't ask Her to keep Her thumb off the scales. What would that even mean?)

In contrast to the God-Empress, mortals have been known to make use of a computational shortcut they call "not my problem". If I make a post, and you say, "This has too many equations in it; people don't want to read a website with too many equations; you're driving off more value to the community than you're creating", it only makes sense to think of this as a disagreement if I've accepted the premise that my job is to optimize the whole community and everything in it, rather than to make good posts. If my position is instead, "I thought it was a good post; if it drives away people who don't like equations, that's not my problem," then what we have is a conflict rather than a disagreement.
Said Achmiz:
Indeed. In fact, we can take this analysis further, as follows:

If there are people whose problem it is to optimize the whole community and everything in it (let us skip for the moment the questions of why this is those people’s problem, and who decided that it should be, and how), then those people might say to you: “Indeed it is not your problem, to begin with; it is mine; I must solve it; and my approach to solving this problem is to make it your problem, by the power vested in me.”

At that point you have various options: accede and cooperate, refuse and resist, perhaps others… but what you no longer have is the option of shrugging and saying “not my problem”, because in the course of the conflict which ensued when you initially shrugged thus, the problem has now been imposed upon you by force.

Of course, there are those questions which we skipped—why is this “problem” a problem for those people in authority; who decided this, and how; why are they in authority to begin with, and why do they have the powers that they have; how does this state of affairs comport with our interests, and what shall we do about it if the answer is “not very well”; and others in this vein. And, likewise, if we take the “refuse and resist” option, we can start a more general conversation about what we, collectively, are trying to accomplish, and what states of affairs “we” (i.e., the authorities, who may or may not represent our interests, and may or may not claim to do so) should take as problems to be solved, etc.

In short, this is an inescapably political question, with all the usual implications. It can be approached mistake-theoretically only if all involved (a) agree on the goals of the whole enterprise, and (b) represent honestly, in discussion with one another, their respective individual goals in participating in said enterprise. (And, obviously, assuming that (a) and (b) hold, as a starting point for discussion, is unwise, to say the least!)
Said Achmiz:
This seems diametrically wrong to me. I would say that it’s difficult (though by no means impossible) for an individual to change in this way, but very easy for a community to do so—through selective (and, to a lesser degree, structural) methods. (But I suspect you were thinking of corrective methods instead, and for that reason judged the task to be “basically impossible”—no?)
RobertM:
No, I meant that it's very difficult to do so for a community without it being net-negative with respect to valuable things coming out of the community.  Obviously you can create a new community by driving away an arbitrarily large fraction of an existing community's membership; this is not a very interesting claim.  And obviously having some specific composition of members does not necessarily lead to valuable output, but whether this gets better or worse is mostly an empirical question, and I've already asked for evidence on the subject.
Said Achmiz:
Is it not? Why? In my experience, it’s entirely possible for a community to be improved by getting rid of some fraction of its members. (Of course, it is usually then desirable to add some new members, different from the departed ones—but the effect of the departures themselves may help to draw in new members, of a sort who would not have joined the community as it was. And, in any case, new members may be attracted by all the usual means.) As for your empirical claims (“it’s very difficult to do so for a community without it being net-negative …”, etc.), I definitely don’t agree, but it’s not clear what sort of evidence I could provide (nor what you could provide to support your view of things)…
Said Achmiz:
Would you include yourself in that 95%+?

There certainly exist such communities. I’ve been part of multiple such, and have heard reports of numerous others.
RobertM:
Probably; I think I'm maybe in the 80th or 90th percentile on the axis of "can resist being hijacked", but not 95th or higher.

Can you list some? On a reread, my initial claim was too broad, in the sense that there are many things that could be called "intellectually generative communities" which could qualify, but they mostly aren't the thing I care about (in context, not-tiny online communities where most members don't have strong personal social ties to most other members).
Said Achmiz:
Suppose you could move up along that axis, to the 95th percentile. Would you consider that a change for the better? For the worse? A neutral shift?

I’m afraid I must decline to list any of the currently existing such communities which I have in mind, for reasons of prudence (or paranoia, if you like). (However, I will say that there is a very good chance that you’ve used websites or other software which were created in one of these places, or benefited from technological advances which were developed in one of these places.)

As for now-defunct such communities, though—well, there are many examples, although most of the ones I’m familiar with are domain-specific. A major category of such were web forums devoted to some hobby or other (D&D, World of Warcraft, other games), many of which were truly wondrous wellsprings of creativity and inventiveness in their respective domains—and which had norms basically identical to what Zack advocates.
RobertM:
All else equal, better, of course. (In reality, all else is rarely equal; at a minimum there are opportunity costs.)

See my response to Zack (and previous response to you) for clarification on the kinds of communities I had in mind; certainly I think such things are possible (& sometimes desirable) in more constrained circumstances.

ETA: and while in this case I have no particular reason to doubt your report that such communities exist, I have substantial reason to believe that if you were to share what those communities were with me, I probably wouldn't find that most of them were meaningful counterevidence to my claim (for a variety of reasons, including that my initial claim was overbroad).
Said Achmiz:
Sure, opportunity costs are always a complication, but in this case they are somewhat beside the point. If indeed it’s better to be further along this axis (all else being equal), then it seems like a bad idea to encourage and incentivize being lower on this axis, and to discourage and disincentivize being further on it. But that is just what I see happening!
RobertM:
The consequent does not follow.  It might be better for an individual to press a button, if pressing that button were free, which moved them further along that axis.  It is not obviously better to structure communities like LessWrong in ways which optimize for participants being further along on this axis, both because this is not a reliable proxy for the thing we actually care about and because it's not free.
Said Achmiz:
That it’s “not free” is a trivial claim (very few things are truly free), but that it costs very little, to—not even encourage moving upward along that axis, but simply to avoid encouraging the opposite—to keep your thumb off the scales, as much as possible—this seems to me to be hard to dispute. Could you elaborate? What is the thing we actually care about, and what is the unreliable proxy?
Said Achmiz:
Sorry, I’m not quite sure which “previous response” you refer to. Link, please?
RobertM:
https://www.lesswrong.com/posts/h2Hk2c2Gp5sY4abQh/lack-of-social-grace-is-an-epistemic-virtue?commentId=QQxjoGE24o6fz7CYm https://www.lesswrong.com/posts/h2Hk2c2Gp5sY4abQh/lack-of-social-grace-is-an-epistemic-virtue?commentId=Dy3uyzgvd2P9RZre6
Said Achmiz:
So, “not-tiny online communities where most members don’t have strong personal social ties to most other members”…? But of course that is exactly the sort of thing I had in mind, too. (What did you think I was talking about…?) Anyhow, please reconsider my claims, in light of this clarification.
Said Achmiz:
This is understandable, but in that case, do you care to reformulate your claim? I certainly don’t have any idea what you had in mind, given what you say here, so a clarification is in order, I think.
AnthonyC:
Choice of mode/aesthetics for conveying a message also conveys contextual information that often is useful. Who is this person, what is my relationship to them, what is their background, what do those things tell me about the likely assumptions and lenses through which they will be interpreting the things I say? In most cases verbal language is not sufficient to convey the entirety of a message, and even when it is, successful communication requires that the receiver is using the right tools for interpretation.

Yes, in practice this can be (and is) used to hide corruption, enforce class and status hierarchies, and so on, in addition to the use case of caring about how the message affects the recipient's emotional state. It can also be used to point at information that is taboo, in scenarios where two individuals are not close enough to have common knowledge of each other's beliefs. Or in social situations (which is all of them when we're communicating at all; the difference is one of degree) it can be used to test someone's intelligence and personality, seeing how adroit they are at perceiving and sending signals and messages. See also this SSC post, if you haven't yet.

Filter also through a lens of the fact that humans very often have to talk to, work with, and have lasting relationships with people they don't like, don't know very well outside a narrow context, and don't trust much. Norms that obscure information that isn't supposed to be relevant, without making it impossible to convey such information, are useful, because it is not my goal, or my responsibility, to communicate those things. Politeness norms can thus help the speaker by ensuring they don't accidentally (and unnecessarily, and unambiguously) convey information they didn't mean to, which doesn't pertain to the matter at hand, and which the other party has no right to obtain. And they can help the listener by enabling them to ignore ambiguous information that is none of their business. In the...
astridain:
Some of it might be actual-obfuscation if there are other people in the room, sure. But equally-intelligent equally-polite people are still expected to dance the dance even if they're alone.

Your last paragraph gets at what I think is the main thing, which is basically just an attempt at kindness. You find a nicer, subtler way to phrase the truth in order to avoid shocking/triggering the other person. If both people involved were idealised Bayesian agents this would be unnecessary, but idealised Bayesian agents don't have emotions, or at any rate they don't have emotions about communication methods. Humans, on the other hand, often do; and it's often not practical to try and train ourselves out of them completely; and even if it were, I don't think it's ultimately desirable. Idiosyncratic, arbitrary preferences are the salt of human nature; we shouldn't be trying to smooth them out, even if they're theoretically changeable to something more convenient. That way lies wireheading.

But equally-intelligent equally-polite people are still expected to dance the dance even if they're alone

I think this could be considered a sort of "residue" of the sort of deception Zack is talking about. If you imagine agents with different levels of social savviness, the savviest ones might adopt a deceptively polite phrasing, until the less savvy ones catch on, and so on down the line until everybody can interpret the signal correctly. But now the signaling equilibrium has shifted, so all communication uses the polite phrasing even though no one is fooled. I think this is probably the #2 source of deceptive politeness, with #1 being management of people's immediate emotional reactions, and #3 being ongoing deceptiveness.

qvalq (9mo):
Pretend-obfuscation prevents common knowledge.
gjm (9mo):
I think "I think she's a little out of your league"[1] doesn't convey the same information as "you're ugly" would, because (1) it's relative, and the possibly-ugly person might interpret it as "she's gorgeous", and (2) it's (in typical use, I think) broader than just physical appearance, so it might be commenting on the two people's wittiness or something, not just on their appearance.

It's not obvious to me how important this is to the difference in graciousness, but it feels to me as if saying that would be ruder if it did actually allow the person it was said to to infer "you're ugly" rather than merely "in some unspecified way(s) that may well have something to do with attractiveness, I rate her more highly than you". So in this case, at least, I think actual-obfuscation as well as pretend-obfuscation is involved.

[1] Parent actually says "you're a little out of her league", but I assume that's just a slip.
astridain (9mo):
That might be a fault with my choice of example. (I am not, in fact, a master of etiquette.) But I'm sure examples can be supplied where "the polite thing to say" is a euphemism that you absolutely do expect the other person to understand. At a certain level of obviousness and ubiquity, they tend to shift into figures of speech: “Your loved one has passed on” instead of “your loved one is dead”, say.

And yes, that was a typo. Your way of expressing it might be considered an example of such unobtrusive politeness. My guess is that you said “I assume that's just a slip” not because you have assigned noteworthy probability-mass to the hypothesis “astridain had a secretly brilliant reason for saying the opposite of what you'd expect and I just haven't figured it out”, but because it's nicer to pretend to care about that possibility than to bluntly say “you made an error”. It reduces the extent to which I feel stupid in the moment; and it conveys a general outlook of your continuing to treat me as a worthy conversation partner; and that's how I understand the note. I don't come away with a false belief that you were genuinely worried about the possibility that there was a brilliant reason I'd reversed the pronouns and you couldn't see it. You didn't expect me to, and you didn't expect anyone to. It's just a graceful way of correcting someone.
qvalq (9mo):
"Your loved one has passed on" I'm not sure I've ever used a euphemism (I don't know what a euphemism is). When should I?
RamblinDash (9mo):
We do this so that the ugly guy can get the message without creating Common Knowledge of his ugliness.
Said Achmiz (9mo):
Amount of information conveyed to whom? More pleasant for whom? Obfuscation from whom? Without these things, your account is underspecified. And if you specify these things, you may find that your claim is radically altered thereby.

While the framing of treating lack of social grace as a virtue captures something true, it's too incomplete and imo can't support its strong conclusion. The way I would put it is that you have correctly observed that, whatever the benefits of social grace are, it comes at a cost, and sometimes this cost is not worth paying. So in a discussion, if you decline to pay the cost of social grace, you can afford to buy other virtues instead.[1]

For example, it is socially graceful not to tell the Emperor Who Wears No Clothes that he wears no clothes. Whereas someone who lacks social grace is more likely to tell the emperor the truth.

But first of all, I disagree with the frame that lack of social grace is itself a virtue. In the case of the emperor, for example, the virtues are rather legibility and non-deception, traded off against whichever virtues the socially graceful response would've gotten.

And secondly, often the virtues you can buy with social grace are worth far more than whatever you could gain by declining to be socially graceful. For example, when discussing politics with someone of an opposing ideology, you could decline to be socially graceful and tell your interlocutor to the…

philh (7mo):
On the narrow question of Feynman's social graces, I only remember watching one video of his, and it did seem to back up the "he kinda lacks them" idea. From memory: an interviewer asks him "why is ice slippery" and he starts musing about "how do I explain this to you". The interviewer seems to get kind of a dismissive vibe (which I got too) and says "I think it's a fair question", and Feynman says "of course it's a fair question, it's an excellent question".

And now not from memory, here's the video. The question is actually about magnets; he starts pushing for more detail about "what are you actually asking", and that's when you get that exchange. I think the vibe I get is actually more aggressive than dismissive, like at times it seems he's angry at me. I assume it's just enthusiasm, but I feel like I'd find it uncomfortable to have a long conversation with him in that mode. That would be a shame, and hopefully I'd get used to it.

(Of course, "having / not having social graces" is way oversimplified. "Feynman was skilled in some social graces and unskilled in others" seems likely. And for all I know, maybe most people don't pick up an aggressive vibe from the video.)

But, also relevant: he does talk about ice, and this HN comment says his explanation is wrong. But he actually hedges that explanation. "It is in the case of ice that when you stand on it, they say, momentarily the pressure melts the ice a little bit."
philh (9mo):

Note that the Feynman anecdote contrasts Feynman's bluntness against everyone else's "too scared to speak up". There's no one in the story who says "I don't think that will work" instead of "that won't work", or "that seems like a bad idea" instead of "that's a damn fool idea". You assert afterwards that such a person would have been distracted from the thing Bohr wanted, but the anecdote doesn't particularly support or discredit that idea.

Zack_M_Davis (9mo):
You know, that's a good point!

Strong downvote. The post looks lousy. The relation between social grace, honesty, and truth-seeking is complicated and multidimensional. You didn't engage with this complexity. You didn't properly argue your point. You made a statement, then vaguely gestured in the direction of two examples.

The first example is not only fictional, but isn't even really relevant. The world without lies is in a way nicer to live in, because people reveal more information to you. It doesn't make you a superior truth-seeker. Now, would I prefer to live in such a world? Sure, me and every other autistic person. But this is an axiological issue, not an epistemological one.

The second example is more on point. It shows that it is epistemically useful to be able to talk to someone while ignoring status concerns, especially when people need it. This is a point I completely agree with. However, it doesn't generalise to "it's always epistemically better to lack any social grace", because 1) the same tool isn't the best for every job, and 2) social grace isn't just about status concerns.

There is a potentially interesting conversation with lots of nuance to be had here, which a superior version of this post woul…

Ben (9mo):

This is kind of an aside, but does this Feynman story strike anyone else as off? It's kind of too perfect. Not even subtly. It strikes me as "significantly exaggerated", at the very least.

interstice (9mo):
While I was reading I was thinking that Bohr might have contacted Feynman more because he was more competent than others than because he was more honest, but (ironically) it would be rude for Feynman to say that. It's also the case that being competent means you can be blunt without making a fool of yourself, so it's sort of a costly signal.
Zack_M_Davis (9mo):
We'll never know! Niels and Aage Bohr are both dead and can't offer a contradictory account. There does seem to be a tension between "all I could see of him was from between people's heads" and Bohr particularly noticing Feynman as unmoved by status. (Unless the noticeable thing was Feynman not particularly trying to be seen?)

My thinking on this point is that the only proper way to respect a great work is to treat it with the same fire that went into making it. Grovelling at Niels Bohr's feet is not as respectful as contending with his ideas and taking them seriously — and expending great mental effort on an intense, focused interlocution is an act of profound respect.

There's a difference between that and the sort of discourtesy displayed in the movie scene. Extending courtesy to a kind and virtuous person is a simple matter of justice. Comparing his face to a frog's is indelicate, whereas admitting plainly that you find him unattractive is just as honest without being as hurtful. If he wants a more specific inventory of his physical flaws, he can ask for elaboration.

philh (9mo):

Someone who felt uncomfortable with Feynman's bluntness and wanted to believe that there's no conflict between rationality and social graces might argue that Feynman's "simple proposition" is actually wrong insofar as it fails to appreciate the map–territory distinction: in saying, "No, it's not going to work", was not Feynman implicitly asserting that just because he couldn't see a way to make it work, it simply couldn't? ...

While not entirely without merit (it's true that the map is not the territory; it's true that authority is not without evidential weight), attending overmuch to such nuances distracts from worrying about the physics

Here's something I wrote earlier today: "I thought transactions wouldn't cause "wait for lock" unless requested explicitly, and I don't think we request it explicitly. But maybe I'm wrong there?"

I don't fully remember my epistemic state at the time, but I think I was pretty confident on both counts. But as it happens, I was wrong on the first count. This is the crucial piece of information we needed to understand what we were investigating.

I can imagine that I might instead have written "transactions won't cause "wait for lock" unless requested expl…
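The locking claim itself is checkable with a few lines of code. A minimal sketch, assuming SQLite purely for illustration (the comment doesn't say which database was involved; the file and table names here are hypothetical):

```python
import sqlite3

# Two connections to the same database file; "timeout" is the busy
# timeout in seconds, i.e. how long a writer waits for a lock.
a = sqlite3.connect("demo.db", timeout=1.0)
b = sqlite3.connect("demo.db", timeout=1.0)
a.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
a.commit()

# An ordinary uncommitted INSERT on connection A: nothing here
# requests a lock explicitly, but SQLite takes a write lock anyway.
a.execute("INSERT INTO t VALUES (1)")

# Connection B's INSERT now implicitly waits for that lock, and gives
# up after the one-second busy timeout.
try:
    b.execute("INSERT INTO t VALUES (2)")
except sqlite3.OperationalError as e:
    print(e)  # "database is locked"

a.commit()
```

That is, a transaction can cause a "wait for lock" with no explicit request anywhere in the client code, which is exactly the possibility the hedged phrasing left room for.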

Zack_M_Davis (9mo):
Thanks for commenting! I agree that it's good to communicate one's uncertainty when one is uncertain. (From a certain perspective, it's unfortunate that our brains and culture aren't set up to do this in a particularly nuanced way; we only know how to say "X" and "I think X" rather than sharing likelihood ratios.) Perhaps read the second half of this post as expressing anxiety about tone-policing of confident-sounding language being used for social status regulation rather than to optimize communication of actual uncertainty?
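Concretely, "sharing likelihood ratios" refers to the odds form of Bayes' rule (a standard identity, spelled out here for reference):

\[
\underbrace{\frac{P(H \mid E)}{P(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{likelihood ratio}}
\]

Reporting the likelihood ratio \(P(E \mid H)/P(E \mid \lnot H)\) would let each listener multiply it into their own prior odds, whereas "X" and "I think X" only convey coarse facts about the speaker's posterior.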
philh (9mo):
Nod, but then perhaps that part isn't saying "lack-of-grace is a virtue" so much as "a certain kind of criticism of lack-of-grace is a vice"? (I haven't reread with this possibility in mind.)

In any case, I think I'm fine with that kind of tone policing being used for social status regulation when the confidence is unjustified. I suppose you can say "if someone routinely talks with unjustified confidence, then eventually they'll be wrong, and they can take the status hit then"? But:

1. I think we can update faster than that. E.g. I recall Scott Adams said Trump would win 2016 with 99% probability or something? Trump did win, but I'm still comfortable judging this as overconfident without looking at his forecasting track record. (Though if someone were to look at his track record and found that he was well calibrated, I guess I'd have to be less comfortable.)

2. Often we never really learn the answer, e.g. with counterfactuals ("if the weather had been 3° colder that day, Hillary would have won") or claims about what's inside someone's head ("they claim to sincerely believe X, but they obviously are just saying that to avoid censure"). "Is this confidence justified?" is another example here.
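The track-record check in point 1 is mechanical enough to sketch: bucket a forecaster's stated probabilities and compare each bucket's average confidence with how often those predictions came true. A minimal sketch; the sample predictions at the bottom are made up for illustration:

```python
from collections import defaultdict

def calibration_table(predictions, n_buckets=10):
    """predictions: (stated_probability, outcome) pairs, with
    outcome 1 if the predicted event happened and 0 otherwise."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        # Bucket index 0..n_buckets-1; p == 1.0 goes in the top bucket.
        i = min(int(p * n_buckets), n_buckets - 1)
        buckets[i].append((p, outcome))
    for i in sorted(buckets):
        pairs = buckets[i]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        hit_rate = sum(o for _, o in pairs) / len(pairs)
        print(f"stated ~{mean_p:.2f}  actual {hit_rate:.2f}  (n={len(pairs)})")

# A well-calibrated forecaster's 99% claims should come true ~99% of the
# time; one miss at 99% is weak evidence, a pattern of them is damning.
calibration_table([(0.99, 0), (0.90, 1), (0.90, 1), (0.60, 1), (0.60, 0)])
```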

The thesis of the post is that people who are trying to maximize the accuracy of shared maps are going to end up being socially ungraceful sometimes, because sometimes social grace calls for obfuscating shared maps.

Criticism of unjustified confidence for being unjustified increases the accuracy of shared maps. Criticism of unjustified confidence for reasons of social status regulation is predictably not going to be limited to cases where the confidence is unjustified, even if it happens to be unjustified in a particular case.

Accuracy of shared maps is quantitative. A culture that's optimized for social grace isn't going to make people wrong about everything, and could make people less wrong about many things relative to many less graceful alternative cultures. (At minimum, if you're not allowed to be confident, you can't be overconfident; if you're not allowed to talk about what's inside someone's head, you can't be wrong about what's inside someone's head.)

philh (8mo):

Criticism of unjustified confidence for being unjustified increases the accuracy of shared maps. Criticism of unjustified confidence for reasons of social status regulation is predictably not going to be limited to cases where the confidence is unjustified, even if it happens to be unjustified in a particular case.

This sounds like it's contrasting "criticism for being unjustified" against "criticism for social status regulation". But those aren't the same use of the word "for", much like it would be weird to contrast "locking someone up for murder" against "locking someone up for deterrence". (Though "for deterrence" might be a different "for" again, I'm not sure.)

To unpack, when I said

I think I’m fine with that kind of tone policing being used for social status regulation when the confidence is unjustified.

I didn't intend to support someone being like "I want to do some social status regulation and I'm going to do it by tone policing some unjustified confidence". I meant to support "this is unjustified confidence, I want less of this and to that end I'm going to do some social status regulation through the mechanism of tone policing". I can't tell if you're yay-that or boo-…

Ninety-Three (9mo):
Scott Adams predicted Trump would win in a landslide. He wasn't just overconfident, he was wrong! The fact that he's not taking a status hit is because people keep reporting his prediction incompletely and no one bothers to confirm what he actually predicted (when I Google 'Scott Adams Trump prediction' in Incognito, the first two results say "landslide" in the first ten seconds and title, respectively). Your first case is an example of something much worse than not updating fast enough.
philh (9mo):
Thanks for the correction! Bad example on my part then. My guess is that the point is clear and fairly undisputed, and coming up with an actually correct example wouldn't be very helpful. Still a little embarrassing.

"Be like Feynman" is great advice for 0.01% of the population, and horrible for 99% (and irrelevant to the remainder).  In order to be valued for bluntness, one must be correct insanely often.   Otherwise, you have to share evidence rather than conclusions, and couching it in more pleasant terms makes it much more tolerable (again, for most but not all).

I do want to react to:

There, you don't have to worry whether people don't like you and are planning to harm your interests

Wait, that's if THEY CANNOT lie, not if you choose not to. Unilatera…

Said Achmiz (9mo):
Bluntness has nothing whatever to do with not sharing evidence, so this seems like a total red herring to me.

To a decision-theoretic agent, the value of information is always nonnegative

This seems false. If I selectively give you information in an adversarial manner, and you don't know that I'm picking the information to harm you, I think it's very clear that the value of the information you gain can be strongly negative. 
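For reference, the nonnegativity result the post appeals to says that, writing \(U(a, \theta)\) for the utility of action \(a\) in state \(\theta\) and \(s\) for the signal received (a sketch; notation mine, not from the thread):

\[
\mathbb{E}_{s}\!\left[\,\max_{a}\ \mathbb{E}\left[\,U(a,\theta) \mid s\,\right]\right]
\;\ge\;
\max_{a}\ \mathbb{E}\left[\,U(a,\theta)\,\right],
\]

because letting the action depend on the signal can only improve on the best signal-independent action. But the proof requires that the posterior \(P(\theta \mid s)\) be computed from the true joint distribution of states and signals. With adversarially filtered signals and a mis-specified model of the filtering, that premise fails, and acting on the "information" can indeed leave the agent worse off, as this comment points out.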

A lot of "social grace" is strategic deception. The out-of-his-league woman defers telling the guy he's getting nowhere as long as possible, just in case it turns out he's heir to a giant fortune or something.

And of course people suck up to big shots (the Feynman story) because they hope to associate with them and have some of their fame and reputation rub off on themselves. 

This is not irrational behavior, given human goals.

Viliam (9mo):
The problem is the deception, not the social grace. If we succeeded in removing social grace entirely, but people remained deceptive, we wouldn't get closer to the truth. We would only make our interactions less pleasant.
Herb Ingram (9mo):
That seems like a highly dubious explanation to me. I guess the woman's honest account (or what you'd get by examining her state of mind) would say that she does it as a matter of habit, aiming to be nice and conform to social conventions. If that's true, the question becomes where the convention comes from and what maintains it despite the naively plausible benefits one might hope to gain by breaking it.

I don't claim to understand this (that would hint at understanding a lot of human culture at a basic level). However, I strongly suspect the origins of such behavior (and what maintains it) to be social. I.e., a good explanation of why the woman has come to act this way involves more than two people. That might involve some sort of strategic deception, but consider that most people in fact want to be lied to in such situations. An explanation must go a lot deeper than that kind of strategic deception.
TAG (9mo):

The world of The Invention of Lying is simpler, clearer, easier to navigate than our world.

If you only remove lying, you end up with a world that contains a lot more of the negative consequences socially sanctioned lying is intended to avoid -- hurt feelings and so on.

To a decision-theoretic agent, the value of information is always nonnegative.

A boundary around one's mind enforced by a norm of not mind-reading people seems useful. When working on a problem, thoughts on that problem are appropriate to reveal, and counterproductive to drown in social graces, but that says little about the value of communicating everything that's feasible to communicate.