Yes Requires the Possibility of No

1. A group wants to try an activity that really requires a lot of group buy-in. The activity will not work as well if there is doubt that everyone really wants to do it. They establish common knowledge of the need for buy-in. They then have a group conversation in which several people make comments about how great the activity is and how much they want to do it. Everyone wants to do the activity, but is aware that if they did not want to do the activity, it would be awkward to admit. They do the activity. It goes poorly.

2. Alice strongly wants to believe A. She searches for evidence of A. She implements a biased search, ignoring evidence against A. She finds justifications for her conclusion. She can then point to the justifications, and tell herself that A is true. However, there is always this nagging thought in the back of her mind that maybe A is false. She never fully believes A as strongly as she would have believed it if she had just implemented an unbiased search, and found out that A was, in fact, true.

3. Bob wants Charlie to do a task for him. Bob phrases the request in a way that makes Charlie afraid to refuse. Charlie agrees to do the task. Charlie would have been happy to do the task otherwise, but now Charlie does the task while feeling resentful towards Bob for violating his consent.

4. Derek has an accomplishment. Others often talk about how great the accomplishment is. Derek has imposter syndrome and is unable to fully believe that the accomplishment is good. Part of this is due to a desire to appear humble, but part of it stems from Derek's lack of self-trust. Derek can see lots of pressures to believe that the accomplishment is good. Derek does not understand exactly how he thinks, and so is concerned that there might be a significant bias that could cause him to falsely conclude that the accomplishment is better than it is. Because of this he does not fully trust his inside view which says the accomplishment is good.

5. Eve has an aversion to doing B. She wants to eliminate this aversion. She tries to do an internal double crux with herself. She identifies a rational part of herself who can obviously see that it is good to do B. She identifies another part of herself that is afraid of B. The rational part thinks the other part is stupid and can't imagine being convinced that B is bad. The IDC fails, and Eve continues to have an aversion to B and internal conflict.

6. Frank's job or relationship is largely dependent on his belief in C. Frank really wants to have true beliefs, and so tries to figure out what is true. He mostly concludes that C is true, but has lingering doubts. He is unsure if he would have been able to conclude C is false under all the external pressure.

7. George gets a lot of social benefits out of believing D. He believes D with probability 80%, and this is enough for the social benefits. He considers searching for evidence of D. He thinks searching for evidence will likely increase the probability to 90%, but it has a small probability of decreasing the probability to 10%. He values the social benefit quite a bit, and chooses not to search for evidence because he is afraid of the risk. (These numbers are worked through in the sketch after this list.)

8. Harry sees lots of studies that conclude E. However, Harry also believes there is a systematic bias that makes studies that conclude E more likely to be published, accepted, and shared. Harry doubts E.

9. A Bayesian wants to increase his probability of proposition F, and is afraid of decreasing the probability. Every time he tries to find a way to increase his probability, he runs into an immovable wall called the conservation of expected evidence. In order to increase his probability of F, he must risk decreasing it, as the sketch below illustrates.
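A worked version of the arithmetic behind examples 7 and 9 may help (a minimal sketch in Python; the numbers are the illustrative ones from example 7, and the variable names are mine): by conservation of expected evidence, the prior must equal the probability-weighted average of the possible posteriors, so once George fixes his prior and the two possible posteriors, the "small probability" of the unfavorable outcome is pinned down rather than free for him to choose.

```python
# A minimal sketch of the "immovable wall" in example 9, using the
# illustrative numbers from example 7.
prior = 0.8      # George's current probability of D
post_up = 0.9    # posterior if the search favors D
post_down = 0.1  # posterior if the search disfavors D

# Conservation of expected evidence: prior = q*post_up + (1-q)*post_down.
# Solve for q, the probability of the favorable outcome.
q = (prior - post_down) / (post_up - post_down)
print(f"P(search raises belief to {post_up}): {q:.3f}")       # 0.875
print(f"P(search lowers belief to {post_down}): {1 - q:.3f}") # 0.125

# Sanity check: the expected posterior equals the prior, so no search can
# be expected, in advance, to raise George's probability of D.
assert abs(q * post_up + (1 - q) * post_down - prior) < 1e-12
```

Note the trade-off this exposes: with the prior and the favorable posterior held fixed, the only way to make the unfavorable outcome less probable is to make it more severe, and a search with no chance of lowering the probability is a search with no chance of raising it.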


“Do I look fat in this outfit?”

(Similarly, asking your mother / husband / etc. for an honest evaluation of your cooking.)

I thought that the point of the clarification of an "honest" answer is that you are actually willing to take "no" for an answer. An unqualified opinion, even if at surface level it is an evaluation, probably truly isn't one. It might be interesting that even if you ask for an "honest" answer people might refuse to give one.

It might be interesting that even if you ask for an “honest” answer people might refuse to give one.

Well, of course this is exactly what happens. (Because many people will claim to want an “honest” answer, but nevertheless socially punish you if you give an answer other than the “correct” one… this is extremely unfortunate behavior, but such are people.)

I know it's the typical outcome, but I don't know why it would be inevitable or obvious. A person that verbally asks for an "honest" answer but punishes is not in fact asking for an honest answer. Part of the reason why people add the qualifier is the belief that those kinds of answers "give you more positive affect".

If you shoot for an actually honest opinion, you have to take care to differentiate your request from asking for "dishonestly honest" opinions. For the kind of mindset that holds "whatever can be destroyed by the truth should be destroyed", actually honest opinions are what to shoot for. But I have bad models of what attracts people to "dishonestly honest" opinions. I suspect that mindset could benefit from a different framing ("I have your back" vs "yes", i.e., forgoing claims about the state of the world in favour of explicit social moves).

This LessWrong post might make someone seek out more "dishonest positivity" by attaching a "rejection danger" to the pursuit of "belief strengthening". I feel that there is an argument to be made that when the rejection danger is realised, you should just eat it in the face without resisting, and that the failure mode prominently features resisting the rejection. And on balance, if you can't withstand a no, then you will not have earned the yes and should not be asking the question in the first place.

That is, on the epistemic side there is "conservation of expected evidence", but on the social side there is an "adherence to received choice": you can't grant control of an area of life conditional on how that control would be used; if you censor someone, you are not in fact giving them a choice.

If people have a reason to lie, they may want to use intensifiers like "honestly" for the same reason. Likewise for asking others to lie while pretending to ask for the honest truth - if you're already pretending, why should we start being surprised only once you use words like "honestly"?

There's an underlying question of why this particular pretense, of course.

I feel that there is an argument to be made that when the rejection danger is realised, you should just eat it in the face without resisting, and that the failure mode prominently features resisting the rejection. And on balance, if you can't withstand a no, then you will not have earned the yes and should not be asking the question in the first place.

10. Jill decides to face any "yes requires the possibility of no" situation by (ahem) eating it in the face. She is frequently happy with this decision, because it forces her to face the truth in situations where she otherwise wouldn't, which makes her feel brave, and gives her more accurate information. However, she finds herself unsure whether she really wants to face the music every single time -- not because she has any concrete reasons to doubt the quality of the policy, but because she isn't sure she would be able to admit to herself if she did. Seeing the problem, she eventually stops forcing herself.

I know it’s the typical outcome, but I don’t know why it would be inevitable or obvious.

Well, it’s “inevitable” and/or “obvious” only in the sense that it is quite commonplace human behavior. Certainly it’s not universal—thankfully!

A person that verbally asks for an “honest” answer but punishes is not in fact asking for an honest answer.

Indeed! They are not, in fact, looking for an honest answer. This is a thing people do, quite often.

But why would they say that they want an honest answer? Well, there are many reasons, ranging from self-deception to dominance/power games to various not-quite-so-disreputable reasons. Enumerating and analyzing all such things would be beyond the scope of this comment thread. The rest of your comment is… not so much mistaken, per se, as it is insufficiently built upon an understanding of the dynamics I have alluded to, their causes, their consequences, etc. There is a great deal of material, on Less Wrong and elsewhere, that discusses this sort of thing. (I do not have links handy, I’m afraid, nor time at the moment to locate them, but perhaps someone else can point you in the right direction.)

As might be typical of my neurotype, when I see text such as "an honest evaluation", as in the top-level comment, I resolve it to mean the uncommon case where a person actually, effectively seeks an honest opinion, as the plain English would suggest. The type of reading that interprets it as the common case could easily suggest that honest asking is impossible or an irrelevant alternative. And indeed, people are trained enough that even when asked for an "honest" opinion they will give the expected opinion. I never really got the simulacrum levels and such, but in these dynamics people have lost the meaning of honesty.

Making a token commitment to wanting honesty doesn't prevent a person from feeling negative emotions when they hear the wrong answer. If you are dealing with a person who doesn't act based on their intellectual commitments but based on what they feel, someone saying that they want an honest answer doesn't go very far.

The intellectual commitment is so easy that it is hardly the centerpiece of the issue. When you ask for such an answer, you are consenting to potentially feeling bad, i.e., you are making a kind of emotional commitment. Sure, receiving a bad answer sucks, but whether you punish its expresser is part of whether they feel safe expressing their opinion. For example, if you were to participate in boxing, acting offended when your face hurts would be unreasonable, unless the hurt is outside the consent given, for example by occurring outside of rounds.

Sure, to some it might appear to be emotional masochism to allow others to hurt you while removing your own possibility of retaliation. But setting it up is an emotional transaction, not an intellectual one.

It seems to me like you are assuming that people have a lot more emotional self-control than they have in reality.

It's not possible for 99% of people to simply remove their ability to retaliate.

I am not claiming it's simple. But there is a distinct difference between trying to make it work and not giving it a single thought.

In the boxing metaphor, people do not react calmly to being beaten up, but the aggression is supposed to be channeled within rounds. If someone does punch outside of rounds, people know what is happening and, for example, do not fear additional violence. But someone who consistently does that kind of outside-context punching cannot really participate in the sport, and would be liable to be charged with assault for those sorts of acts. It's plausible that someone has such strong reflexes to being punched that they cannot pass the emotional competence standard to participate. If they see all punches as personal attacks and not as part of the sport, it can be a disqualifying factor. And it's plausible that more veteran boxers contextualise getting hit differently, i.e., the right kind of attitude can be built up with practice.

It's easy to be self-aware about the fact that you box other people. It's easy to avoid boxing people.

On the other hand, there are a lot of fine decisions in social interactions about how to interact with other people. Without a lot of training people are not self-aware of how their emotions play into all the aspects of how they treat other people.

I did end up thinking about whether it's always easy to be self-aware about the fact that you box other people.

You walk on the street and a passer-by seems agitated and says "come at me bro". Thinking of it as play, or in general just wanting to vent rage from your daily stresses, you approach and punch the person. They feel sore about the situation and later decide to charge you with assault. You plead that the situation was a mutually understood informal boxing match. It's not a super strong defence, but to my mind it's not an automatically failing one. In judging such a case there might be multiple interpretation questions. Is the idiom widely known enough that it establishes a code of conduct? Is it reasonable to hear it as a non-idiom? If heard as a non-idiom, does "bro" indicate playfulness? If a code of conduct, does it amount to consenting to be punched? Or does the conduct invite someone to be the first aggressor, merely amounting to a guarantee of retaliation and that the sayer won't strike first? Even if, at the time of events, the participants don't think in terms of a boxing match, it's different from an unannounced strike out of the blue.

In your example it's quite clear that boxing happened. Whether or not it was justified play or wasn't is another question.

As a person who does not find social situations intuitive, I know social situations are not easy. In my experience, my ignorance of or disregard for social conventions has not really earned me the benefit of the doubt. I feel that people who can intuitively mesh well with the social fabric, without explicit modelling of it, can have it kind of easy. When I painstakingly recreate behaviour which from my point of view is convoluted and arbitrary, the standard of acceptable behaviour is of great interest to me. Part of what makes that struggle easier is that the deck is not stacked against me in particular; it is a standard that everyone has to adhere to. If the rules are slightly inconvenient for somebody else, I don't feel so bad rules-lawyering the conventions against them.

Sure, you can't expect perfect introspection from anyone, but it's not like a total lack of introspection is allowed either. There can be a duty to be informed. Say you are on the road and encounter a road sign that you don't recognise (and thus can't obey). If you are driving abroad and it's a type of sign that has no correlate in your home country and is a rare type of sign anyway, no big fault. But if it is a sign that is highly standardised across borders, with a simple design, and a common occurrence (like a stop sign), it's a bigger issue. Starting to drive a car on public roads without knowing the street signs would be reckless. And yet I don't expect many drivers to recognise all signs, and most of them are not reckless for driving.

There can be situations where the local driving culture develops to be inconsistent with formal traffic laws. A responsible driver needs to take into account that this is a viable alternative for how people might act. But that doesn't mean it is how you should drive, or that it excuses how they are driving, or that you need to be ambivalent about whether they do it or not. It's perfectly fine to hold that they are wrong and that they shouldn't be doing it. Recognising what the norm is doesn't mean you need to normatively endorse others upholding it.

As a person who does not find social situations intuitive, I know social situations are not easy. In my experience, my ignorance of or disregard for social conventions has not really earned me the benefit of the doubt.

That sounds to me like you often don't recognize when people get punished for saying inconvenient truths as it doesn't happen in explicit ways.

Yes, that does mean I probably register some honest questions as going without hiccups when they actually had minor hiccups. I do not find it terribly relevant to the reliability of my findings or positions. They would be assholes for forming a grudge in those situations and carrying out an indirect revenge whether I detect it or not. "Nobody asked for your opinion" is a valid complaint about unprompted negative bashing. But in this situation "No, you did ask my opinion" would be a relevant defence. Even if they did not mean to ask, since people are not mind readers, I might be in a position where, in my effective reality, I was asked.

It might be relevant that my national culture favours directness and frankness more than the median culture does. This makes it more probable and expected that someone would genuinely ask for an honest opinion, and that this is not a fringe edge case. Banks usually clarify to their customers that they do not ever ask for credentials via email, which helps customers more confidently identify scam emails. I could see some close relationships establishing something to the effect of "I want you to always be unwaveringly on my side. If I ever say something, it is never a request for your opinion." Under that kind of understanding/assumption you would find every excuse not to treat even explicit asking for opinions as actual requests for opinions. But I could also see someone wanting to establish something to the effect of "If we had some issues, you could tell me, right?". So in some relationships you would genuinely ask for an opinion.

They would be assholes for forming a grudge in those situations and carrying out an indirect revenge whether I detect it or not.

"Revenge" is a phrase that implies intent.

Let's say someone invites you to a dinner party. Then you comment on how their food could be improved when they ask for an honest opinion of their cooking.

Some time later the person thinks about throwing a dinner party and about whom to invite. They will likely do this by thinking of possible invitees and seeing what kind of emotional reaction they feel about whether it would be good to invite each person.

The act of having received your feedback affects that emotional reaction even when the person doesn't remember the incident. There's likely some emotional tension, and that makes it less likely that they invite you, without their holding any ill will that they consciously reasoned about.

This can even happen when the person judges the feedback as welcome on a System 2 level.


A well-described, important effect.

I do point out that the emotional experience can also be positive, which would increase my chances of being invited.

I don't know what would be good terminology, but I think System 1 reactions are not above criticism. For example, if someone feels genuine disgust towards ethnicities that are not their own, but is verbally and formally committed to equal treatment of all people, I would still be tempted to use words like "concerning" or "ugly".

Likewise, in the use of deadly force for self-defence, if you get easily frightened to "life in danger" levels, it means more violence is permissible in more varied situations. If you get differentially more afraid of certain groups of people, it can weaken their right to life. I can understand how this could feel very unfair, and in some non-straightforward way it is not fair. But on the other hand, I would not want a person in fear of their life to need to hesitate for fear of punishment. Still, I think there is such a thing as "fearing irresponsibly". If an arachnophobe goes into a house full of spiders and ends up burning the house down because he was killing spiders with fire in a panic, I would not classify that as a total accident.

What we are mainly discussing here is not that extreme, but I still think it's not automatically wrong to hold someone responsible for their feelings, although it does need special care, and in a significantly modified sense.

For example, if I were to feel negative about not getting invited, that would not be a disproportionate response.

Whether the emotional experience is positive depends on the ability of the person you are dealing with to deal with feedback, and on your own ability to give good feedback.

It doesn't depend on whether the other person wants to get feedback on the System 2 level.

I think you are likely underrating how much unproductive social behavior most people engage in because of emotional reasons. Likely, including yourself.

Anecdotally, I've had friends who explicitly asked for an honest answer to these kinds of questions, and if given a positive answer would tell me "but you'd tell me if it was negative, right?"... and still, when given a negative answer, would absolutely take it as a personal attack and get angry.

Obviously those friendships were hard to maintain.

Often when people say they want an honest answer, what they mean is "I want you to say something positive and also mean it", they're not asking for something actionable.

The only way to get information from a query is to be willing to (actually) accept different answers. Otherwise, conservation of expected evidence kicks in. This is the best encapsulation of this point, by far, that I know about, in terms of helping me/others quickly/deeply grok it. Seems essential.

Reading this again, the thing I notice most is that I generally think of this point as being mostly about situations like the third one, but most of the post's examples are instead about internal epistemic situations, where someone can't confidently conclude or believe some X because they realize something is blocking a potential belief in (not X), which means they can't gather meaningful evidence.

Which is the same point at core - Bob can't know Charlie consents because he doesn't let Charlie refuse. Yet it feels like a distinct takeaway in the Five Words sense - evidence must run both ways vs. consent requires easy refusal, or something. And the first lesson is the one emphasized here, because 1->2 but not 2->1. And I do think I got the intended point for real. Yet I can see exactly why the attention/emphasis got hijacked in hindsight when remembering the post. 

Also wondering about the relationship between this and Choices are Bad. Not sure what is there but I do sense something is there. 

I want to write a longer version of this in the future, but I'm going to take a while to write a comment to anchor it in my mind while it's fresh.

Many social decisions are about forming mutually advantageous teams. Bob and Charlie want to team up because they both predict that this will give them some selfish benefit. If this is a decision at all, it's because there's some cost that must be weighed against some benefit. For example, if Bob is running a small business, and considers hiring Charlie, there'll be various costs and risks for onboarding Charlie, or even for offering to interview him.

Part of evaluating a teammate is getting a sense of their effectiveness as social operators. Can they connect in conversation? Can they demonstrate workplace professionalism? Can they read people's character?

One aspect of effectiveness is the ability to apply pressure in order to get some result out of other people. If Charlie is being hired for a sales job, he needs to be able to apply some pressure to customers in order to make a sale.

One way that Bob can evaluate Charlie as an applicant is by seeing if Charlie can pressure Bob into hiring Charlie as a salesman. Bob wants to see if Charlie is capable of selling himself. He wants Charlie to pressure him, to remove (some of) the possibility of no. If Charlie can't do that, then Bob doesn't want him as a salesman.

It may be that this ability to apply social pressure is also relevant in other areas of life, such as mate selection, promoting scientific findings, or advancing beneficial policies and social norms. We choose partners or leaders based on their ability to pressure others, and us, into accepting their desires, because we are using this as a test of their effectiveness in dealing with the rest of the world. We predict that a partner or leader who can apply pressure on us will also be able to use that ability to advance our mutual agenda and bring rewards to us in the future.

In these situations, then, "yes" requires the impossibility of no.

Or at least the diminishment of that possibility below some threshold.

In the case of Bob and Charlie, this isn't entirely because Bob has been forced, despite his resistance, to acquiesce to hiring Charlie. It's also because Bob would have freely chosen not to hire Charlie if Charlie wasn't capable of overwhelming his autonomy. Bob might have even been able to sit back comfortably and tell you so in advance of his interview with Charlie.

There may be an interesting interaction when these pressure tactics are applied to advancing an agenda of greater freedom and less pressure. Perhaps we can use pressure to specifically punish pressure tactics, in order to safeguard a low-pressure space. This has parallels to the Leviathan concept, in which the government monopolizes the use of force in order to prevent its citizens from having to defend themselves. It uses high-violence tactics to achieve a low-violence environment. We also see it in some current attitudes about sexual consent.

That said, though, even in an enforced low-pressure environment, people will still face the challenge of getting the goods some other way, either for themselves or for their team. They may find alternative ways to exert social pressure. Or they may find some other means of leverage that was previously either second-rate or actually inaccessible due to the former high-pressure climate. In a noisy room, having the loudest voice might give you an edge as a conversationalist. If the room has been quieted, other attributes might make you more competitive.

Once we identify whatever new tactics give us a manipulative edge in the low-pressure environment, we may then start to require that candidates for collaboration or leadership demonstrate their ability to manipulate us with these tactics.

The key to balancing the need for consent and commitment seems to be creating defined spaces in which we alternate between these modes.

Consider the hiring situation again. In particular, imagine that Bob and Charlie have roughly equivalent value to offer to each other. Bob's offering Charlie a lucrative commission rate, but Charlie's able to generate a lot of profitable sales for Bob.

Bob and Charlie both want to have some confidence that they're not about to enter into a temporary, exploitative arrangement for the sake of convenience until some marginally better alternative is found. So they need a sense that each of them has the freedom to choose alternative options.

On the other hand, Bob wants Charlie to demonstrate that he's capable of using high-pressure sales tactics to convince Bob to hire him. So the "space for freedom" needs to be followed by a "space for commitment." At this point, Charlie should be convinced that he wants to work for Bob, and Bob's last remaining reservation about hiring Charlie should be his ability to pressure Bob into hiring him. If Charlie can do it, then the end result will be something that Bob wanted during the time when they were both still in the "space for freedom."

This requires some finesse. To execute this strategy successfully, they have to sequence their mutual evaluation in the proper order, and Bob has to be prepared to give up his freedom from Charlie's pressure tactics for a certain period of time.

In other scenarios, such as romance, it might be that the people involved are not capable of such careful coordination. Failure to properly sequence the "space for freedom" and the "space for commitment" might lead to problematic pressure applied when there were other reservations to sort through with a clear mind. Failure to apply pressure at the right time might result in a confusing or frustrating letdown.

What's more, the whole notion here is that the ability to apply pressure is a demonstration of one's capacity to operate effectively in the world, and that potential teammates evaluate this by seeing if their potential partner can do it to them. Other skills, like the ability to send and receive social signals, are also relevant here.

So we could see a situation where, optimally, potential partners are evaluating each other's ability to sequence and transition between a "space for freedom" and a "space for commitment," but in which, in practice, they are evaluating signaling skills. Bob may want to evaluate whether Charlie can implicitly detect when Bob is expecting him to begin applying pressure.

If there are serious outcomes at stake, then the combination of reliance on implicit signals and application of pressure could lead to wasted time, frustration, or harm. If both people are bad at figuring out that this is what's going on, then they might just experience themselves as incompetent.

A natural-seeming correction to these failed negotiations is to make the implicit explicit and to remove the pressure. But this also removes precisely the qualities that the negotiation was meant to evaluate. Hence, I expect that such interventions either lead to the re-introduction of signaling and pressure by other means, or to people ignoring or avoiding the intervention.

It might be more useful to educate people about what's actually going on during such negotiations, and help them gain skill in reading subtle signals and understanding the uses and misuses of pressure. That's tricky, since there's probably little explicit understanding of those considerations, and it'll pattern match with what's commonly considered to be bad behavior. This is all a wild theory off the top of my head, but I'm interested to think about it more.

This is great! One subtle advantage of the list-of-koans format is that it provides a natural prompt for the reader to think up their own as an exercise.

  10. Irene wants to believe the claim "G is an H." H is a fuzzy category and the word "H" can be used in many ways depending on context. Irene finds she can make the sentence "G is an H" more probable by deliberately choosing a sufficiently broad definition of "H", but only at the cost of making the word less useful.

  11. Jessica posts something to a social-media platform that has "Like"s, but not downvotes. She doesn't know whether no one saw her post, or if lots of people saw it but they all hated it.

(I moved the meta thread that used to be below this comment to its own post here)

10 is vague, and lacks examples. (Is it the Sorites paradox?)

11 is great. (Though it does raise the question - if you can only see upvotes minus downvotes, how do you know whether a score of 1 indicates no one cared, or everyone cared and were split both ways?)

10 is vague, and lacks examples.

That's fair. For a more concrete example, see the immortal Scott Alexander's recent post "Against Lie Inflation" (itself a reply to discussion with Jessica Taylor on her Less Wrong post "The AI Timelines Scam"). Alexander argues:

The word "lie" is useful because some statements are lies and others aren't. [...] The rebranding of lying is basically a parasitic process, exploiting the trust we have in a functioning piece of language until it's lost all meaning[.]

I read Alexander as making essentially the same point as "10." in the grandparent, with G = "honest reports of unconsciously biased beliefs (about AI timelines)" and H = "lying".

Note that it's a central example if you're doing agent-based modeling, as Michael points out.

In reading this it has occurred to me that a good response to someone asking for an "honest" answer might be to ask in return "Why do you want an honest answer?". The response might provide a guide as to whether or not the questioner really does want an honest answer, and whether or not it is a good idea to supply one.

Just noticed a hell of an example in my own life. (pardon the pun)

11. Accepting salvation requires the possibility of damnation. Heaven requires the possibility of Hell.

Less metaphorical: At the beginning of university I joined the camp of "The path to all good and righteous things in life will come from Deep Work and mastery over skills." Today when examining why I was so scared of not achieving huge quality work output in life, I realized it's because I implicitly bought into the Deep Work hell.

The above isn't to say that I think Cal Newport is pitching an idea of hell. My point is that in the process of believing in something that had the emotional valence of "salvation", I couldn't help myself from also believing in something that had the emotional valence of "hell".


is it? i find it a very Christian way of thinking, and this thought pattern seems obviously wrong to me. it's incorporated into Western culture, but i live in a non-Christian place. you can believe in Heaven for all! some new-age people believe in that. you can believe in Heaven for all except the especially blameworthy - this is how i understand Mormonism.

thanks for the insight! now i can recognize one pretty toxic thought-pattern as Christian influence, and understand it better! 

This falls under the category of "things that it is good to have a marker to point at", and this does a much better job than any previous marker. It also made me more aware of this principle, and caused me to get more explicit about ensuring the possibility of no in some interactions even though it was socially slightly awkward.

Promoted to curated: This post makes an important point that I haven't actually seen made somewhere else, but that I myself had to explain on many past occasions, so having a more canonical referent to it is quite useful.

I also quite like the format and generally think that pointing to important concepts using a bunch of examples seems relatively underutilized, given how easy those kinds of posts tend to be to write, and how useful they tend to be.

Note that in 1, if you want to avoid the "lackluster doing" outcome, you have to genuinely be willing to not do the activity / take pessimism effectively into account when you hold the group discussion. It seems to be a very distinct skill, and not a very obvious one.

In 9 it's kinda weird that a Bayesian wants to increase the probability of a proposition. Someone who takes conservation of expected evidence to heart would know that too high a number would be counterproductive hubris. I guess it could mean "I want to make X happen" vs "I want to believe X will happen". I get how the reasoning works on the belief side, but on the affecting-the-world side I am unsure the logic even applies.

WRT #9, a Bayesian might want to believe X because they are in a weird decision theory problem where beliefs make things come true. This seems relatively common for humans unless they can hide their reactions well.

The issue of wanting X to happen does seem rather subtle, especially since there isn't a clean division between things you want to know about and things you might want to influence. The solution of this paradox in classical decision theory is that the agent should already know its own plans, so its beliefs already perfectly reflect any influence which it has on X. Of course, this comes from an assumption of logical omniscience. Bounded agents with logical uncertainty can't reason like that.

Yes requiring the possibility of no has been something I've intuitively been aware of in social situations (anywhere where one could claim "you would have said that anyway"). 

This post does a good job of providing more examples and consequences of this (the examples cover a wide range of decisions), and of tying it to the mathematical law of conservation of expected evidence.

Adding a nomination, this is also a phrase I regularly use.

I see this concept referenced a lot and would be happy if it were spread more widely. The post is also clear and concise.

In Skill: The Map is Not the Territory, Eliezer wrote:

Sometimes it still amazes me to contemplate that this proverb [The map is not the territory] was invented at some point, and some fellow named Korzybski invented it, and this happened as late as the 20th century. 

That's exactly how I've felt when i read this post. it's such a fundamental principle, which I've used so much even before reading the post, that it was amazing to me that this phrase was first uttered only as late as 2019.

I also remember thinking that someone must have already said it somewhere, and not being able to find any other instance online.

It's a principle and a phrase i still use a lot (and probably even more since the post), so i think this post deserves all the nominations it got.

This is one of those posts that crystallizes a thing well enough that you want to remember it and link to it as shorthand. Though I do think most of the top-tier rationalists already understood this concept prior to 2019, and were applying it, without necessarily having a good name or crystallization. This seems like a more accessible, easier-to-connect and slightly more general version of the concept of conservation of expected evidence.


I am struggling to understand the goal of the post. We have the 9 scenarios but exactly what is the target?

In some cases I would say "The truth hurts" is the well-known saying that fits, and so we have the social grace of white lies.

In others I would say we live in a world of uncertainty and will never really enjoy the luxury of 100% knowing if something is true or not. We end up actually taking things on faith (that's not limited to religions).

Last, we have cases where we have a responsibility to be honest, both with ourselves and others (perhaps case 1 fits here). The goal here then is how to be honest in a constructive way.

I am struggling to understand the goal of the post.

The title was helpful to me in that regard. Each of these examples shows an agent who could run an honest process to get evidence on a question, but who prefers one answer so much that they try to stack the deck in that direction, and thereby lose the hoped-for benefits of that process.

Getting an honest Yes requires running the risk of getting a No instead.


Hmmm.

Okay, that helps some, but I'm still really at a loss as to what the actionable proposal might be. I suspect in most of the cases it's not merely that one needs to accept the answer might be "No" (the strategy there is often "Better to ask forgiveness than permission") but more about how to overcome the barriers and get that honest response.

Perhaps adding some bits to each case on how to overcome the barrier to the honest answer (Yes or No) would have been helpful, if the problem was not getting that honest "No", or on how to really accept that the "No" outcome should be allowed (not stacking the deck).

Not every important concept has implications which are immediately obvious, and it's generally worth making space for things which are true even when you can't yet find the implications. It's also worth making the post.

That said, one of the biggest implications I draw from this concept is that of "seeking 'no's". If you want a "yes", then often what you can do is go out of your way to make "no" super easy to say, so that the only reason they won't say "yes" is because "yes" isn't actually true/in their best interests. A trivial example might be that if you want someone to help you unload your moving truck, giving them the out "I know you've got other things you need to do, so if you're busy I can just hire some people to help" will make it easier to commit to a "yes" and not feel resentful for being asked favors.

More subtly, if you're interested in "showing someone that they're wrong", often it is more effective to drop the goal entirely and instead focus on where you might be wrong. If you can ask things with genuine curiosity and intent to learn, people become much more open to sharing their true objections and then noticing when their views may not add up.

"Seeking 'no's" is a concept that applies everywhere though, and most people don't do it nearly enough.

I think there isn't a consistent change in policy that's best in all the examples, but each example shows someone who might benefit from recognizing the common dynamic they all illustrate.

I've referred and linked to this post in discussions outside the rationalist community; that's how important the principle is. (Many people understand the idea in the domain of consent, but have never thought about it in the domain of epistemology.)

Recommended.

I would say that Mistakes with Conservation of Expected Evidence is kind of a review of this post.

(Not making this comment a review because I didn't actually write the linked post)

While "yes requires the possibility of no" is correct, one should also establish whether or not either yes or no is meaningful itself in the context of the examination. For example, usually one is not up against a real authority, so whether the view of the other person is in favor or against his/her own the answer cannot be final for reasons other than just the internal conflict of the one who poses (or fears to pose) the question.

Often (in the internet age) we see this issue of bias and fear of asking framed in regard to hybrid matters, both scientific and political. However, one would have to suppose that the paradigmatic anxiety before getting an answer exists only in matters which are more personal. And in personal matters there is usually no clear authority, despite the fact that often there is a clear (when honest) consensus.

An example, from life. A very beautiful girl happens to have a disability - for example paralysis or atrophy of some part of her body. There is clear contrast between her pretty features (face, upper body etc) and the disabled/distorted one. The girl cannot accept this, yet - as is perfectly human - wishes to get some reassurance from others. Others may react in a number of different ways. The answer, however, to any question posed on this, can never be regarded as some final say, and in a way it happens that what is being juxtaposed here is not a question with an answer, but an entire mental life with some nearly nameless input of some other human.

In essence, while yes requires the possibility of no, I think that the most anxiety-causing matters really do not lend themselves well to asking a question in the first place.

So like, sometimes when the answer seems vague it's because there are actually two questions? Like, "am I good at music" can be answered in relation to the entire world or to one's friend group, or specifically focusing on music theory versus performance versus composition versus taste, so there's no meaningful (one-word) response; it's always possible to doubt reassurance because one can look at a slightly different question.

At least, that's what I think I get from your penultimate paragraph. I don't understand your first two paragraphs. I think your first paragraph is saying: the opinions of individuals don't definitively answer yes or no, because you need an authority. Second paragraph: we only experience bias with personal and not scientific/political questions because we are more emotionally involved with the former, which also(?) lack an authority to give a definitive answer.

Is that accurate?

I usually interpret this as action. When one is doubting whether one is good enough to get into some school, it doesn't really matter to evaluate goodness because the correct action is still usually to apply/audition, viz. applying/auditioning dominates. And a negative result doesn't justify hating oneself because self-hatred is unproductive, viz. self-neutrality dominates.

4. Derek has an accomplishment. Others often talk about how great the accomplishment is. Derek has imposter syndrome and is unable to fully believe that the accomplishment is good. Part of this is due to a desire to appear humble, but part of it stems from Derek's lack of self-trust. Derek can see lots of pressures to believe that the accomplishment is good. Derek does not understand exactly how he thinks, and so is concerned that there might be a significant bias that could cause him to falsely conclude that the accomplishment is better than it is. Because of this he does not fully trust his inside view which says the accomplishment is good.


I've been dealing with the problem outlined here (I'm concerned that there might be a bias causing me to conclude A, so I distrust the conclusion A).  Are there any helpful posts on why this is wrong? 

When a teacher is wondering whether to skip explaining concept-X, they should ask "who is familiar with concept-X" and not "who is not familiar with concept-X".

it was strange to read this. it was interesting - explaining a point i already knew in a succinct and effective way. and it connects nicely with the extensive discussion on consent and boundaries: Boundaries: Your Yes Means Nothing if You Can’t Say No

and then, when i was reading the comments and still internalizing the post, i got it - i actually re-invented this concept myself! it could have been so nice not to have to do that... i wrote my own post about it - in Hebrew. its name translates to Admit that sometimes the answer is "yes", and it starts with a story about a woman who claimed to believe in personal optimization of diet by experimenting on yourself, but then found a reason to invalidate every result that contradicted her own beliefs about the optimal diet. it took me years to notice the pattern.

and then, there is this comment about budgeting and negotiating with yourself that emphasized how important it is to allow the answer to be "yes":

"I’m seeing a lot of people recommend stopping before making small or impulse purchases and asking yourself if you really, really want the thing. That’s not bad advice, but it only works if the answer is allowed to be ‘yes.’ If you start by assuming that you can’t possibly want the thing in your heart of hearts, or that there’s something wrong with you if you do, it’s just another kind of self-shaming. "

it's kind of like 5, but from the point of view of a different paradigm.

and of course, If we can’t lie to others, we will lie to ourselves.

it's all related to the same concept. but i find the different angles useful.

And that, kids, is why nobody wants to date or be friends with a rationalist.