Unendorsed while acting/Endorsed reflectively: This is not so strange a failure mode as you are indicating. You take some action, which at the time you know isn't right (execution failure). Later, you come up with some post-hoc justification for your actions. This is another failure mode to be aware of, affecting the postmortem.
Postmortems can have their own failure modes. We might disagree about the facts of what happened. We might have different perspectives on the context in which things happened. We might struggle to be honest with ourselves about our motivations at the time, or find that our analysis of past events is colored by information gained after the fact -- maybe giving us better insight, or maybe clouding over the original failure.
I really disagree with some of what seem to be the implicit premises of this post: mainly, that caring for someone includes proactively taking responsibility for their problems.
No, their problems are theirs, and respecting this is the drama-and-conflict-minimizing strategy. There are other, better ways to care for others— but violating their sovereignty is not it.
I think there's another implicit premise of this post, ~"external events cause someone's internal subjective experience" (e.g., Alice's actions can emotionally harm Bob), and I think that's most definitely false. Subjective experience is a matter of interpretation laid over sensation. There are better and worse ways to interpret the world, of course, and I claim that the interpretations that cause suffering are among the bad ones. Interpretations are flexible (though not consciously; they change through a different mechanism).
I've been thinking about something close to this for about 9 months now, and I should be publishing my Big Post in the next month or two which explains these disagreements and provides an alternative model. DM me if you'd like to review the draft.
> I really disagree with some of what seem to be the implicit premises of this post: mainly, that caring for someone includes proactively taking responsibility for their problems.
>
> No, their problems are theirs, and respecting this is the drama-and-conflict-minimizing strategy. There are other, better ways to care for others— but violating their sovereignty is not it.
I think this is at least somewhat addressed by this section:
> Note that none of the above is about whose fault it is that Choni overslept his interview—it’s his responsibility to make sure he wakes up in time, and you’re not to blame for his failure to do so. From the perspective of The Official Natural Law Code On Roommates’ Rights and Responsibilities, you have in no way violated a stricture or crossed a boundary (if anything, waking Choni is frowned upon by The Code).
>
> But insofar as we (Ricki and Avital) aspire to live cooperatively and provide support for one another and for others, talking over our choices using MCE has been especially fruitful for us.
As I interpret it, the default is "leave me alone" (plus, I guess, anything you've explicitly agreed to, like a rotation of dishwashing duties), and any more intimate involvement with their lives is something you opt into. Which seems all right prima facie. As long as person A doesn't surprise people by unilaterally doing the "more intimate" thing, or unilaterally expecting others to do it.
I agree with
> Note that none of the above is about whose fault it is that Choni overslept his interview—it’s his responsibility to make sure he wakes up in time, and you’re not to blame for his failure to do so.
however, see (bolding mine):
> You see him sleeping and leave him alone. When he wakes up at 4:30pm, he has missed his 2pm job interview and is annoyed. **What went wrong?** Here are three plausible stories:
>
> - There was a modeling error: **You** falsely believed that Choni wanted to nap all afternoon—you didn’t know he had an important interview, or thought he had rescheduled it—so you decided to let him sleep, acting reasonably based on your bad model.
These seem to contradict? "None of the above is about whose fault" vs. "What went wrong? […] You […]".
> the default is "leave me alone" (plus, I guess, anything you've explicitly agreed to, like a rotation of dishwashing duties)
I agree
> , and any more intimate involvement with their lives is something you opt into
I possibly disagree. Qualifications:
I imagine that subscribing to your model might also get tricky and drama-inducing if one roommate says to another "I want you to help me anytime you think you could help me".
This sounds like it is effective, but I will add that model and care problems are often entangled. For me at least, the extent to which I care about someone is very sensitive to how much I figure they care about me. That type of reciprocation seems pretty general. So if we wrongly perceive that others don't care, we will likely be less helpful and may distance ourselves from them. Then the victims may perceive the behavior solely as a care problem.
I wonder how often one can really isolate these aspects in practice?
This maps surprisingly cleanly to a complaint I've often semi-jokingly made that most problems in the world (politics, relationships, etc.) boil down to malice or incompetence. There's also an implied "luck" term in both the model/care/execution model and the malice/incompetence model, of course, but it's rarely relevant.
Relevant: Conflict vs. Mistake. A wrong model and a failure of execution are mistakes/errors, while caring more about other things can be conflict; though, given the framing, a missed possibility of a Pareto-efficient plan, or a more deontological harm-avoidance stance, can also turn it into a mistake. And the relevance of particular mistakes can be distorted by conflict with a utility monster, a vulnerability of a pure mistake framing.
> I have a friend Balaam who has a very hard time saying no. If I ask him, “Would it bother you if I eat the last slice of pizza?” he will say “Of course that’s fine!” even if it would be somewhat upsetting or costly to him.
I think this is a reference to Guess/Ask/Tell Culture, so I'm linking that post for anyone interested :)
When Alice harms Bob, it is likely that one of the following three things went wrong:

- Model: Alice had a mistaken model of the situation or of what Bob wanted.
- Care: Alice did not prioritize Bob’s needs highly enough relative to the other things she cares about.
- Execution: Alice knew what Bob wanted and cared enough to act on it, but failed to carry out her plan.
Often, the problem is a combination of two or even all three of these: if I had taken the time to think a little about what you would want (a care problem), I would not have made the mistake I made (a model problem) or would have made a plan that actually worked (an execution problem).
We have found the framework of Model, Care, and Execution (MCE) to be useful when postmorteming cases of interpersonal harm. What went wrong, and how can we do better next time? More generally, what failure modes are most common for each of us, and what do we need to do to help each other improve in those areas?
Postmortem: Your Roommate Sleeps Through His Interview
It’s 1pm and your roommate Choni has fallen asleep on the couch. You see him sleeping and leave him alone. When he wakes up at 4:30pm, he has missed his 2pm job interview and is annoyed. What went wrong? Here are three plausible stories:

- There was a modeling error: You falsely believed that Choni wanted to nap all afternoon—you didn’t know he had an important interview, or thought he had rescheduled it—so you decided to let him sleep, acting reasonably based on your bad model.
- There was a care prioritization mismatch: You knew about the 2pm interview, but you prioritized something else (say, not getting punched in the face by a groggy Choni) over making sure he got up in time.
- There was an execution failure: You intended to wake him, but your attempt failed: he fell straight back asleep after a gentle nudge, or you got locked out of the apartment, or your depression kept you from following through.
Depending on which of these went wrong, your plan for the future should be different.
If it was a model problem, then Choni saying ahead of time “I have an important interview at 2pm today that I can’t miss” would have helped. You could also establish a default decision together—e.g., any time Choni is asleep in the middle of the day, you should wake him—or communicate other relevant information, like “it’s extremely easy for Choni to fall back asleep when awakened undesirably, so it’s better to err on the side of waking him” that will help you get a better idea of what he would want next time.
If it was due to care prioritization, then Choni has a few options. He might try to change his own behavior so that it’s more rewarding to do what he wants, e.g. by making waking him up more pleasant (and less punchy), or by profusely thanking you afterwards for doing so, or by offering you a cash bounty for every successful waking that helps him meet his commitments. He might ask you to prioritize him higher, and hope that your general care for him or for the roommate dynamic will cause you to shift your priorities. You, too, might wish that you cared about waking Choni up when he wants you to and be glad if talking about it with him ends up causing you to care more. Some of these make you care more about Choni himself, and others just change how much you care about waking him up relative to other things in your life. If none of these seem like they would work, he might mentally reevaluate how much he should rely on you, and make other choices as a result, like putting someone else in charge of waking him, or finding a new roommate who prioritizes him more highly.
If there was an execution failure, then you and Choni can make plans together to improve the likelihood of success, such as Choni telling you what actually works to wake him up (“next time, please dump a bucket of ice water on my head”), you keeping an extra copy of your apartment key under the doormat so you don’t get locked out, or you treating your debilitating depression with therapy, medication, and incremental lifestyle changes sustained over time.
Note that none of the above is about whose fault it is that Choni overslept his interview—it’s his responsibility to make sure he wakes up in time, and you’re not to blame for his failure to do so. From the perspective of The Official Natural Law Code On Roommates’ Rights and Responsibilities, you have in no way violated a stricture or crossed a boundary (if anything, waking Choni is frowned upon by The Code).
But insofar as we (Ricki and Avital) aspire to live cooperatively and provide support for one another and for others, talking over our choices using MCE has been especially fruitful for us. We explicitly name these three categories when thinking through situations like this: ask each other if the inhibiting factor was model, care, or execution; talk about what led to that failure; and generate ideas for preventing it.
Premortem: Surprise Birthday Party Invite List
Daniel’s birthday is coming up, and Esther is planning a surprise party for him. Esther needs to decide whether to invite Leo, a member of their friend group who usually gets an invite to these types of events. But Esther recalls that Daniel has said he finds Leo extremely difficult—borderline predatory—and would undoubtedly enjoy the party less if he were there. What should Esther do?
She can try to anticipate three categories of possible failure modes:

- Model: her model of what Daniel actually wants could be wrong.
- Care: her weighting of Daniel’s needs against the other things she cares about could be off.
- Execution: even if she makes the right call, her plan for handling the invitation could fail in practice.
If Esther were trying to also consider what Leo wants, she’d have an even harder problem. She would need to consider model, care, and execution for Leo separately, while balancing her care for him and for Daniel, since it seems like what they each want is incompatible. Usually, we think it’s a good idea to think about all of the people involved who might be harmed, but for simplicity, here Esther is only considering possible harm to Daniel.
Let’s say Esther isn’t sufficiently confident in her models, her care weighting, or her execution abilities. She has a few choices of how to handle her predicament:
Outsourcing: She can outsource the problem to somebody else: ask Daniel’s best friend Shadrach what he would do in this situation. This will probably help! Asking Shadrach is effectively just passing a hot potato to someone with better models, care prioritization, and execution skills, who then might use a similar framework to make decisions, but with higher likelihood of success.
Prioritizing consent: Esther can seek out Daniel’s consent directly—that is, ask him what he wants. It’s a party for him, so in some sense he deserves the right to determine who is there. Asking Daniel requires sacrificing the “surprise” element of the surprise birthday party, which is a real cost. Esther might deem that cost worth incurring relative to the risk of messing up and making the wrong call about whether to invite Leo. (More on consent in the next section.)
Be conservative and avoid the worst-case scenario: Sometimes it’s clear that making a mistake in one direction is a lot more costly than the other. For example, if Leo has a history of violent behavior and there’s reason for concern about aggression, Esther might err on the side of not inviting him (and maybe also hiring a security guard).
Own (and forgive) mistakes: Esther might end up making the wrong choice, and usually… that’s okay. It’s especially okay if she and Daniel postmortem the decision after the fact, and she acknowledges the harm caused, takes ownership over any mistakes she made, and is open to learning how to improve things in a non-defensive way. It’s also especially okay if Daniel can be forgiving and continue to see the two of them as on the same team.[1] Fostering a culture of forgiveness with room for some mistakes makes MCE work way, way better—it allows us to make the best decisions in expectation for one another without having to play defense for every choice we make.
Wait, what did you mean by “consent”?
Above, we mentioned “prioritizing consent” as a fallback to MCE, to be used when you aren’t sure that your models are correct enough, your care is strong enough, or your execution is foolproof enough.[2]
When the thing on the line is a surprise birthday party, there are costs to resorting to consent (the surprise is ruined), but it’s still an available option, and often the safer one.
But there are a lot of situations where getting someone’s consent or permission is impossible or insufficient, and MCE has helped us make better decisions in those cases.
I have a friend Balaam who has a very hard time saying no. If I ask him, “Would it bother you if I eat the last slice of pizza?” he will say “Of course that’s fine!” even if it would be somewhat upsetting or costly to him.
On the one hand, if I ask Balaam and he says it’s okay, most people would say I’ve discharged my obligations and haven’t done anything wrong, and it’s on him to let me know his preferences if he wants me to act on them.
But on the other hand, my goal here isn’t just to discharge my obligations. I care about Balaam and want what’s good for him, and what’s good for our friendship. Over time, maybe I can help him learn to say no and to be able to tell me what he needs, but until then, MCE can let me do a pretty good job.
I know Balaam well and usually know what would make him happy, and then I can use that information to weigh his preference against other values and needs, and come up with a plan I can realistically execute. I think that most of the time, this is better than me asking him for something he doesn’t want to agree to and him saying yes and us both feeling a little bad.
Relationships, Consent, and Sexual Harassment
Applying Model, Care, and Execution instead of explicitly seeking out consent also feels like a realistic description of how most relationships actually operate.
College orientation programs often say that you should not have any sexual contact with anyone unless they have affirmatively consented to that exact behavior. We think this is the right default practice for people who don’t know one another well, especially if they also don’t have much experience dating—and it’s reasonable for colleges to make simple, enforceable rules that work even for people who are not trying to behave well. We definitely don’t think college orientation should say, “The rule for sex is, first you think about if they would want to, then you think about if you care about them enough, then you think about if you can pull it off, and then you go for it.” Even if many people did well with that rule (which they might not), the handful of terrible people in any first-year class would make that go terribly for everyone.
But people often make fun of the ways that this policy is very different from normal human relationship behavior. In real life, especially in long-term relationships, people usually know what their partner wants based on their model of them, which includes both their prior knowledge and their constantly updated knowledge of the person’s body language and social signaling.
People often gloss this as just a variant of consent, but we actually think it is a different thing, and a good one in a lot of situations. Our preference, in high-trust relationships, is to apply a version of MCE rather than ask our partners for explicit verbal consent at every step.
Consider a situation in which Vashti feels harmed by a sexual interaction with Xerxes (including a broad range of possible situations here, from uncomfortable sex jokes to sexual assault). Usually there was a problem with at least one of MCE:

- Model: Xerxes misread what Vashti wanted, perhaps because of poor social skills or a mistaken reading of her signals.
- Care: Xerxes did not weight Vashti’s comfort highly enough relative to what he wanted.
- Execution: Xerxes knew what Vashti wanted and cared about it, but failed to act accordingly, e.g. because of impulsivity or drinking.
All of these are bad. It’s not okay to cause people sexual discomfort if you have bad social skills or are very impulsive. The harm to Vashti is real regardless of Xerxes’s intentions, and Xerxes is meaningfully liable for that harm.
The point here is that which type of failure mode is at play should inform how you try to address the problem, both for individual people who want to not sexually harass anyone, and for institutions trying to prevent or respond to problems.
If you have an employee who has been causing sexual discomfort because of model problems, it might actually be really helpful for someone to talk very explicitly with them. Saying “People usually don’t like it when you give them such tight hugs, please don’t do that anymore, try waving or giving a small head nod instead” can go a long way.
If their behavior is a result of a care problem, you can try to incentivize better behavior, such as by punishing violations. But this will not work well if the problem is model or execution: they will just be scared and still mess up, or overcorrect and avoid all interaction.
If it’s execution, treatments or situational fixes can help, such as telling this person they can’t drink at work events if their impulse control is very bad when they’re drunk, or providing treatment options for impulsivity.
If you are trying to make sure you never sexually harass anyone, thinking through which of these failure modes is most likely to be a problem for you will help you set yourself up to make choices that ultimately live up to your values.
Disambiguating the three failure modes
So far, we’ve treated Model, Care prioritization, and Execution failures as distinct categories. Often, though, our failures are a combination of the above: if you didn’t think much about what someone would want, is that more like a model problem or more like care prioritization mismatch? How should we categorize things like “you always show up late for our dates” or “you didn’t take the time needed to improve your model of my risk thresholds before exposing yourself to Covid”?
A helpful differentiator is when and to what degree we endorse our action: when we look back at the situation afterwards, do we think we made the right choice given what we knew at the time?
In the case of model failures, I endorse my actions while taking them, but no longer do after a postmortem (i.e., once I have full knowledge, I wish that I would have acted differently).
For care prioritization mismatches, I endorse my actions while taking them, and often still endorse them after a postmortem with the person I hurt. If we’re both reasonable people, we might even end up agreeing that I made the right choice for the situation.[3]
Execution failures happen when I don’t endorse my actions even while taking them, much less after the fact.
| How do you feel about your actions? | Endorsed while acting | Unendorsed while acting |
|---|---|---|
| Endorsed reflectively | Care prioritization | Your behavior is deeply confusing and you are beyond our help[4] |
| Unendorsed reflectively | Model | Execution |
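For readers who like to see the 2x2 spelled out as an explicit rule, here is a minimal sketch in Python (the function name and boolean arguments are just illustrative shorthand for the two questions in the table, not terminology from the framework itself):

```python
# Illustrative shorthand for the table above: two yes/no questions in,
# one failure category out.

def classify_failure(endorsed_while_acting: bool, endorsed_reflectively: bool) -> str:
    if endorsed_while_acting and endorsed_reflectively:
        return "care prioritization"  # stood by the choice then, and still do
    if endorsed_while_acting and not endorsed_reflectively:
        return "model"  # seemed right at the time; hindsight says otherwise
    if not endorsed_while_acting and not endorsed_reflectively:
        return "execution"  # knew better even while acting
    return "deeply confusing; beyond our help"  # unendorsed then, endorsed now


# Example: you acted against your own judgment and still regret it afterwards.
print(classify_failure(endorsed_while_acting=False, endorsed_reflectively=False))
# -> execution
```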
(If your instinctive reaction to the above definition of execution failures is “why on earth would a human ever act in a way that they do not presently endorse” then congratulations on your mental health levels far surpassing ours.)
Theories of societal failures
Scott Alexander, referencing a reddit comment by user no_bear_so_low, writes about Conflict vs Mistake, a framework for analyzing policy disagreements.
Mistake theorists see problems as resulting from misunderstandings, insufficient data, uninformed beliefs, et cetera, and believe that the antidote to these problems is research, debate, and intellectual progress.
Conflict theorists see problems as the result of adversarial values and fundamental disagreements between warring parties.
(These conveniently map onto their alliterative partners of “Model” and “Care”.[5])
When a famine ravages a country, Mistake theorists are more likely to chalk the problem up to a government failure to properly model agricultural conditions and put the necessary protections in place, while Conflict theorists believe a lack of political will or care for civilians led to needless death and destruction.
Execution failures, too, can happen at the societal level—maybe there just literally isn’t enough food available to feed your entire country, your government is too poor, there are sanctions on your economy, et cetera. Your government made the best decisions possible and deeply wanted prosperity for their people, but this was still insufficient.
Perfection
Recently, Eric Neyman pointed out to us that Model, Care, and Execution map neatly to the descriptions of God that lead to the problem of evil.
Specifically, how can there exist a supernatural God who is:

- Omniscient (a perfect Model),
- Omnibenevolent (perfect Care), and
- Omnipotent (flawless Execution),
while there exist evil and suffering in this world. If God has all three of the above attributes, then God knows about evil, does not want evil, and has the capacity to make evil disappear, so how can we explain a world full of harm?
We love this parallel. If you never encountered any Model, Care, or Execution failures, you would be God, and none of us are. Our capacity for harm is part and parcel of our humanity.
Sometimes, we cannot get enough information to make good models, and we have to do our best to guess. Or we care about multiple people who have incompatible needs, and we will inevitably hurt someone. (We, ourselves, sometimes need to choose between our own needs and the needs of others; those care problems can be particularly hard.) And there’s a lot our meat bodies and minds can’t do, so we’re definitely going to be unable to execute on many of the things we wish we could.
The world is marked by Divine failures of Model, Care, and Execution, and all of our failings are made in that image. It is our lifework, in the face of immense suffering, to band together to pursue Godliness and, bit by bit, reduce harm from within our midst.
Thank you to Shalhevet Schwartz, Drake Thomas, Eric Neyman, Phil Parker, and Ben Pace for valuable comments.
Of course, there are lots of situations in which you are not on the same team as someone else, and discerning them is pretty crucial here. In particular, it takes an especially high-trust environment to be able to notice and name care prioritization failures with the goal of resolving those together. “Mark stabbed Julie because he valued his preference for her being dead over her preference for being alive” technically falls in the “care prioritization” bucket, but Julie is unlikely to benefit from a postmortem, except perhaps in the literal sense of the word.
We’ll leave the question of where to set the bar for “enough” as an exercise for the reader.
My roommate Choni might endorse my decision to prioritize not getting punched in the face over waking him up, even though he still experienced the harm of missing his meeting. Or we might ultimately disagree about whether that was the right call, but still work to mitigate situations like this in the future, come to a compromise, or decide amicably that the roommate relationship isn’t worth preserving given the costs.
Ben Pace pointed out that sometimes, our intuitions and feelings and impulses take actions that we wouldn’t consciously approve of at the time, but were actually tracking something important. Heuristics that we’ve developed can be better at guiding us to the right decisions than our reasoning abilities, often in ways we’ll only end up appreciating far later down the line.
Thank you to Eric Neyman for making this connection.