For years I (Elizabeth) have been trying to write out my grand unified theory of [good/bad/high-variance/high-investment] [jobs/relationships/religions/social groups]. In this dialogue, Ruby and I throw a bunch of component models back and forth and get all the way to better defining the question. 🎉

Elizabeth

About a year ago someone published Common Knowledge About Leverage Research, which IIRC had some information that was concerning but not devastating. You showed me a draft of a reply you wrote to that post, which pointed out lots of similar things Lightcone/LessWrong did and how, yeah, they could look bad, but they could also be part of a fine trade-off. Before you could publish that, an ex-employee of Leverage published a much more damning account.

This feels to me like it encapsulates part of a larger system of trade-offs. Accomplishing big things sometimes requires weirdness, and sometimes sacrifice, but places telling you "well we're weird and high sacrifice but it's worth it" are usually covering something up. But they're also not wrong that certain extremely useful things can't get done within standard 9-5 norms.  Which makes me think that improving social tech to make the trade-offs clearer and better implemented would be valuable. 

Ruby

Which makes me think that improving social tech to make the trade-offs clearer and better implemented would be valuable. 

 

Seems right.

I don't remember the details of all the exchanges around the initial Leverage accusations. Not sure if it was me or someone else who'd drafted the list of things that sounded equally bad, though I do remember something like that. My current vague recollection is of feeling kind of mindkilled on the topic. There was external pressure regarding the anonymous post, and maybe others internally were calling it bad and I felt I had to agree? I suppose there's the topic of handling accusations and surfacing info, but that's a somewhat different topic.

I think it's possible to make Lightcone/LessWrong sound bad, but I also feel like there are meaningful differences between Lightcone and Leverage or Nonlinear. It'd be interesting to me to figure out the diagnostic questions which get at that.

One differentiating guess: while Lightcone is a high-commitment org that generally asks for a piece of your soul [1], and if you're around there's pressure to give more, my felt sense is that we will not make it "hard to get off the train". I could imagine that if the org did decide we were moving to the Bahamas, we might have offered six months' severance to whoever didn't want to join, or something like that. There have been asks that Oli was very reluctant to make of the team (getting into community politics stuff) because that felt beyond the scope of what people signed up for. Things like that mean that although there have been large asks, I haven't felt trapped by them, even if I've felt socially pressured.

Sorry, just rambling out some of my own initial thoughts. Happy to focus on helping you articulate the points from your blog posts that you'd most like to get out. Which tradeoffs would you most like to get out there?

(One last stray thought: I do think there are lots of ways regular 9-5 jobs end up being absolutely awful for people without even trying to do ambitious or weird things, and Lightcone is bad in some of those ways, and I generally think they're a different term in the equation, worth giving thought to and separating out.)

[1] Although actually the last few months have felt particularly un-soul-asky relative to my five years with the team.

Elizabeth

I think it's possible to make Lightcone/LessWrong sound bad, but I also feel like there are meaningful differences between Lightcone and Leverage or Nonlinear. It'd be interesting to me to figure out the diagnostic questions which get at that.

I think that's true, and diagnostic questions are useful, because a lot of the solution to this dilemma is proper screening. But less useful than you'd hope, because goodness-vs-cost is not static. I think a major difficulty is that bad-cult is an attractor state, and once you have left the protections of being a boring 9-5 organization you have to actively fight the pull or you will be sucked into it.

To use an extreme example: if the Reverend Jim Jones had died in a car accident before moving to California, he'd be remembered as a regional champion of civil rights, at a time when that was a very brave thing to be. His church spearheaded local integration; he and his wife were the first white couple in their city to adopt a black child, whom they named James Jones Jr. I really think they were sincere in their beliefs here. And if he'd had that car accident while in CA, he'd be remembered as a cult leader who did serious harm, but who was also a safe harbor for a lot of people fleeing the really fucked up cults. It wasn't until they moved to Jonestown in Guyana that things went truly apocalyptic.

I think you're right that lots of normal jobs with reasonable hours and HR departments are harmful in the same way bad-cults are harmful. I also think grad school is clearly a cult, and the military is so obviously the biggest cult that it shouldn't even be a discussion (and this would be true even if it were used for 100% moral goals; the part about bombing people who don't deserve it is an almost entirely separate bad thing). But most cult checklists specifically exempt the military and conventional clergy, and don't even think to talk about academia. I think of cult (negative valence) as kind of a pointer to "abusive relationships that scale", and you can have shitty jobs that are harmful in the same way as abusive jobs (but less so), just as you can have shitty relationships that are harmful in the same ways as abusive relationships (but less so). It's easier to talk about the strongest versions, but the milder ones are worth discussing in part because the problems are ubiquitous.

People often want to make this about intent, but I think one of the most important lessons is that intent is irrelevant in most respects. Someone who is a demanding and unhelpful partner because they were severely injured in a car accident and are on enough painkillers to remove their inhibitions but not their pain is importantly morally different from someone who is demanding and unhelpful because they are an asshole. And you have more hope the former will get better later. But if they're screaming insults at their partner, I still expect it to hurt a lot.

Elizabeth

To take this further: I think there's a good argument that jobs are coercive by default, especially if you live paycheck to paycheck and finding another job isn't instantaneous. They control your ability to feed and house your family, they're your social environment for more hours than anything else including your family, they greatly influence your ability to get your next job. That's a lot of power.

But it would be insane to say "no one is allowed to work until they have six months' expenses saved up". Because it's not really the job that's coercive, it's reality, and the job just happens to benefit from it. 

But they really can abuse that vulnerability. 

Ruby

it's not really the job that's coercive, it's reality

I had that thought as you began typing that jobs are coercive. I think this is a good point.

 

I think a major difficulty is that bad-cult is an attractor state, and once you have left the protections of being a boring 9-5 organization you have to actively fight the pull or you will be sucked into it

I don't currently see that it is an attractor state. I don't have a model saying otherwise, but I also don't have a model or evidence that makes me see why you believe this. (It's possible I can be convinced easily.)

 

I really think they were sincere in their beliefs here.

This makes me wonder about Player vs Character. I think the general point about intent not mattering all that much is key (I know my own good intent has not saved me from being a shitty friend/partner). 

I am interested in at least the question of "why do some people who seem to have good intentions slide towards very abusive/coercive behavior?" Thinking about the times I caused the most harm: my own needs and wants were very salient to me and were occluding my sense of other people's much less salient wants/needs, and that led to rationalization, and that led downhill.

I'm currently entertaining the thought that motivated cognition is vastly more pernicious than we treat it, and that human reasoning ends up biased in any case where there are degrees of freedom in belief (i.e., where the empirical evidence isn't smacking you in the face). And this explains a lot. Possibly here too: "good people" struggle to reason clearly in the face of incentives. The 9-5 job shapes incentives to avoid harm; remove those safeguards, and self-serving reasoning has nothing to check it.
 

Elizabeth

I think Player vs. Character is a useful frame, but it's also not necessary to my case for "intent is irrelevant", and there's something important in that strong form.

Let’s say a good friend emotionally abuses you to get you to donate a painful amount to their cause. Yelling, crying, calling you a terrible person. Morally, it matters a lot whether that cause is their own bank account or an Alzheimer's research charity, and whether they're going to somehow get credit for your donation. But I contend that someone who is devastated by their grandmother's Alzheimer's (and is pushing you to donate to a long-term research charity, so it's clearly not even going to help their grandmother; they just want to spare other people this devastation), and who is never going to tell anyone about your donation or their influence, is still going to do a lot of damage.

Certainly their motivation isn't going to change the financial impact at all. If you donated next month's rent, you are equally fucked whether it goes to Alzheimer's research or your friend's heroin habit or their European vacation. And the emotional impact of being cried at and insulted is similar. Maybe you take it better because you empathize with their pain, but that kind of thing will still leave a mark.

I guess you could argue this kind of emotional abuse only comes from unmet emotional needs, and so is always attempting to fulfill a selfish goal, but I feel like that pushes the word selfish beyond all meaning. 

Ruby

This seems right.

I'm wondering whether an important variable, when seeking something from someone else, is how much and how well you are modeling and optimizing for the other person's wants and needs.

If you are using very tame means of persuasion (e.g. a salesperson on the sales floor just touting the benefits of this new vacuum cleaner, appealing to how much you probably hate vacuuming and how this makes it easy), then you don't need to model the other person much, because you're unlikely to steamroll their needs.

But if you have more leverage or are doing more extreme stuff (e.g. you're an employer or partner, or you're crying/yelling), then whether this is good for the person will depend on how much you are actually, practically [1], caring about what they want/need and weighing this against what you want. I suppose the check might be "would your actions change if the other person's needs changed?"

Reflecting back again on my own past cases of causing harm: my model is I generally believed I cared about the person, but my own needs were so great that I rationalized, with little thought, that what I was pushing for was what was best for them too, and didn't give much genuine thought to whether that was actually so.

I think it's also easy to underestimate the amount of leverage or sway you have over others, which might cause them to placate you instead of advocating for their own needs. This can be true for employers and partners. An employer might think that if the employee really doesn't like it, they'll push back, without occupying the mental headspace of how soul-crushing and insecurity-producing job hunting is, or, for that matter, the raw need for money.

In my case, I'm noticing that although I'm senior, skilled, well-paid, and could likely get another job, I feel more locked in these days because of the large mortgage I've taken on. I could likely get paid the same amount elsewhere, but probably not the same amount plus an equivalent amount of "meaning", which has made me realize I'm going to put up with quite a lot to stay at Lightcone.

[1] by "practically caring", I mean to differentiate from "felt caring" where you emotionally care, but this fails to connect to your reasoning and actions. 

Ruby

I'm interested in getting to any ideas you have for navigating this reality well, improving the trade-off, etc.

Elizabeth

Man, it's hard.

One of the easiest things you can do is fire or not hire people who (will) find the job too challenging. That's really all a manager owes the employee: hiring someone doesn't give you a lifelong responsibility for their emotions. And you can even give them a generous severance, which removes a great deal of the financial risk.

But maybe they really want the job, in ways severance pay doesn't change. That can transform "I only want people here if it's genuinely good for them" into "convince me you're happy here or I'll fire you", which is fertile ground for a lot of harm.

One big reason this can happen is that the employee is getting paid in meaning, impact, or future job opportunities, and doesn't see another path to getting these things. On a very mundane level: I've put up with conditions at impactful jobs that I wouldn't have tolerated for big tech bullshit jobs. And you can argue about whether I was right in those specific cases[1], but I 100% stand by the general equation that I should accept more pain while fighting against poverty or for vaccines than for Google advertising project #19357356. And I would be mad at anyone who pushed me to quit just because I was over my stress line for less important projects.

I've known a number of members of a number of ~cults. Some of those cults had huge variance in how members turned out. Of those, the members who did well were the ones who went in with their own plans. They might have changed them once they got involved with the group, and the group might have been integral to their new plans, but if they lost the group they knew they'd be able to form and execute a new plan. This gave them both the ability to assess their groups' plans and to push to change them, secure in the knowledge of their BATNA. 

The people who got really hurt in those ~cults were the ones who were looking to the group to provide them meaning/purpose. And if you'd told them "you have to prove you're happy here or we'll kick you out", they would have faked happiness harder and been even more fucked up by the experience, even if that rule was 100% sincerely meant for their benefit. 

"if you're not happy we'll work very hard to make you happy" is better in some ways but worse in others. Participating in that process makes you quite vulnerable to the employer, and they need to be not just virtuous but skilled to make that pay off. 

And this is for the simplest solution to the simplest case. It gets much thornier from there.

One concept I'd love to popularize for reasoning about jobs in particular is a vulnerability budget. When people take on vulnerability to an org (in the form of a low paycheck, or living with co-workers, or working long hours on little sleep...) the impact isn't linear. I think for most people you get slightly worse and slightly worse until you cross the event horizon and get sucked into the black hole of a toxic work environment you cannot escape from or fix. In part because once things are that bad you lose the ability to notice or act on your preferences to change the environment. So no matter how good the trade looks locally, you shouldn't push people or let yourself be pushed to take on more vulnerability than your budget allows. 

[Different people have different budgets and different costs for a given condition]

Given that, the question is "how do you want to spend you(r employees') vulnerability budget?" You can pay them poorly or have a group live-work house, but not both. You can ask them for insane hours, but then you'd better be paying luxuriously.

This is tricky, of course, because you don't know what your employees' budgets are or what a given request costs them. But I think it's a useful frame for individuals, and employers should be cooperative in working with it (which sometimes means giving people a clean no).
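
To make the nonlinearity concrete, here's a toy sketch in code. Everything in it (the conditions, the weights, the exponent, the budget threshold) is a made-up placeholder rather than a measurement; the only point is that asks which look affordable one at a time can jointly cross the event horizon.

```python
# Toy model of a vulnerability budget. All numbers are illustrative
# placeholders, not calibrated to anything real.

VULNERABILITY_COSTS = {
    "below_market_pay": 2.0,
    "live_work_housing": 3.0,
    "long_hours": 2.5,
}

def total_vulnerability(conditions, exponent=1.5):
    """Sum the costs of the asks, then apply a superlinear penalty:
    each additional ask costs more than it would on its own."""
    return sum(VULNERABILITY_COSTS[c] for c in conditions) ** exponent

def within_budget(conditions, budget=12.0):
    """Budgets vary by person; 12.0 is an arbitrary stand-in."""
    return total_vulnerability(conditions) <= budget

# Each ask looks affordable alone or in pairs; stacking all three
# blows the budget.
print(within_budget(["long_hours"]))                             # True
print(within_budget(["below_market_pay", "live_work_housing"]))  # True
print(within_budget(["below_market_pay", "live_work_housing",
                     "long_hours"]))                             # False
```

The superlinear exponent is doing the "event horizon" work here: below the threshold each marginal ask looks cheap, and then suddenly it isn't.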

Ruby

One of the easiest things you can do is fire or not hire people who are finding the job too challenging.

I don't think this is easy at all. Back to my previous point about motivated cognition: people already find it hard to fire people, even difficult, disruptive, and unproductive employees. It usually takes far too long. To get someone to conclude that a productive, albeit unhappy, employee should be fired? That's too hard. I expect a bunch of effort to go into solving the causes of unhappiness, while rationalizing away, and possibly being unable to accept, the deep intractable causes that mean the person should leave.

And you lay out well the reasons on the employee's end for not wanting to quit even if it's bad.

I'm more hopeful about approaches that just cause conditions to generally be better. I like this "vulnerability budget" idea. That seems more promising.

One thought is that ordinary 9-5 professional workplace norms are just a set of norms about what's reasonable and acceptable. Perhaps what we need is just an expanded set of norms that's more permissive without being too permissive. Employers don't need to do complicated modeling; they just follow the norms. Employees know that if the norms are being broken, they'll get listened to and supported in having them upheld.

And maybe you'll say that the whole point is we want our orgs to be able to go be agentically consequentialist about all the things, and maybe so, but my guess is there's room for norms like vulnerability budgets.

An alternative norm I'd like to see in place is a broader cultural one around impact status. I think many people are harmed by the feeling that they need a job with impact status at all times, just to be admitted to "society", and that it wouldn't be at all okay to go work at a nice big tech company if there's no good opening in an impact org that actually works for you. This means people feel they have very few BATNA organizations to go to if they leave their current one (I feel it!). If we can say "hey, I respect taking care of yourself and hence leaving a job that's not good for you, even if it means less impact right now", then maybe people will feel a bit more willing to leave roles that aren't good for them.

Pedantic note: I think we might need better language than "happy". Many people, myself included, don't think trying to be happy all the time (for some definition) is the right goal. I'd rather pursue "satisfaction" and "meaning". And once you're okay with feeling unhappy for the right cause, I think it takes extra work to know that some kinds of unhappy-but-meaningful are healthy and some are not, and that's trickier to tell than happy vs. not.

Elizabeth

I know one of the difficulties for me personally is that there are certain trades of happiness for other goals I would want to make if they were instantaneous, but that aren't safe if the unhappiness is prolonged, specifically because I'm so good at tanking misery for a cause. Once I'm ~unhappy (which isn't quite the right word), I become very bad at evaluating whether I'm actually getting what I wanted or was promised, or what could be done on the margin to make things easier. So misery-for-impact is a high-risk maneuver for me, one I will usually only make when the deal has a very sharp end date.

Which ties in to something else I think is important: feedback loops.

 

Perhaps what we need is just an expanded set of norms that's more permissive without being too permissive

My guess is this won't work in all cases, because norm enforcement is usually yes/no, and needs to be judged by people with little information. They can't handle "you can do any 2 of these 5 things, but no more" or "you can do this but only if you implement it really skillfully". So either everyone is allowed to impose 80-hour weeks, or no one can work 80-hour weeks, and I don't like either of those options.

I think there are missions that can't be handled well with a set of work norms. What I would want from them is a much greater responsiveness to employee feelings, beyond what would be reasonable to expect from a 9-5 org. Either everyone agrees the job is a comfortable square hole and you will handle fitting yourself into it, or it's a complicated jigsaw and the employer commits to adapting to you as you adapt to them, and to this being an iterative process, because you have left behind the protection of knowing you're doing a basically okay thing.

You could call this a norm, but it's not one that can be casually enforced. It takes a lot of context for someone to judge if an employer is upholding their end of the bargain. Or an employee, for that matter. 

Elizabeth

I would dearly love for people to chill out about status and, separately, to assess status over a longer time horizon. I have a toy hypothesis that a lot of the hunt for status is a pica for deeper social connection, so all we need to do to make status less important is make everyone feel safe and connected.

On the latter issue: right now much of EA has a "you're only as good as your last picture" vibe. Even people who have attained fairly high status feel insecure if they're not doing something legibly impressive right this second[1]. This is bad for so many reasons. On the margin I expect it leads more people to do legible projects other people approve of, instead of going off on their own weird thing. It pushes people to stay in high-status in-group jobs that are bad for them rather than chill at a biotech somewhere (and I have to assume this pressure is much worse for people whose worst-case scenario isn't six figures at Google). I taught at Atlas this summer, and was forever telling kids about my friends who worked for high-status orgs and chose to stop.

 

[1]"High" probably means "mid" here. I think this is especially strong with borrowed status, where strangers light up when you say your employer's name but you don't have a personal reputation. 

Ruby

My guess is this won't work in all cases, because norm enforcement is usually yes/no, and needs to be judged by people with little information. They can't handle "you can do any 2 of these 5 things, but no more" or "you can do this but only if you implement it really skillfully"

 

Maybe, maybe not. I think our social/professional bubble might be able to do something at that level of sophistication. Like if there was a real landmark post saying "here be the norms" and it got lots of attention and discussion, I think after that we might see people scrutinize orgs who have 80-hour work weeks for the 2-out-of-5 thing.

 

What I would want from them is a much greater responsiveness to employee feelings

The voice of Habryka in my head doesn't like this at all and is deeply concerned about the incentives it creates. I'm trying to remember his exact words from a conversation we had, but it was definitely something along the lines of: if you reward people for suffering (e.g. giving them resources, etc.), people will want to signal suffering. (Also something like suffering correlating with low productivity.)

I do want to be sensitive to people's wellbeing, but I grant that there's a failure mode here.

This pushes me in the direction of clearly-labeled "join at your own risk" workplaces, where it's stated upfront that this work does not come with strong safeguards on your wellbeing, does not necessarily take pains to respect your boundaries, etc. – only join if you are a person who thinks they will reliably leave if it becomes unacceptable. An employer applying such a label to themselves should expect fewer applicants, and those applicants should expect less support if they ever complain.

Possibly we want this with indications of how strong the warning is, e.g. Lightcone level vs a "live remotely with just your coworkers" level.

 

I have a toy hypothesis that a lot of the hunt for status is a pica for deeper social connection, so all we need to do to make status less important is make everyone feel safe and connected at all times. 

Oh, definitely a very good hypothesis. My model is that our community is much higher on social insecurity and lower on deep social connection than many other social groups. I confess to having held, at times, an "if I am impressive/impactful enough, they will love me" belief. Also an "if I am good enough/moral enough/dedicated enough" belief.

 

all we need to do to make status less important is make everyone feel safe and connected 

Absolutely. I'm free next Tuesday if you wanted to knock that out together. 

One challenge that seems hard to me with that is the non-fixed boundary of the community, i.e., we're a social group that continues to grow. And you can't just go handing out feelings of safety and connection to everyone [1], so you've got to gate them on something.

However, that might not be the core problem. The core problem might be that most people don't even know what this deeper connection is?

I've heard it complained that people in this community don't want to be friends, they only want to be [poly] partners, and that might be related to this: people learning that the only context in which you get deep connection and intimacy (or, for that matter, friendship) is romantic relationships. This is a digression, but it feels related to the lack of deep social connection and the status pica.

[1] That's how you get community centers of notoriety.

Elizabeth

What I would want from them is a much greater responsiveness to employee feelings

The voice of Habryka in my head doesn't like this at all and is deeply concerned about the incentives it creates [...] something along the lines of: if you reward people for suffering (e.g. giving them resources, etc.), people will want to signal suffering

 

I don't know what to tell you here. I agree suffering olympics are bad for everyone involved, but if you're going to do weird experimental work shit you need to listen to how it affects people and be prepared to change in response to that data. 

I think it's fine to tell employees "look this is how we do this and you can adjust or leave", but there does need to be a feedback loop incorporating how things affect workers. 

To give you an example: I had a gig that I took with the understanding that it had long hours and an early start. I would have preferred otherwise, but was willing to work with it. Once I started, it turned out that the position required more extroversion than either of us expected, and that I was physically incapable of those hours combined with that level of extroversion.

Their real choices here were to change the job or let me go. The right one depends on many things: if every staff member said the conditions were unsustainable, they should probably change the entire work environment. If I was the only one who couldn't hack it and there was a good replacement available, they should let me go, without a stain on anyone's honor. But often people will try to create a third option: "Can't you just..." when the employee has already said that they can't. It's that fake third option I object to.

And that includes situations where the constraint is weaker than physical impossibility. Bosses do not need to keep employing people who refuse to work long hours, but they're not allowed to harass them, or keep manufacturing emergencies to increase their hours. 

Ruby

nod

Harassment definitely seems like a no-no, but also seems like a thing people would fail to notice themselves doing. No one thinks "I'll just harass them a bit to do what I want."

Meta: at this point in the convo, I'm not sure what we're aiming at. Maybe good to pick something.

Possibly it's drawing out more of "and this is what we think employers should do / how norms get upheld".

With regard to employee responsiveness, I think there are things one might say are required, such as having periodic 1:1s with your staff so you know how they're doing, a clear exit policy/process, and perhaps severance even for someone leaving voluntarily. I'm not sure if this is interesting vs. whether we should retrace our steps up the stack.

Elizabeth

What do you think of going into "why bad-cult is an attractor state"? I feel like that underlies a lot of the actions I think are useful.

Ruby

Ah yes, I'm quite interested in that.

Elizabeth

An important piece of my model here is that ~everything bad[1] is driven by self-reinforcing cycles around bad attractors.

I've known senior software engineers with modest expenses and enormous savings, living in a software hub, who were too drained by their jobs to go out and job hunt. The fact that the job is shitty is what kept them in it. And SSEWMEAESLIASF is among the best possible job hunting situations any human being has ever experienced.

Earlier I said "bad-cult is an attractor state". I want to spell that out now.

Let's say a group or company starts out basically good and slightly weird. 

  1. There is probably a leader. Someone can be a great, insightful leader when surrounded by people who are skeptical of them but prepared to listen, and terrible when surrounded by people too afraid or reverent to push back. Like the George Lucas effect but for group leadership.
  2. Some of this is because the leaders become more sure of themselves and more pushy, but some is because members treat them differently, sometimes against the leader's will. A lot of people want a parent figure to tell them what to do and they're the most likely people to seek out emotional wisdom gurus.
    1. I was going to say that this was less applicable to jobs than for emotional wisdom groups, and that's true in general but I think not for EA. Certainly someone who ~wanted to draw from that pool and didn't care that much about skills would find no lack of candidates who could be motivated by pseudoparental approval or the promise of meaning impact. 
    2. I think Eliezer has avoided this by aggressively refusing to be a parent figure or community leader, or to hit on anyone. I would call this a success, but he still ends up with a lot of people mad at him for refusing to be the savior/daddy they wanted. So some of this has to be a property of followers rather than leaders.
    3. I've talked to a number of people who joined the in-person LessWrong community very early and did shit that makes Nonlinear sound tame. Things like flying to a different country and sleeping on the floor to work for free. A decade later, they're all extremely happy with their choices and think they'd be much worse off if someone had "protected" them from that option.

      There's a big selection effect here of course: the people who weren't happy left. But they still left, instead of staying and writing callout posts.

      My model is that the very early rationalist risk-takers were motivated by something different than later risk-takers. At the beginning there was no social group to get approval from; you would only go if you personally thought it was a good idea. I get the sense a lot of people in modern EA are seeking social approval/connection/status first, with the impact goals secondary.
      1. I think these people exist in rationality too, but it mostly shows up as being on the periphery and hoping for more. There isn't the same job ecosystem in which to seek approval.
    4. When I look at, uh, let's say "groups with high variance in outcomes", the people who do well are usually the ones who had goals and plans before ever joining the group. They might change their plans upon contact, but they have some security in their ability to adapt if the group disappears. The people who do the worst are the ones who are dependent on the group to provide meaning and direction. 
      1. On a very mundane level: when I was a software engineer I job-hopped more than any of my engineer friends. I wonder how much of that was downstream of getting fired fairly early in my career and being forcefully exposed to how easy job hunting was, and then contracting for a while to ingrain the skill. Interviewing was a leisure activity for me.
  3. Another part is that emotional wisdom cults are often about helping people grow, and people who just had a growth spurt are very vulnerable and easy to manipulate.
    1. My archetype here is Neo immediately after he was brought out of the Matrix. He's suddenly found out everything he knows is wrong and is dependent on this group of strangers to feed and protect him; most people become more emotionally pliant in that situation. 
    2. EA doesn't have this directly, but it does create a ~"moral growth spurt", which may have similar problems.
  4. Once a group gets labeled as cultish or untrustworthy, people treat members badly, worsening isolation, which is used to support the accusation of culthood. 
    1. I saw this happen with both Leverage and MAPLE. Members would go to parties, be asked normal, hard-to-avoid questions like "what do you do?", answer honestly, and the person talking to them would suddenly become less friendly and more suspicious. They might start interrogating the person, implicitly demanding they either condemn or wholly defend their org. Over time the group member goes to fewer and fewer non-group events, because they're tired of being harassed. 

      Even if you think the leadership of the group is malicious, this is an awful way to treat people you allegedly think are victims of abuse. 
  5. The longer you're in weird EA/x-risk jobs, the harder it is to go back to regular jobs. 
  6. If an org or group does something weird that violates one of your boundaries, and they say "oh, we do things differently because X", that erodes a little bit of your ability to set boundaries even if X is correct and the weird thing is highly useful with no real costs. The more this happens, the worse people become at noticing when they're uncomfortable and acting on it.
    1. Especially if you have few outgroup friends to tell you "wow that is fucked", or you've gotten used to tuning them out.  
  7. And of course, evaporative cooling

 

If you want to offer a job that will be positive EV for employees, and you're forgoing normal business norms, you need to actively fight the bad-cult attractor state. So I'd love to start debating specific norms, but one of my frames is going to be "how does this interact with the attractor state?"

 

[1] Please don't make me write out the caveats for this. 

Ruby

Some of these dynamics sound right, but something feels like it's missing.

First, I don't really want to spend much time defining "cult", seems worth tabooing the word slightly just to be on the same page. The relevant part of cult to me is something like: "a group situation where members end up violating their own boundaries/wellbeing for the sake of the group (or group's leader)".

Your claim is then that there's an attractor whereby when there's a group, there are forces that push it towards becoming a group where people's boundaries end up violated, presumably in order to extract more of something from them.

Your list so far feels more like it explains mechanisms that facilitate this rather than the core underlying cause. I'll summarize/paraphrase some of what I'm getting from you:

  1. Once in a state of violated boundaries/decreased wellbeing, a person can lose the ability to get themselves to leave, thus perpetuating the state
  2. Dynamics between leaders and followers allow for boundary violation, either because leaders are pushy or followers can be people who are just very vulnerable to it.
  3. People can also be vulnerable to suggestion if they've just had a personal growth spurt (I think this is quite applicable to EA etc., where there's the spurt of "I can do so much good if I approach it analytically / holy shit, the world is on fire", followed by "okay, wise person who gave me this insight, how do I do it?").
  4. Once people are in a state of boundary violation (or circumstances predictive of it), this can cause isolation that makes it further harder to get out of
  5. Evaporative cooling.

(1), (4), and [5 in your list] are ways that once you're in a cultish (boundary-violating/wellbeing-degrading) situation, it's hard to leave. (2) and (3) just point at why people are vulnerable to it to start with.

These, plus what I didn't quite paraphrase, don't seem to get at the attraction of these states though. I think that part is pretty simple, but worth stating:

People are often working to get what they value. Company founders want their company to succeed. Emotional wisdom group leaders want their groups to succeed (followers, money, respect, etc.). And so on. What people want is often in conflict and often zero-sum (I can't have your time and money while you also keep them). By default I try to get as much of what I want as I can, and there has to be something that stops me from taking stuff you've got that I want (money, time, your body, etc.). Options include:

  • I care about what you want as part of what I want, so I wouldn't take from you in a way that harms you (empathy)
  • Societal rules and enforcement that prevent me harming you
  • Societal rules and enforcement that prevent me from doing things that heuristically might harm you
  • You being able to resist me harming you. This can be via avoiding the harmful situation in the first place, or via being able to leave if you find yourself in it
    • (societal rules often focus around preserving this)

I think empathy is a shaky mechanism on its own, because of motivated cognition. Even if I do literally care about your wellbeing, my own desires can crowd out that awareness and rationalize that what I want is good for you too.

But yeah, according to this picture it's the default that I extract from you everything I can to get what I want, unless there's some force preventing me. One of the default forces preventing this is standard societal norms about workplace operation (these seem to be about preventing too much dependence, which makes leaving harder).

And if you remove these counter-forces, then the default (me taking all I can get) prevails. That's why "cultishness" is an attractor: it's something you've got to pump against.

To rephrase back to where we started, for the audience: the tradeoff is that this protection is restrictive. There can be "false positives": things prevented that wouldn't have caused harm and would have been very beneficial, or that at least had a benefit-to-cost ratio all parties were okay with.

Maybe a good question for all orgs is "how easy is it for your employees to resist things that would be very counter to their preferences/wellbeing?" or alternatively "what mechanisms ensure that employee wellbeing isn't sacrificed to the goals of the org/company/leader/group?"

Ruby

An important personal observation/insight for me this year was realizing that although I'm often a pretty empathetic person, i.e., I dislike the suffering of others and like their joy/preference satisfaction, my empathy naturally becomes weak whenever there's a conflict between what I want and what others want (often the people around me!). Or if I'm having strong emotions about something, I become less able to be aware of other people's emotions, even if I would have been quite attentive and caring about them if I weren't having strong feelings myself.

I predict this isn't uncommon and is at play when otherwise empathetic and caring people violate the boundaries and degrade the wellbeing of others.

Ruby

I want to refine it a bit more.

Please do!

 

The relevant part of cult to me is something like: "a group situation where members end up violating their own boundaries/wellbeing for the sake of the group (or group's leader)".

Perhaps, to better reflect that we're talking about multiple social situations, I should rephrase this to say that an abusive social situation is one where a person ends up violating their own boundaries/wellbeing for the sake of the "other".

 

Second, there is the question of "violating their boundaries/wellbeing/values as of when": the verdict can depend on when you ask.

I think the "when" question becomes weaker if we elaborate on all the things that cults/abusive social situations erode:

  1. Conscious boundaries: e.g. boundaries a person usually maintains for their own wellbeing, values, etc. (this could be not working too many hours, being vegan)
  2. "Inherent boundaries": the things that the person requires for their healthy functioning and wellbeing whether or not they'd have listed it as a boundary, e.g. sleep.
  3. "Freedom from coercive incentives": uncontroversial notion of an abusive relationship is one where I threaten to hurt you if you leave, but it can be more subtle such as you are dependent on the relationship for any number of reasons, and this forces you to to stay despite it being bad.
  4. Resources to leave: e.g. stuff we discussed above.
  5. Added by Elizabeth: Example: having gay sex. If you grow up the wrong kind of Christian, having gay sex violates a moral boundary. From my (Elizabeth's) perspective, that's incorrect, and helping people realize it is incorrect so they can have the consensual sex they want is a moral good. From their culture of origin's POV, that's corrupting them. And if they ever converted back to Christianity, I expect they'd feel I had corrupted them during their time in the godless liberal/libertarian world.

My guess is only 2, 3, and 4 of these hit the "when" problem; the others you can judge about a person while they are in the cult/abusive relationship, even if they're angrily defending their desire to be in it.

The moral belief/"moral boundary" one doesn't feel core to cults or something? Like from the original culture's perspective, some social entity has caused someone to end up with a false [moral] belief, but to me that's not the hallmark of a cult. People believe wrong things and try to convince others of wrong things. The problem seems lie in the methods used to get people to believe things, and keep them believing them. I can definitely conceptualize a social entity that takes people from a situation where they have false beliefs, and causes them to have correct beliefs except in an abusive way, and I'd still consider it a cult.

Elizabeth

5 definitionally hits the when problem (are they violating their own values? depends on when you ask), so I assume that's an edit issue.

I don't think it's obvious that #1 doesn't hit when-problems unless you define it very narrowly (in which case your list is incomplete), but I also don't think it matters. Not every harmful situation needs to hit every issue: if a leader is skillful enough (or a situation unlucky enough) to manipulate you entirely by making you think you want something, enough that you will protest loudly if anyone tries to take it away from you or criticize the leader, do we declare it definitely harmless? Do we define it as harmful only if the ~victim changes their mind and declares it harmful?

Ruby

Feels like we're getting technical here and I'm not sure if it's crucial to the overall topic, but maybe.

Here's a thought: we can say that these are two kinds of "boundary violations":

(A) I cause you to change your beliefs via illegitimate means. For example, I apply strong incentives on your believing something, like group acceptance or getting to keep a job. Or I use drugs or sleep deprivation to make you amenable to a belief I want you to hold. Conversely, the legitimate means of belief change are arguments and evidence that I myself find persuasive. (Kinda illegitimate, but less so, are arguments and evidence that are knowingly biased or misleading.)

(B) I coerce you to contravene a current boundary that you have.

I then claim that we solve the 'when' problem by saying all boundary violations are of boundaries you have in the moment. If I changed your boundaries (e.g. making you think gay sex was okay) such that now you actually think something is okay, then your boundary is not being violated when you do it. However, the way in which your belief was changed may or may not have been a boundary violation.

If someone convinces you of something, you take actions because of it, later decide you think they were wrong and the action was wrong – I don't think that's inherently a boundary violation even though the you before and after think the action was wrong and wish you hadn't done it. It's important to me that convincing someone of something false is not inherently a boundary violation. Only the means make it so.

I think in practice we see blends of the above. I half-convince you of something via illegitimate means, e.g. group pressure, and then further pressure you into taking the corresponding actions, e.g. by making group acceptance depend on it after I separately made you burn all your external social bridges. The boundary violation is a mix of shifting your beliefs illicitly and pressuring you to act against what remains of your resistance.

Elizabeth

I like that definition; it captures most if not all of what I'm pointing at, and highlights our crux.

I think it makes sense to treat (B) as much more serious. And it seems fine to not care much about (A) on an individual level; can't make an omelette, etc. But it is shockingly easy to make people report views they would previously, and will in the future, find repugnant, and against their own values and CEV. If you encounter one person reporting a terrible time with a group, that doesn't necessarily mean the group is at fault, and even if they are, it might be a small enough problem for a big enough gain that you don't want to fix it. But if a group is wounding people at scale, it's a problem regardless of intentionality. Society can only carry so many of those without collapsing. And I think (A) is the more interesting and useful case to discuss, specifically because it doesn't have any bright lines to distinguish it from desirable entities.

Ruby

I think the seriousness of (A) depends a lot on the severity of the means employed, and same for (B) really. Like if I lock you in a cellar, starve you, and sleep deprive you until you express a certain belief – that's hella serious.

But if I've understood your meaning, I agree that it's easy to get (A) at scale, and it can often be done more subtly than (B). Also, if you're good enough at (A), you don't really need to do (B) that much?

Habryka has an essay (not published, I think?) about how you get people to do crazy stuff, including people in EA/Rationality, and it's really just about the extent people will go to for social conformity/acceptance. Arguably, people do (A) to themselves a lot.

If I model an EA startup getting you to do a lot of stuff you previously wouldn't have agreed to, a major component is that they convinced you that violating those previous boundaries was correct, even to the extent that you'd be angry at someone trying to convince you otherwise again.

I can imagine an EA startup offering a generous severance even to people who choose to leave, giving you good references for wherever you want to go, etc., but having convinced employees that the Right thing to do is work hours too long for them and take a low salary, such that employees end up trapped by the beliefs. Maybe this is what you called the dependence on meaning.

Elizabeth

Yeah exactly.

One complication is that people are pretty variable. You and I both worked jobs with longer hours and lower salaries because the jobs were high on Meaning, and neither of us regrets it. I'm >5 years out of that org and wasn't even that productive there, but I still think my reasoning was correct, and I believe you feel the same. I don't want us to lose access to actions like that just because other companies bully other employees, or because EA as a whole memed people into accepting trades they don't actually want.

Another complication is that people rarely have all the information. If I found out my former job was secretly kicking puppies or cheating customers, I would retroactively feel angry and violated about going the extra mile for them. That's an extreme example, but it's not that hard to think of ways my job could have pivoted that I would feel betrayed by, or at least wish I hadn't put so much work in.  

Ruby

I'm thinking there might be a hierarchy to be constructed here.

Something like:

  1. coercively violating people's boundaries
  2. failing to save people from themselves
  3. ensuring people don't hurt themselves

And then we have a question of is your obligation (as an employer, partner, guru, etc) to not do (1), or is the requirement that you're responsible for (3)?

And a challenge is once someone has been hurt, it's tricky to answer whether you actively hurt them vs failed to save them (or some mixture). 

Perhaps the answer is we want (3), but only to some reasonable, finite level, and the question is what the level is after which we say "you did the reasonable stuff; if they got hurt, it's more on them". Classically that's the 9-5, separation of work and personal, etc.

Above you were calling for "check in with employees on how they're doing, fire employees if they seem to be taking too much damage" and stuff, which seems more like aiming for (3).

This is all an explorative frame, and I'm not sure it's quite right.

Elizabeth

I think those are three spots, but there are important spots between them, especially between 1 and 2. Like last time, I think the most interesting and important part lies in the area of greatest complexity, so I'm going to zoom in on that.

If you force someone to drink poisoned koolaid at gun point, that's clearly #1. If you lie to them about what it is, still #1. If someone brings poisoned koolaid onto your property with the express purpose of drinking it, that's clearly #2. 

But what if you leave the koolaid out, and they sneak into your yard and drink it? What if you properly labeled it "poison", but in a language they can't read? What if you leave out cans of alcohol that look and taste like soda, and a sober alcoholic drinks one by accident, ruining their sobriety and putting them into a downward spiral that takes years to recover from? What if the can was labeled as alcoholic if you paid attention, but you placed it with the soda cans and it blended in? What if you properly placed the alcoholic can but sometime during the 8 hour party someone, possibly a child, moved it in with the sodas? What if the sober alcoholic showed up in a terrible mood and you're pretty sure they were looking for an excuse to drink, or picked up a can that could be confused with soda so they could hide their consumption? 

What if it's a work event and you're up for the same promotion as the alcoholic? You benefit from his getting drunk. You'd never force booze down his throat, but is it your job to stop him from making a mistake? To keep the drinks sorted? At what point are you actively participating in creating a situation that hurts them and benefits you, vs. merely not taking responsibility for their actions?

I think the fundamental set of constraints is:

  1. we don't want people to benefit from harming others, even when it's not a conscious plan/their character's plan.  
  2. ...but that obligation can't be infinite, or nothing will ever happen. 
  3. a given activity can have wildly different impacts on different people.
  4. not everyone can be trusted to know how an activity will affect them, or be resilient if they are wrong. 
  5. A lot of obvious fixes make the problem worse. Telling people "hey, this is highly valuable, but only if you're extremely special; otherwise it will be harmful" has a terrible track record.

 

Elizabeth

I think #3 mostly doesn't belong on this scale, because it requires

  1. choosing someone's values for them
  2. coercing them into doing the right thing by the values you chose.

Doing this for adults is I think net-harmful even when you mean well, and it's a huge exploit for anyone who is less than totally beneficent. 

Ruby

With #3, I am also thinking of the kind of thing you advocated, or at least raised as a possibility, early on in this dialogue:

What I would want from them is a much greater responsiveness to employee feelings, beyond what would be reasonable to expect from a 9-5 org. Either everyone agrees the job is a comfortable square hole and you will handle fitting yourself into it, or it's a complicated jigsaw and the employer commits to adapting to you as you adapt to them, and to this being an iterative process, because you have left behind the protection of knowing you're doing a basically okay thing.

 

One of the easiest things you can do is fire or not hire people who (will) find the job too challenging. That's really all a manager owes the employee: hiring someone doesn't give you a lifelong responsibility for their emotions. And you can even give them a generous severance, which removes a great deal of the financial risk.

Firing someone if they seem sufficiently unhappy feels like a "save them from themselves" kind of thing. I think it's iffy and tricky, and maybe we only ever want level 2.5 on the scale, but then it's useful to point to this kind of thing and say that your obligation is "not that".

Ruby

But what if you leave the koolaid out, and they sneak into your yard and drink it? What if you properly labeled it "poison", but....

 

These are genuinely complex questions; I don't have thoughts yet. But I am curious whether you think this is a good frame to build on when it comes to figuring out norms/requirements for jobs, relationships, and other cults.

Fwiw, my preferred relationship style (especially romantic, but also friendship) does have a lot of #3. I want to save those I'm close with from themselves, and to be saved from myself. I don't know that the choice of values is especially complicated there, though.

Elizabeth

I think there is a big difference between "take responsibility for all harm that befalls strangers" and "refuse to be used to harm people you love, even if they say they want it."

I do agree that on a practical level, telling employees "if you're not happy we'll fire you" becomes a demand for emotional labor if someone needs the job (for money or meaning or anything else). You can try to handle this by only hiring people who don't need the job, but somehow my "jobs only for the independently wealthy" platform has failed to take off. 

More moderate examples:

  1. Retreats for meditation/rationality/hallucinogens are fantastically valuable for some people and life-ruining for others. How do you screen to make sure most participants find it beneficial? What’s the acceptable false positive rate?
  2. 1099s and gig work are often used to extract value from employees to employers. But I love freelancing and get angry every time a rule designed to protect me makes it harder to work the way I want to.
  3. Lots of people enjoy BDSM in a healthy way. But predators do use it as camouflage, and victims do use it to retraumatize themselves.  How should organized groups filter around this? How should individuals think about this when picking partners? What are the acceptable false positive and false negative rates?
  4. People vary a lot in how they want married life to look. Forcing your wife to be a SAHM against her will is awful; dividing home and paid labor in ways that leave both husband and wife happier is great. What if you told your then-gf you were career military, so if you had children she’d have to solo parent a lot of the time, and she accepted but is now unhappy with the deal?
  5. I interviewed an ex-cult member here TODO LINK. His take was that it was strongly beneficial to him on net, and as soon as that stopped being true he left. He did pay a heavy price for his membership but was glad he’d done it. OTOH, there are enough unhappy ex-members that there are articles about this awful cult. What’s an acceptable success ratio?

Ruby

Digging beneath the examples and everything we've been discussing, perhaps the core questions are:
 

  • What restrictions do we place on people in order to prevent harm? This includes restrictions on voluntary agreements that people are allowed to enter into.
    • The restrictions might be "you can't do X" or "you can do X only so long as you also Y"
    • Restrictions can be softer like "norms" that people will judge you for violating, or actually legal rules that can get you imprisoned, fined, etc
  • Who gets to make the judgments about what is and isn't okay?
    • In making restrictions, we are saying that individuals are not always capable of deciding for themselves. Or sometimes they are and sometimes they aren't, so do we want to assist the versions of themselves they somehow overall ~endorse?

In terms of restrictions, given that we're making them, the big tradeoff is the benefit of the many vs the harms of the few. Are we going to restrict the freedom of the many to protect some number from [possibly] great harm?

I guess we've got two angles here: (i) you have to do something like pick an exchange rate between benefits/freedoms and restricting freedom to prevent harm, (ii) you can try to improve the Pareto frontier of the tradeoffs.
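
To make that concrete, here's a toy sketch where the exchange rate is an explicit parameter. The policies and all the numbers are invented; the only point is the shape of the choice.

```python
# Toy formalization of the "exchange rate" framing (everything invented).
# Score each candidate policy as: freedom_benefit - lam * expected_harm,
# where lam is the exchange rate between benefit-to-many and harm-to-few.

policies = {
    # name: (freedom_benefit, expected_harm)
    "no restrictions":         (10.0, 4.0),
    "warnings + clear opt-in": (9.0, 2.0),
    "2-of-5 weirdness budget": (7.0, 1.0),
    "ban unusual asks":        (2.0, 0.2),
}

for lam in (1.0, 3.0, 10.0):
    best = max(policies, key=lambda name: policies[name][0] - lam * policies[name][1])
    print(f"exchange rate {lam:>4}: prefer '{best}'")
```

Different exchange rates pick different winners, which is angle (i). A Pareto improvement, a new option with more benefit and less harm than an existing one, wins at every exchange rate, which is angle (ii): you can push on those without first settling the values question.
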

Getting back to our particular examples, professional norms are soft restrictions that exist at least in part to prevent harm. We might then say "but to do great things we must forego these protections", and then we can argue whether it's really worth it, or whether there are ways to get the benefit without immense cost: e.g. you have to pay attention to your employees and be responsive to them (do Y in order to do X), or we reduce the restriction partially but not fully (you can do 2 or 3 out of these 5 things, but not all of them).

Providing warnings and making things much more clearly opt-in feel like Pareto-frontier-pushing attempts: allow things, but give people more info. I'd also include things like "get everyone to chill out about status" as an attempt to change incentives so people are less inclined to do things that are bad for them.

Ruby

I'm starting to brew some thoughts on interventions here, but need to think some more.

Elizabeth

I guess we've got two angles here: (i) you have to do something like pick an exchange rate between benefits/freedoms and restricting freedom to prevent harm, (ii) you can try to improve the Pareto frontier of the tradeoffs.

 

I think this is the heart of it, with the complication that the trade-offs and mitigation math vary a lot by person. Which is one reason large companies turn into legible bureaucracies. But for small orgs asking extraordinary things from specific people, it does feel possible to do much better.

Ruby

Not to ignore the fundamental challenge, but I'm curious what can be accomplished with Pareto-frontier-expanding things like memes/education. One idea, for example: maybe small orgs are free to make their extraordinary asks, but we broadcast a bunch of memes about understanding and defending your boundaries and about what abuse looks like (akin to "this is what drowning looks like").

Ruby

As discussed in our side-channel, we maybe ought to conclude this document here and possibly continue elsewhere at some point.

In my ideal world I'd go back through everything and pull out the parts that were most interesting to me, but unfortunately I don't have the slack right now and don't want to block publication any longer.

This has been very interesting, though. I appreciate the opportunity to help you express your models, and the chance to develop more of my own on the topic. Cheers! Thanks, Elizabeth.

10 comments

I think this is a good topic to discuss, and the post has many good insights. But I kinda see the whole topic from a different angle. Worker well-being can't depend on the goodness of employers, because employers are gonna be bad if they can get away with it. The true cause of worker well-being is supply/demand changes that favor workers. Examples: 1) unionizing was a supply control which led to 9-5 and the weekend, 2) big tech jobs became nice because good engineers were rare, 3) UBI would lead to fewer people seeking jobs and therefore make employers behave better.

To me these examples show that, apart from market luck, the way to improve worker well-being is coordinated action. So I mostly agree with banning 80 hour workweeks, regulating gig work, and the like. We need more such restrictions, not less. The 32-hour work week seems like an especially good proposal: it would both make people spend less time at work, and make jobs easier to find. (And also make people much happier, as trials have shown.)

Seems to me that debates about (de)regulation often conflate two different things, which probably are not clearly separated but exist on a continuum. One is that people are different. Another is cooperation vs defection in Prisoner's Dilemma (also known as sacrifice to Moloch).

From the "people are different" perspective, the theoretical ideal would be to let everyone do their own thing, unless the advantages of cooperation clearly outweigh the benefits of freedom.

From the "Moloch" perspective, it would be best for the players if defection was banned/punished.

As an example, should it be okay for an employee to have a sexual relation with their boss? From the "people are different" perspective, hey, if two people genuinely desire to have sex with each other, why should they be forbidden to do so, if they are both consenting adults? From the "Moloch" perspective, we have just added "provide sexual services to your boss and pretend that you like it" to the list of things that desperate poor people have to do in order to get a job.

And both these perspectives are legitimate, for different people in different situations, and it is easy to forget that the other situation exists (and to have this blind spot supported by your bubble).

Simply asking people about their genuine preferences is not enough, because of possible preference falsification. Imagine the person who desperately needs the job -- if you asked them whether they are genuinely okay with having sex with their boss, they might conclude that saying "no" means not getting the job. People could lie even if a specific job is not on the line, simply because taking a certain position sends various social signals, such as "I feel economically (in)secure".

But if we cannot reliably find out people's preferences, it is not possible to have a policy "it is OK only if it is really OK for you", and without an anonymous survey we can't even figure out which solution would be preferable for most people. (In the near future, an AI will probably compile a list of your publicly stated opinions for HR before the job interview.) So we are left guessing.
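
One partial mitigation is the classic randomized-response survey technique: each respondent privately randomizes before answering, so no individual answer reveals their preference, but the true rate is still recoverable in aggregate. A sketch with invented numbers:

```python
# Randomized response, sketched with invented numbers.
# Each respondent privately flips a coin: heads, answer truthfully;
# tails, answer "yes" no matter what. No single "yes" is incriminating,
# but the true rate can be recovered from the aggregate.

import random

def estimate_true_rate(true_rate, n=100_000, p_truth=0.5):
    yes = 0
    for _ in range(n):
        holds_view = random.random() < true_rate
        answer_truthfully = random.random() < p_truth
        yes += holds_view if answer_truthfully else 1
    observed = yes / n
    # observed = p_truth * true_rate + (1 - p_truth); invert for true_rate
    return (observed - (1 - p_truth)) / p_truth

print(f"recovered rate: {estimate_true_rate(true_rate=0.30):.2f}")  # ~0.30
```

It doesn't solve the problem that people may not know their own preferences, but it does blunt the incentive to falsify.
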

Interesting, your comment follows the frame of the OP, rather than the economic frame that I proposed. In the economic frame, it almost doesn't matter whether you ban sexual relations at work or not. If the labor market is a seller's market, workers will just leave bad employers and flock to better ones, and the problem will solve itself. And if the labor market is a buyer's market, employers will find a way to extract X value from workers, either by extorting sex or by other ways - you're never going to plug all the loopholes. The buyer's market vs seller's market distinction is all that matters, and all that's worth changing. The great success of the union movement was because it actually shifted one side of the market, forcing the other side to shift as well.

I agree that in the long term, a seller's market is the answer (and in the era of AGI, keeping it so will probably require some kind of UBI). But the market is not perfect, so the ban is useful to address those cases. Sometimes people are inflexible -- I have seen people tolerate more than they should, given a market position they apparently were not aware or sure of. Transaction costs, imperfect information, etc.

One missing piece of context from this response is that a central case under discussion is the one where the employer is hypothetically aligned with the goals of its employees (as is often the case for small non-profits hiring heavily mission-aligned employees).

By "hypothetically", I just mean that the employees (and likely the employer) think they are basically aligned in this way, but there are remaining concerns around mistakes, deception, bad general norms, etc.

Sure, but there's an important economic subtlety here: to the extent that work is goal-aligned, it doesn't need to be paid. You could do it independently, or as partners, or something. Whereas every hour worked doing the employer's bidding, and every dollar paid for it, must be due to goals that aren't aligned or are differently weighted (for example, because the worker cares comparatively more about feeding their family). So it makes more sense to me to view every employment relationship, to the extent it exists, as transactional: the employer wants one thing, the worker another, and they exchange labor for money. I think it's a simpler and more grounded way to think about work, at least when you're a worker.

So it makes more sense to me to view every employment relationship, to the extent it exists, as transactional: the employer wants one thing, the worker another, and they exchange labor for money.

I mean, this is certainly not the relationship I have with my employer.

Here is an alternative approach you could use which would get you closer to this:

  • (Non-profit and values ~aligned) employers pay competitive wages or (ideally) pay in impact equity.
  • Employees adopt the norm of maximizing (expected) profit. They can donate this to a charity of interest. (Including donating it back to the charity they work at, but this isn't an expectation.)

This seems like a good approach naively, but unfortunately, I think there are a number of inefficiencies with wages and impact assessment that imply the costs here aren't worth the benefits in clarity.

I agree with you that UBI is the solution to 98% of labor condition issues, and that's a major reason I support it. But some fields pay primarily in some other currency (impact, social status, connections), so you'd also need UBsocialsupport, UBfeelingImattertotheworld, etc. 


My guess is this won't work in all cases, because norm enforcement is usually yes/no, and needs to be judged by people with little information. They can't handle "you can do any 2 of these 5 things, but no more" or "you can do this but only if you implement it really skillfully". So either everyone is allowed to impose 80-hour weeks, or no one can work 80-hour weeks, and I don't like either of those options.

I think this might be wrong - for example, my understanding is that there are some kinds of jobs where it's considered normal for people to work 80-hour weeks, and other kinds where it isn't. Maybe the issue is that the "kind of job" categories that norms can easily operate on let you pick out things like "finance" but not "jobs that have already made one costly vulnerability bid"?