Errata: The Explicit and Implicit communication link is prefixed with 'localhost' instead of " https://www.lesswrong.com ". I am curious how this came about.
As text: http://localhost:3000/posts/PqbDgCtyRPYkYwHsm/explicit-and-implicit-communication
Great post.
This piece seemed like it was about being effective/making an organization effective, and how this tied into the word choice of robust didn't completely click until I read the beginning of the linked robust agents post again.
2. How do you choose people to be accountable to? What if you’re trying to do something really hard, and there seem to be few or zero people who you trust enough to be accountable to?
Perhaps, don't start with "accountability". Try a lower level.
1) A rubber duck that you explain things to.
2) A person that you talk about things with.
3) A person that you're accountable to.
(This might be easier to answer, or better answered, with a better model of accountability, and how to build trust.)
Huh, that's an interesting take (starting accountability with "explain yourself to a rubber duck"). I think I like it.
Errata: The Explicit and Implicit communication link is prefixed with 'localhost' instead of " https://www.lesswrong.com ". I am curious how this came about.
Whoops. I sometimes am writing a post while using a local dev install of lesswrong.
I think that agency requires a membrane, something that keeps particular people in and out, such that you have any deliberate culture, principles, or decision making at all.
In this TED talk, Religion, evolution, and the ecstasy of self-transcendence, Jonathan Haidt talks about how having a membrane around a group is necessary for group selection to happen. This seems very related.
Without a membrane, the free rider problem cannot be solved. If the free rider problem is not solved, then the group cannot be fully aligned.
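The free-rider dynamic here can be sketched as a toy public goods game. This is just an illustrative sketch with made-up payoff numbers (the endowment, multiplier, and function names are my assumptions, not anything from the comment): contributions to a common pot get multiplied and split evenly, so a member who contributes nothing still collects a share, and without a membrane the group has no way to exclude them.

```python
# Toy public goods game: contributions are doubled and split evenly.
# With no membrane (no way to exclude anyone), a free rider keeps their
# full endowment AND collects a share of everyone else's contributions.
# All numbers are illustrative assumptions.

def public_goods_payoffs(contributions, endowment=10, multiplier=2.0):
    """Each player's payoff: what they kept of their endowment,
    plus an equal share of the multiplied common pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Three full contributors and one free rider:
print(public_goods_payoffs([10, 10, 10, 0]))
# The free rider ends up strictly ahead of every contributor, which
# unravels cooperation unless the group can keep (or kick) people out.
```

Running it, the free rider's payoff strictly exceeds each contributor's, which is the incentive gradient that erodes contribution over repeated rounds.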
(I apologize in advance if this is too far afield of the intended purpose of this post)
How does the claim that "group agents require membranes" interact with the widespread support for dramatically reducing or eliminating restrictions to immigration ("open borders" for short) within the EA/LW community? I can think of several possibilities, but I'm not sure which is true:
Context: I have an intuition that reduced/eliminated immigration restrictions reduce global coordination, and this post helped me crystallize it (if nations have less group agency, it's harder to coordinate)
This definitely seems like a relevant application of the question (different specific topic that I was thinking about when writing the OP, but the fully fleshed out theory I was aiming at would have some kind of answer here)
My off-the-cuff-made-up-guess is that there is some threshold of borders that matters for group agency, but if your borders are sufficiently permeable already, there's not much benefit to marginally strengthening them. Like, I notice that I don't feel much benefit to the US having stronger borders because it's already a giant melting pot. But I have some sense that Japan actually benefits from group homogeneity, by getting to have some cultural tech that is impossible in melting-pot cities.
(I notice that I am pretty confused about how to resolve some conflicted principles, like, I do in fact think there's a benefit to open borders for general "increased trade and freedom has a bunch of important benefits" reasons, and I also think there are benefits to careful standards for group membership, and I don't know how to trade those off. I think that a lot of cultural benefits are really hard to pull off)
I think the people who are anti open borders are thinking roughly in terms of "we can't coordinate on having particular principles if anyone can just show up whenever" (although many of them don't think through any explicit game theory).
Is the thought behind "Wait as long as possible before hiring people" that you will be better able to spread values to people when you are busier, or that you can hire a bunch of people at once and gain economy of scale when indoctrinating them?
Because the naive view would be to hire slowly and well in advance, and either sync up with the new hires or terminate them if they can't get into the organizational paradigm you're trying to construct, and that requires more slack.
My interpretation of YC's original claim is something like: there's a binary jump between "you're a few friend-colleagues working together in a basement" and "you're a company that has begun to have to build bureaucracy for itself", and as soon as you start doing that you incur a huge cost, both in the sheer time you have to spend managing people, and in your ability to coordinate pivots.
I think I was making a fairly different point that was more "inspired by" this point than directly connected to it.
It's not a perfect example for the topic, as the actual reason for the advice is to avoid ANY ongoing monetary commitment for as long as possible. No payroll, no vendor contracts, nothing that creates a monthly "nut" that drains your capital before you have actual revenue streams.
It's also the case that employees come with agency alignment problems, but that's secondary.
I'd originally called this "Becoming a Robust Person or Organization", but then was noticing that this suggested the truncation "Robust Organization" as shorthand for "what an organization looks like when it's a robust agent." And that didn't feel like it had the right connotations.
I'm still not sure of a great phrase. "Agentic Organization" has different, slightly-off connotations. "Robust Agentic Organization" maybe does get it across, but I don't believe anyone will ever say it. :P
Epistemic Status – some mixture of:
tl;dr:
People are not automatically robust agents, and neither are organizations.
An organization can become an agent (probably?) but only if it’s built right. Your default assumption should probably be that a given organization is not an agent (and therefore may not be able to credibly make certain kinds of commitments).
Your default assumption, if you’re building an organization, should probably be that it will not be an agent (and will have some pathologies common to organizations).
If you try on purpose to make it an agent, have good principles, etc…
...well, your organization probably still won’t be an agent, and some of those principles might get co-opted by adversarial processes. But I think it’s possible for an organization to at least be better at robust agency (and, also better at being “good”, or “human value aligned”, or at least “aligned with the values of the person who founded it.”)
Becoming a robustly agentic person
For a few years I’ve been crystallizing what it means to be a robust agent, by which I mean: “Reliably performing well, even if the environment around you changes. Have good policies. Have good meta policies. Be able to interface well with people who might have a wide variety of strategies, some of whom might be malicious or confused.”
People are not born automatically strategic, nor are they born an “agent.”
If you want robust agency, you have to cultivate it on purpose.
I have a friend who solves a lot of problems using the multi-agent paradigm. He spends a lot of effort integrating and empowering his sub-agents. He treats them like adults, makes sure they understand each other and trust each other. He makes sure each of them have accurate beliefs, and he tries to empower each of them as much as possible so they have no need to compete.
This… doesn’t actually work for me.
I’ve tried things like internal double crux or internal family systems, and so far, it’s just produced a confused “meh.” Insofar as “sub-agents” is a workable framework, I still have a pretty adversarial relationship with myself. (When I’m having trouble sleeping or staying off facebook, instead of figuring out what needs my sub-agents have and meeting them all... I just block facebook for 16 hours a day and program my computer to turn itself off every hour of the night starting at 11pm)
I'm tempted to write off my friend's claims as weird-posthoc-narrative. But this friend is among the more impressive people I know, and consistently has good reasons for things that initially sound weird to me. (This shouldn't be strong evidence to you, but it's enough evidence for me personally to take it seriously)
I once asked him “so… how do you even get your sub-agents to say anything to each other? I can’t tell if I have sub-agents or not but if I do they sure seem incoherent. Have you always had coherent sub-agents?”
And he said (paraphrased by me), something like:
This… was an interesting outlook.
The jury’s still out on whether sub-agents are a useful framework for me. But this still fit into an interesting meta-framework.
Subagents or no, people don’t stop growing as agents when they become adults – there’s more to learn. I’ve worked over the past few years to improve my ability to think, and have good policies that defend my values while interfacing better with potential allies and enemies and confused bystanders.
I still have a lot more to go.
Becoming a robust organization
People are not automatically robust agents.
Neither are organizations.
Whether or not sub-agents are a valid frame for humans (or for particular humans), they seem like a pretty valid lens to examine organizations through.
An organization is born without a brain, and without a soul, and it will not have either unless you proactively build it one. And, I suspect, you are limited in your ability to build it one by the degree of soul and brain that you have cultivated in yourself. (Where “you” is “whoever is building the organization”, which might be one founder or multiple co-founders)
Vignettes of Organizational Coherence
Epistemic Status: Somewhat poetry-esque. These vignettes from different organizations paint a picture more than they spell out an explicit argument. But I hope it helps express the overall worldview I currently hold.
Holding off on Hiring
YCombinator recommends that young startups avoid hiring people as long as possible. I think there are a number of reasons for this, but one guess is that your ability to grow the soul of your organization weakens dramatically as it scales. It’s much harder to communicate nuanced beliefs to many-people-at-once than to a few people.
The years where your organization is small, and everyone can easily talk to everyone… those are the years when you have the chance to plant the seed of agency and the spark of goodness, to ensure your organization grows into something that is aligned with your values.
The Human Alignment Problem
Ray Dalio, of Bridgewater, has a book of Principles that he endeavors to follow, and have Bridgewater follow. I disagree with (or am quite skeptical about) a lot of his implementation details. But I think the meta-principle of having principles is valuable. In particular, writing things down so that you can notice when you have violated your previously stated principles seems important.
One thing he talks a lot about is “getting in sync”, which he discusses in this blog post:
The Treacherous Turn
This particular description about the treacherous turn (typically as applied to AI, but in this case using the example of a human) feels relevant:
I’m not sure the metaphor quite holds. But it seems plausible that if you want an organization where individuals, teams and departments don’t lie (whether blatantly and maliciously, or through ‘honest goodhart-esque mistakes’, or through something like Benquo’s 4-level-simulacrum concept), you have some window in which you can try to install a robust system of honesty, honor and integrity, before the system becomes too powerful to shape.
Sometimes bureaucracy is successfully protecting a thing, and that’s good
Samo's How to Use Bureaucracies matched my experience watching bureaucracies form. I’ve seen bureaucracies form that looked reasonably formed-on-purpose-by-a-competent-person, and I’ve seen glimpses of ones that looked sort of cobbled together like spaghetti towers.
An interesting viewpoint I’ve heard recently is “usually when people are complaining that bureaucracies don’t have souls, I think they’re just mad that the bureaucracy didn’t give them the resources they wanted. And the bureaucracy was specifically designed to stop people like them from exploiting it.”
I’m not sure how often this is actually true and how often it’s just a convenient story (bureaucracies do seem to be built out of spaghetti towers). But it seems plausible in at least some cases. And it seems noteworthy that “having a soul” might be compatible with “include leviathanic institutions that don’t seem to care about you as a person.”
Sabotaging the Nazis
On the flipside...
LW user Lionhearted notes in Explicit and Implicit communication that during World War II, some Allied sympathizers infiltrated Nazi organizations to gum up the works. They received explicit instructions like:
And... well, this all sure sounds like the pathologies I normally associate with bureaucracy. This sort of thing seems to happen by default, as an organization scales.
There's also Scott's IRB Nightmare.
Organizations have to make decisions and keep promises.
Why can’t you just have individual agents within an organization? Why does it matter that the organization-as-a-whole be an agent?
If you can’t make “real” decisions and keep commitments, you will be limited in your ability to engage in certain strategies, in some cases unable to engage in mutually beneficial trade.
Organizations control resources that are often beyond the control of a single person, and involve complicated decision making procedures. Sometimes the procedure is a legible, principled process. Sometimes a few key people in the room-where-it-happens hash things out, opaquely. Sometimes it’s a legible-but-spaghetti-tower bureaucracy.
Any of these can work. Regardless, the organization can have access to resources beyond the sum of the individual people involved. But if the organization isn't coherent, it can struggle to make the credible promises necessary for trade. (This might work a couple of times, but then trading partners may become more skeptical.)
Possible failure modes:
Sometimes nobody has any power – everyone requires too many checks from too many other people and long term planning can't happen on purpose.
Sometimes you talk to the head of the org, and maybe you even trust the head of the org, and they say the org will do a thing, but somehow the org doesn’t end up doing the thing.
Sometimes, you can talk to each individual person at the org and they all agree Decision X would be best, but they’re all afraid to speak up because there isn’t common knowledge that they agree with Decision X. Or, they do all agree and know it, but they can’t say it publicly because The Public doesn’t understand Decision X.
So Decision X doesn’t get made.
Sometimes you talk to each individual person and they each individually agree that Decision X is good, and you talk to the entire group and the entire group seems to agree that Decision X would be good, but… somehow Decision X doesn’t get done.
I think it makes sense for bureaucracies to exist sometimes, and to have the explicit purpose of preventing people from exploiting things too easily. But, it’s still useful for some part of the institution to be able to make decisions and commitments that weren’t part of explicitly-laid-out bureaucracy chain.
Porous movements aren’t and can’t be agents
I think that agency requires a membrane, something that keeps particular people in and out, such that you have any deliberate culture, principles, or decision making at all.
Relatedly, I think you need a membrane for Stag Hunts to work – if any rando can blunder into the formation at the last moment, there’s no way you can catch a stag.
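The stag-hunt point can be made concrete with a toy payoff function. This is an illustrative sketch with made-up numbers (the payoff values and function names are my assumptions): hunting stag only pays if literally everyone commits, so a single last-moment rando choosing rabbit zeroes out everyone else's hunt.

```python
# Toy stag hunt: the stag payoff requires *unanimous* commitment;
# a single defector ("rabbit") leaves the stag hunters with nothing.
# Payoff values are illustrative assumptions.

STAG_PAYOFF = 10   # each hunter's share if everyone hunts stag
RABBIT_PAYOFF = 3  # guaranteed solo payoff for hunting rabbit

def payoffs(choices):
    """Return each player's payoff given a list of 'stag'/'rabbit' choices."""
    everyone_committed = all(c == "stag" for c in choices)
    return [
        (STAG_PAYOFF if everyone_committed else 0) if c == "stag" else RABBIT_PAYOFF
        for c in choices
    ]

print(payoffs(["stag", "stag", "stag"]))    # coordinated group: everyone gets 10
print(payoffs(["stag", "stag", "rabbit"]))  # one rando: hunters get 0, defector gets 3
```

Without a membrane you can't guarantee the unanimity condition, so the safe individual move is always rabbit, and the stag equilibrium never gets off the ground.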
Organizations have fairly strong membranes, and sometimes informal community institutions can as well. But this is relatively rare.
So while I’m disappointed sometimes when particular individuals and organizations don’t live up to the ideals I think they were trying for… I don’t think it makes much sense to hold most “movements” to the ideal of agency. Movements are too chaotic, too hard to police, too easy to show up in and start shouting and taking up attention.
Instead, I think of movements as a place where a lot of people with similar ideals are clustered together. This makes it easier to find and recruit people into organizations that do have membranes and can have principles.
Narrative control and contracts, as alternative coordination mechanisms
Another friend who ran an organization once remarked (paraphrased)
This seems plausible to me. But importantly, I don’t think you get “uphold contracts” as a virtue for free. If you want your employees to be able to do it reliably, you need mechanisms to train and reinforce that. (I think if you recruit from some homogeneous cultures it might come more automatically, but it’s not my default experience.)
Integrity and Accountability
Habryka recently wrote about Integrity and Accountability, and it seemed useful to just quote the summary here:
Open Problems in Robust Group Agency
Exercises for the reader, and for me:
1. How do you make sure your group has any kind of agency at all, let alone that it’s ‘value-aligned’?
2. How do you choose people to be accountable to? What if you’re trying to do something really hard, and there seem to be few or zero people who you trust enough to be accountable to?
3. It seems like the last cluster of people who tried to solve accountability created committees and boards and bureaucracies, and… I dunno, maybe that stuff works fine if you do it right. But it seems easy to become dysfunctional in particular ways. What’s up with that?
4. What “rabbit” strategies are available, within and without organizations, that are self-reinforcing in the near term, that can help build trust, accountability, and robust agency?
5. What “stag” strategies could you successfully execute on if you had a small group of people working hard together?
5b. How can you get a small group of dedicated, aligned people?
6. How can people maintain accurate beliefs in the face of groupthink?
7. How can any of this scale?