jacobjacob

One of my favorite pastimes is to study historical examples of teams who accomplished ambitious projects effectively. On this topic, I recently wrote the post "A Golden Age of Building? Excerpts and lessons from Empire State, Pentagon, Skunk Works and SpaceX".

In this dialogue, I'd like to discuss a particular clue I have about what makes teams effective: the abstraction of a "low-dimensional interface". I believe this is quite key, but when I raised it with you in person, it seemed you didn't believe it!

So let's go ⚔️

I will make a rambly opening statement, that I then expect to get unpacked as we go. My claim is: in order to design and engineer an artefact, you need to move through a high-dimensional solution space. The more constraints you have, the harder it will be to find something. So you'll need a lot of flexibility in easing and tightening various constraints, and trying different things. (I think people who build stuff will resonate with the experience that changing one part of the system design will often have subtle ramifications in a different part.) 

All these design trade-offs happen best inside of a single head that can hold all the context and trade-offs at the same time, and make design choices that track the dependencies better than can be done between two people.

So, when I say that you should "carve team boundaries at low-dimensional interfaces", I also mean that you should "carve project ownership / leadership in a way that's maximally inclusive of the high-dimensional dependencies the project owner will have to orient around".

In addition to that, low-dimensional interfaces serve the important function of making it harder for stupid requirements or idiotic decisions to propagate through to other parts of the organisation. (Whereas if you have high-dimensional interfaces, someone might decide on something that'll end up really messing you up!)

kave

My immediate response: needs existence proof. Low-dimensional interfaces seem great, to the extent that you can find them. If you can factor a task into a bunch of subtasks with clear success criteria and no interplay among the dependencies, you're in a fantastic position.

Here are a couple of problems I anticipate:

  1. It's not obvious beforehand how to split the task up. When you initially try, you will fail.
  2. There will be a lot of pressure from the manager to act as if the low-dimensional interfaces hold. For example:

    Peon: I'm not sure exactly what to do here. I think I want to check in with Team Make-A-Big-Pile-Of-Rocks to see how they're thinking about things.
    Glorious Manager: Well we've separated things into low-dimensional interfaces, so you don't have to worry about Team MABPOR except for making sure that your widgets can all zorble. That's all I'm asking for you to do with Team MABPOR, so you should just be able to go full steam ahead.
    Peon: Yeah, I feel like there might end up being more cross-dependencies.
    Glorious Manager: Oh, well if there's some way in which we haven't carved low-dimensional interfaces, I'm very interested to hear it. Do you have any specific things that you're worried about that Team MABPOR will secretly depend upon from you?
    Peon: No nothing specific.
    Glorious Manager: OK, so you should go as fast as possible, right?
    Peon: Well I thought I'd begin by gathering all the rocks we own and getting them out of the way, maybe paying someone to remove them all. It's a bit expensive, but it does fit within the budget for our teams ... and I was also thinking about getting a giant resonator to test all the zorbles. It will make the whole office building constantly shake, but I guess that doesn't directly impact any of the low-dimensional interfaces.
  3. You will fail horribly at questioning enough requirements or figuring out elegant, cross-boundary solutions, because you will have put everything inside black boxes.

Instead, it seems better to me to try to do things like (a) fixing inter- and intra-team communication, and (b) taking actions that are sensitive to undiscovered requirements and dependencies. Yes, some optimisations work better inside closed systems, but you're giving up on that to a large extent when you work as a team. I think you can eventually pay the devilish price of scaling by moving to low-dimensional interfaces, but you will be a lot less efficient (though you will gain parallelism).

jacobjacob

I have a few thoughts... but my main response: if you want to direct 10+ people to solve a problem, you need some way to split things up. For example: 

  • Early SpaceX carved things along structures, avionics, propulsion, and launch as key areas of ownership 
  • US Armed Forces sometimes carve responsibility along geographical lines, with a single commander having authority over everything inside the boundary
  • a building project will split general contracting into, for example, foundation, mechanical, electrical, plumbing, framing, and drywall as abstractions; and the design into interior design, civil engineering, structural engineering, landscaping, architecture, and more

I claim that, surely, there's some Art and Science to this "abstraction carving" motion. And given that, I pose to you the question: 

What heuristics should managers use to carve these abstractions? 

I propose "carve along low-dimensional interfaces" as a key heuristic for managers. You say it's not obvious how to do it, and I agree. It's a substantial, object-level, tricky problem, and one of the most important ones the manager will face.

In addition, you say it will "fail horribly at figuring out cross-boundary solutions". I actually claim the opposite! One of the points of low-dimensional interfaces is that you have high-dimensional dependencies inside your carved abstraction, giving the designer a lot of flexibility to find surprisingly elegant points in optimisation space.

I'd like to end by quoting the great grug brain:

early on in project everything very abstract and like water: very little solid holds for grug's struggling brain to hang on to. take time to develop "shape" of system and learn what even doing. grug try not to factor in early part of project and then, at some point, good cut-points emerge from code base

good cut point has narrow interface with rest of system: small number of functions or abstractions that hide complexity demon internally, like trapped in crystal

grug quite satisfied when complexity demon trapped properly in crystal, is best feeling to trap mortal enemy!
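
To make grug's point concrete, here's a minimal TypeScript sketch of such a cut point (all names are hypothetical, not from any real codebase): the messy internals stay trapped behind a couple of exported functions.

```typescript
// pricing.ts -- hypothetical module illustrating grug's "narrow cut point".
// The complexity demon (discount rules, tax, rounding) lives inside;
// the rest of the system only ever sees two exported functions.

interface LineItem {
  sku: string;
  unitCents: number;
  quantity: number;
}

// Internal helpers: high-dimensional, coupled, and free to change.
function applyBulkDiscount(cents: number, quantity: number): number {
  return quantity >= 100 ? Math.round(cents * 0.9) : cents;
}

function applyTax(cents: number): number {
  const TAX_RATE = 0.08; // an assumption for the sketch
  return Math.round(cents * (1 + TAX_RATE));
}

// The narrow interface: everything above stays trapped in the crystal.
export function quoteCents(item: LineItem): number {
  const base = item.unitCents * item.quantity;
  return applyTax(applyBulkDiscount(base, item.quantity));
}

export function quoteTotalCents(items: LineItem[]): number {
  return items.reduce((sum, item) => sum + quoteCents(item), 0);
}
```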

kave

Things I think you and I agree on:

  • People need to work on separate parts of the problem at the same time
  • If a task is decomposed along low-dimensional interfaces, that is great
  • People can get better at choosing good task decompositions (which might be good because of their low-dimensional interfaces or for other reasons)
  • Many fields have standard decompositions that work pretty well

I think we disagree on:

  • Creating decoupling between teams is tractable by a manager thinking hard. (I would probably go as far as to say: attempting to create decoupling and leverage that presumed decoupling will often cause more harm than good)
  • How much marginal value there is in getting task decomposition right upfront (for a small team), vs fluidly adapting them

Do you agree that leveraging the attempted decoupling is likely to cause a lot of problems? I think if you do, we don't disagree about anything that substantive. If not, I think that's where the disagreement is.

What heuristics should managers use to carve these abstractions?

I think I should try and answer this question, but I am tempted to say "whatever, it's not where the juice is". Let me spend 2 minutes thinking.

Here are some thoughts: I do think it's nice to have low-dimensional interfaces. My inner John Wentworth tells me "almost every possible decomposition is way less decoupled than one a human would pick". And that seems right. Here are some more things that I came up with to pay attention to:

  • Roughly equal work profiles during the project (e.g. not one person working on something frontloaded and someone else on something backloaded)
  • Chunk things smaller if you have completed fewer projects in that domain
  • Based on team members' skills and proclivities

One of the points of low-dimensional interfaces is that you have high-dimensional dependencies inside your carved abstraction

Sure. But my worry is that in practice having a decomposition that is labelled as decoupled will lead you to miss out on solutions that work by coupling parts across the claimed decomposition.

I'd like to end by quoting from your quote of the great grug brain:

grug try not to factor in early part of project and then, at some point, good cut-points emerge from code base

jacobjacob

Speaking of grug, he brings up an interesting instance of problem decomposition, which can also serve as a case study: web development splitting things into style (CSS file), markup (HTML file) and logic (JavaScript file). A split that, contrary to my proposition, strikes me as carving some quite high-dimensional interfaces!
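
To illustrate (a minimal sketch; the selectors and class names below are hypothetical): the "logic" file ends up silently depending on both the markup's structure and the stylesheet's class names, so a change in either of the other two files can break it.

```typescript
// logic.ts -- a sketch of cross-file coupling in the style/markup/logic split.
// "#menu-button" and "sidebar--open" are hypothetical names. The point is that
// this "logic" file depends on both the HTML's structure and the CSS's class
// names, so the three-file split is not actually a low-dimensional interface.

const button = document.querySelector<HTMLButtonElement>("#menu-button");
const sidebar = document.querySelector<HTMLElement>("nav .sidebar");

button?.addEventListener("click", () => {
  // Renaming this class in the CSS file silently breaks this file.
  sidebar?.classList.toggle("sidebar--open");
});
```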

Do you agree that leveraging the attempted decoupling is likely to cause a lot of problems?

Nope, I do not. So that's a disagreement. 

However, I'd like to not directly reply to these (and your other) claims, but instead strike at the core and attempt to add some juice.

In particular: I have a hunch that the principle of low-dimensional cuts has explanatory power for a whole range of puzzles I've observed in practice. Consider, for example:

  • When building or designing physical spaces (like in interior design or construction work), sometimes it really feels hard to think about a space you’re not physically in, and, in addition, one where you're not able to move the furniture around and try different things
  • When discussing a problem with a building remotely, over Slack or phone, the situation might seem stuck. But when you go to the actual work-site, look at things, and start discussing with a contractor, a lot of new solutions, and "third ways", appear. These high-scoring points in optimisation space didn't appear when you were elsewhere
  • Thorny and confusing disagreements sometimes go down really poorly over text, but really well in person
  • Having reports "question the requirements", and having requirements pass through a human brain and be attached to a responsible person, makes for less stupid designs
  • I claim that cross-functional teams often design and engineer better products faster than specialised teams. Same for in-person vs. remote teams

Why? 

I think these are some of the most important empirical phenomena I've observed about what enables successful designing, engineering, and building. Often it's felt to me that these aren't isolated phenomena, but that there's something similar about them... and I think having shower thoughts about this was part of what made me formulate the hunch that "low-dimensional cuts" somehow served as a unifying theme. But in thinking over this comment -- lo and behold, this is a real-time update -- I realise that I instead want to posit an upstream principle. Perhaps this will be one on which there is more agreement: 

You want the high-bandwidth channels in your organisation to line up with the high-dimensional dependencies in your problem space. 

Designing a space is easier while physically present, insofar as there are too many degrees of freedom to summarise succinctly over text, and the highest-bandwidth channel for processing those is a single brain that also understands the other dependencies in the problem space and what trade-offs are permissible. Disagreements go better in person insofar as the highest-dimensional dependency is being able to traverse the space of possible negotiations / solutions in a way requiring detailed back-and-forth to understand the other's position.

This is juicy to me because various contortions of this generator seem to explain a bunch of the important puzzles.

But in an organisation, you're constrained on high-bandwidth interfaces. Only so many people can share an office. Only so many people can be on a team. Each manager can only have so many responsibilities. Hence you need heuristics for making these trade-offs.

Curious if this seems closer to where the juice is? More agreeable? Or similarly objectionable?

kave

I basically agree with "You want the high-bandwidth channels in your organisation to line up with the high-dimensional dependencies in your problem space" (though I might flip it around so the emphasis is more on "for each dependency, a channel" and less on "for each channel, a dependency").

I feel a little chafe at:

But in an organisation, you're constrained on high-bandwidth interfaces. Only so many people can share an office. Only so many people can be on a team. Each manager can only have so many responsibilities. Hence you need heuristics for making these trade-offs.

I think this presumes the high-bandwidth interfaces are static. For novel (to a team or manager) domains, I claim you should mostly be optimising for moving the high-bandwidth interfaces to where they need to be at a given time (which involves a lot of figuring out where exactly that is). Like how this slime mold covers everything with its body (high-bandwidth) first and then prunes away to just the connections that are needed.

[Image: Slime mold networks – Mark Fricker: Research]
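
As a toy sketch of that strategy (the teams, message counts, and pruning threshold below are all invented for illustration): connect every pair of teams first, observe which channels actually carry cross-team dependencies, then prune the quiet ones.

```typescript
// A toy model of the slime-mold strategy: start fully connected,
// observe usage, then prune the channels that turn out not to matter.

type Channel = { a: string; b: string; messages: number };

// Phase 1: cover everything with "body" -- a channel between every pair.
function fullyConnected(teams: string[]): Channel[] {
  const channels: Channel[] = [];
  for (let i = 0; i < teams.length; i++) {
    for (let j = i + 1; j < teams.length; j++) {
      channels.push({ a: teams[i], b: teams[j], messages: 0 });
    }
  }
  return channels;
}

// Phase 2: keep only the channels that carried real dependencies.
function prune(channels: Channel[], threshold: number): Channel[] {
  return channels.filter((c) => c.messages >= threshold);
}

const channels = fullyConnected(["propulsion", "avionics", "structures"]);
channels[0].messages = 42; // propulsion <-> avionics turned out to be chatty
const kept = prune(channels, 10); // only the load-bearing channels survive
```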
 

I also think you need some art of "working in domains where you don't know the dependencies yet". I don't feel like I yet know the art, to be clear. It seems likely that an important part is running forward as quickly as possible with your best guess of the uncoupled parts. But I think an important part is also not making costly-to-reverse decisions based on your model of the uncoupling.

jacobjacob

That slime-mold graphic and metaphor is really cool! And I'm excited that it seems you agree more with the new formulation of the high-bandwidth criterion. 

I also agree with a bunch of your claims in principle, and I think most of my urge to know centers around how to accomplish those things in practice.

Overall, it's been a month since we started this exchange, and it did hit on some interesting things, so I think it's time we just go ahead and publish it, and then see if and when we end up sending follow-ups. 


Somewhat tangentially, as we post this, I'd like to share some excerpts that appeared relevant to our discussion, from a book I recently read parts of: Industrial Megaprojects: Concepts, Strategies, and Practices for Success. It's written by a former economist who runs a consultancy for the oil, gas, chemical and mining industries. He collected data about all his clients into a database of 13,000 projects, and runs a lot of t-tests on them. (Out of those, about 300 are megaprojects, that is, ones costing $1B or more.)

He says:

The biggest driver of increasing core team size requirements is the number of subprojects involved in the development. It is not the project size per se that drives team size; it is the subprojects. For example, a $2 billion liquefied natural gas (LNG) train addition at an existing site without new upstream gas supply requirements is a fairly simple project and can be handled by a core team of 20 to 35 people. Increasing the project cost by building two new trains instead of one does not increase the personnel requirements. Conversely, a $2 billion grassroots chemical complex using the new technology for one of the primary units may require 50 percent more people. The project will likely be broken into at least two subprojects, which increases the core team size by about 10 to 12, and an R&D cadre will be required that increases the core team by 5 to 10.

(p. 167)

I am intrigued, because I think this is another interesting corollary of my "low-dimensional cuts" heuristic. A natural question is: when should you add a middle manager? And the answer is: whenever you find enough recursive substructure in the domain that you can carve out for the middle manager a domain of ownership which itself contains low-dimensional cut points. (I don't yet know how to formulate this corollary using the new high-bandwidth phrasing of the heuristic.)

Another interesting quote: 

Interface management is an issue for even small projects, but it is a major issue for megaprojects. By the time a typical megaproject is completed, there will have been hundreds of organizations involved in varying degrees. In many respects, the task of megaproject management is a task centered around the effective management of the interfaces. The interfaces are opportunities for conflicts and misunderstandings to occur. They are the places where things tend to 'fall between the cracks'.

(p. 184) 

The book also talks about Alan Mulally's team designing the Boeing 777 in the 1990s. Wikipedia claims he had as many as 240 concurrent design teams, each with 40 members. That was too many to be coordinated just through his head, so instead he implemented a structure where subteams would speak a lot to each other. Each team would have a dedicated "integration manager", whose job was just to run around and track how that team's changes would affect other teams, and escalate to a cross-team meeting if things were too incompatible. Big boss Mulally referred to himself as "chief integrator". (p. 195-196) (Overall, I'm not sure if this massive org structure is very efficient, or just Boeing bloat...)

kave

I'm fine to publish! Interesting latest response. Hope to respond sometime.

kave

I was recently musing on two bits of programming lore that I thought you might find interesting.

The first is Conway’s Law: “Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure”. I might split it into a strong form and a weak form. The strong form is quite literal about “a copy of the […] structure”. In particular, it would claim that the design’s structure is unresponsive to the territory, responding only to the organisation’s communication structure. The weak form would claim that the correspondence is something more like a homomorphism: the design’s structure is a coarse- or fine-graining of the communication structure.

I think the weak form pretty directly says “it’s nice to have the low-bandwidth communication channels line up with the decoupled elements of the domain”.
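
As a sketch of what the weak form asserts (the module names, team names, and edges below are all hypothetical): given a map from design modules to the teams that own them, every dependency between modules should be carried by a communication channel between the owning teams, or stay within a single team.

```typescript
// Weak-form Conway check, as a sketch: map each design module to its owning
// team, then require every module dependency to land on a communication
// channel between the owning teams (or stay within one team).

type Edge = [string, string];

function respectsConway(
  moduleDeps: Edge[],            // dependencies between design modules
  teamChannels: Edge[],          // communication channels between teams
  owner: Record<string, string>, // module -> owning team
): boolean {
  // Treat channels as undirected by storing both orientations.
  const channels = new Set(
    teamChannels.flatMap(([a, b]) => [`${a}|${b}`, `${b}|${a}`]),
  );
  return moduleDeps.every(([m1, m2]) => {
    const [t1, t2] = [owner[m1], owner[m2]];
    return t1 === t2 || channels.has(`${t1}|${t2}`);
  });
}

// Example: parser -> codegen is fine because one team owns both;
// codegen -> runtime is fine because the two teams share a channel.
respectsConway(
  [["parser", "codegen"], ["codegen", "runtime"]],
  [["compilers", "platform"]],
  { parser: "compilers", codegen: "compilers", runtime: "platform" },
);
```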

The second is Steve Yegge’s rant on platforms. I don’t have time to reread the whole thing now, but I seem to recall it essentially arguing for limiting the communication channels between aspects of Amazon’s products to be only those available to consumers.

I think the main idea here was that it strengthens the communication channels available to non-Amazon consumers, who are also part of the graph of interfaces. Another idea, I think, is to try and scale the attention that happens inside automation, rather than people, so that you can have more “slime mould body” over the organisation, and not just thin connections.

2 comments

This discussion is an excellent instance of a pattern I see often, which I should write a post on at some point.

  • Person 1: Seems like the only way to <do useful thing> is carve up the problem along low-dimensional interfaces.
  • Person 2: But in practice, when people try to do that, they pick carvings which don't work, and then try to shoehorn things into their carvings, and then everything is terrible.
  • Resolution (which most such discussions don't actually reach, good job guys): You Don't Get To Choose The Ontology. The low-dimensional interfaces are determined by the problem domain; if at some point someone "picks" a carving, then they've shot themselves in the foot. Also, it takes effort to discover the carvings of a problem domain.

Another mildly-hot-take example: the bitter lesson. The way I view the bitter lesson is:

  • A bunch of AI researchers tried to hand-code ontologies. They mostly picked how to carve things, and didn't do the work to discover natural carvings.
  • That failed. Eventually some folks figured out how to do brute-force optimization in such a way that the optimized systems would "discover" natural ontologies for themselves, but not in a way where the natural ontologies are made externally-visible to humans (alas).
  • The ideal path would be to figure out how to make the natural ontological divisions visible to humans.

(I think most people today interpret the bitter lesson as something like "brute force scaling beats clever design", whereas I think the original essay reads closer to my interpretation above, and I think the history of ML also better supports my interpretation above.)

For another class of examples, I'd point to the whole "high modernism" thing.

I revisited the Bitter Lesson essay to see if it indeed matches your interpretation. I think it basically does, up to some uncertainty about whether "ontologies" and "approximations" are quite the same kind of thing.

The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.