Content note: This is a collection/expansion of stuff I've previously posted about elsewhere. I've gathered it here because it's semi-related to Eliezer's recent posts. It's not meant to be a response to the "inadequacy" toolbox or a claim to ownership of any particular idea, but only one more perspective people may find useful as they're thinking about these things.
Economics has a rich language for talking about market failures. Situations involving externalities, asymmetric information, public goods, or principal-agent problems all leave room for improvement over what the market delivers on its own. In practice these situations haven't proven fully tractable, but they're broadly recognized for what they are. We can at least talk theoretically about what's wrong and how to fix it in a market-failure framework, and sometimes we can use the question "how does this deviate from the idealized mechanism (which we believe brings about the best outcomes)?" to guide effective interventions. But not all failures are market failures—not even all failures concerning the allocation of goods and services, broadly construed.
Within organizations, we may not expect or want allocation to be governed by price-market mechanisms. For example, firms should do things in-house when external transaction costs are high and their internal hierarchy can coordinate the work more cheaply. "Organizational failure" can be a framework for thinking about when that goes wrong because of poor assumptions about the "ideality" of the organization.
And when a system necessarily involves non-hierarchical social (non-market) relationships, especially if there's a continuous search for information and a high need for trust, you want to think about what networks should be doing but aren't.
From The Anatomy of Network Failure (Schrank & Whitford 2011):
The authors continue:
When do you want to think in terms of market failures? When the conditions for market governance are closer to ideal:
In terms of organizations?
And networks:
Let's try an example: imagine we're researchers in different branches of a fast-moving field. (For simplicity, assume we're all aiming at the same common good.) Some possible problems:
All the information, means, and will are out there—someone with a god's-eye view or a mind-reading librarian could make things better for everyone by making the right introductions or linking the right papers. An ideal network, in this case, is roughly the one that solves all these problems as efficiently as possible: everyone sees all the information they should, to the depth that's worth it; they take it just as seriously as they should; they know the people they should know and trust the people they should trust, all at as small a cost as possible to the people sharing and qualifying their claims. You can't distribute information or manage trust any better without the added overhead outweighing the gains. But the obstacles are very fiddly.
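(If it helps to make "ideal network" a bit more concrete, here's a toy formalization. This is entirely my own sketch, not anything from Schrank & Whitford, and every number in it is made up: treat the field as a set of possible communication channels between researchers, give each channel some value to the field and a fixed maintenance burden, and call the network "ideal" when no other set of channels does better on value net of burden. The "failure" is then the gap between the channels we actually maintain and that ideal.)

```python
import itertools

# Hypothetical channel values: value[(i, j)] is how much the field gains if
# researcher i keeps researcher j up to date. All numbers are invented.
value = {
    ("A", "B"): 5, ("A", "C"): 1, ("A", "D"): 2,
    ("B", "A"): 1, ("B", "C"): 4, ("B", "D"): 0,
    ("C", "A"): 3, ("C", "B"): 2, ("C", "D"): 6,
    ("D", "A"): 0, ("D", "B"): 1, ("D", "C"): 1,
}
cost_per_channel = 2.5  # burden of maintaining a working, trusted channel

def net_value(channels):
    """Total information value minus total communication burden."""
    return sum(value[c] for c in channels) - cost_per_channel * len(channels)

# Brute-force the "ideal" network: the subset of channels with the best net value.
all_channels = list(value)
ideal = max(
    (frozenset(subset)
     for r in range(len(all_channels) + 1)
     for subset in itertools.combinations(all_channels, r)),
    key=net_value,
)

current = frozenset({("A", "B"), ("A", "C")})  # who actually talks today
print("current net value:", net_value(current))
print("ideal net value:  ", net_value(ideal))
print("channels the ideal network adds: ", sorted(ideal - current))
print("channels the ideal network drops:", sorted(current - ideal))
```

Real fields obviously don't come with clean per-channel values like this; the point is only that "ideal" means "no better trade-off between information shared and burden imposed," and a network failure is a large gap from that.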
Existing social institutions get us part of the way: academic journals, peer review, and a broad system of "prestige" let us share certain kinds of information with a certain amount of confidence that it's correct and an uncertain signal of attention-worthiness. But since our field is moving so fast, we have a lot of tacit knowledge and plenty of unknown unknowns. (Fast-moving isn't really necessary for this.) Very little has been accessibly codified, so our work isn't much good to outsiders unless they have inroads to our network. And it's hard to establish trust, especially about less formalized stuff, in such a mess.
As with a market failure, a network failure can be a good target for centralized intervention. Especially when it's too expensive to meddle with the market, you might want to consider making the network more efficient. Some science funders have increasingly come to see themselves this way as they find themselves without the money to solve market failures by just buying more research. (I'm getting this in large part from accounts of the microelectronics industry and of some nanotechnology research, including, among others, The Long Arm of Moore's Law by Cyrus Mody; these accounts mesh well with my experience at the research edge of those fields.) Attempting to address network failures isn't always effective, but it tends to align better with a sociologically realistic model of how researchers work than the market-intervention perspective does.
Some interventions will act directly on the incentives that keep individuals from communicating ideally, but sometimes it's easier to act on network structure and function itself. So, for example, new conferences and professional organizations, spurred by those with a broader perspective, have been effective at getting the right people talking to one another. "Para-scientific media" (short of journals but beyond pop science, more like "trade magazines", e.g. Physics Today) let researchers know broadly what others are doing even if they wouldn't normally read one another's papers. Gordon Research Conferences are "off the record" to encourage more open discussion, including the sharing of unpublished work. Individual program officers can also have the perspective and connections to be influential here. Flow between academia and industry is an important lever. Various open-science and alt-metrics initiatives can be viewed in this light too, perhaps acting more directly on incentives and having more of a market flavor.
(In a sense, the framework conflates problems of information distribution with problems of social relationships by treating information as social. This is intentional, though more applicable in some places than others. When we share information, it's tagged, implicitly or otherwise, with things like how reliable [the sharer thinks] it is, how much attention it should be given, and what one is meant to do with it. These are social qualities, and a network that fails to measure these things out appropriately is failing as a network—at least as badly as one that simply doesn't spread enough information around.)
Is this just a framework for analyzing failures after the fact, or can it be used to generate new ideas or interventions? I guess it depends on what you're trying to apply it to. The more heavily your system relies on distributing information and establishing trust in ways that can't be gotten from prices/markets or hierarchy/authority, the more fruitful this perspective should be. There's already some network, formal or otherwise, governing your system, but it's not ideal; what deviations from [or hidden assumptions about] the ideality of that network are bottlenecking its efficiency?
If there's interest, I have a couple more concrete analyses in mind, but my motivation to write this has stalled, and it might be better to get some feedback now anyway. (Or to hear examples of your own, or examples where this is all useless.)