Thank you for this post, Kyoung-Cheol. I like how you have used DeepMind's recent work to motivate the discussion of "authority as a consequence of hierarchy" and the idea that "processing information to handle complexity requires speciality, which implies hierarchy."
I think there is some interesting work on this forum that captures these same types of ideas, sometimes with similar language, and sometimes with slightly different language.
In particular, you may find the recent post from Andrew Critch, "Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI," to be sympathetic to core pieces of your argument here.
It also looks like Kaj Sotala is having some similar thoughts on adjustments to game theory approaches that I think you would find interesting.
I also wanted to share an idea that remains incomplete: I think there is an interesting connection between Kaj Sotala's discussion of non-agent and multi-agent models of the mind and Andrew Critch's robust agent-agnostic processes that ties into your ideas here and the general points I make in the IBS post.
Okay, finally, I had been looking for the most succinct quote from Herbert Simon's description of complexity and I found it. At some point, I plan to elaborate more on how this connects to control challenges more generally as well, but I'd say that we would both likely agree with Simon's central claim in the final chapter of The Sciences of the Artificial:
"Thus my central theme is that complexity frequently takes the form of hierarchy and that hierarchic systems have some common properties independent of their specific content. Hierarchy, I shall argue, is one of the central structural schemes that the architect of complexity uses."
Glad you decided to join the conversation here. There are lots of fascinating conversations here that relate directly to the topics we discuss together.
Thank you very much for your valuable comments, Dr. Bullock!
I agree that exploring various viewpoints and finding similarities and discrepancies can be crucial for advancing the philosophy of science and improving our understanding of complex systems like AI and organizations. Your approach of considering the development and utilization of AI within the configurations of societal work, situated somewhere between full centralization and game-theoretic situations, is a nuanced and well-considered perspective. It acknowledges the complexity and discretion that hierarchical systems can have while incorporating game theory's relevance to multi-agent systems.
Considering organizational frameworks in the context of AI-human interactions is essential, as it sheds light on how we can effectively work with AI agents in such systems. The concept of authority, being a cognitive phenomenon for humans, is indeed distinct from how machines perceive and handle information to process complexity.
I share your belief that organization theories have significant potential in contributing to these discussions and becoming crucial for governance experts. It's exciting to see how interdisciplinary perspectives can enrich our understanding of AI development and utilization. I look forward to further engaging with your ideas and seeing more valuable contributions from you in the future!
Application of Game Theory to AI Development and Utilization
A recent research post titled "Game Theory as an Engine for Large-Scale Data Analysis" by a Google DeepMind team (McWilliams et al. 2021) provides a tremendously helpful viewpoint for thinking about AI development. By adopting a multi-agent approach, it reveals significant connections between theories in AI and theories in the social sciences. For modeling the operations of agents, the application of game theory from economics works well conceptually in this case. However, the game theory application has limitations from the perspective of organization studies and public administration. It therefore becomes necessary to additionally consider organizational perspectives on decision-making and execution involving two or more individual agents striving to achieve a shared goal in systematic ways.
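To make the game-theoretic framing concrete, here is a minimal NumPy sketch of an EigenGame-style update in the spirit of McWilliams et al. (2021): each "player" owns one unit vector and climbs its own utility, penalized only for overlap with the players ranked above it. The function name, step size, and iteration count are my own illustrative choices, not the team's implementation.

```python
import numpy as np

def eigengame(X, k, steps=2000, lr=0.1, seed=0):
    """Approximate the top-k eigenvectors of M = X^T X as a k-player game."""
    rng = np.random.default_rng(seed)
    M = X.T @ X
    V = rng.normal(size=(X.shape[1], k))
    V /= np.linalg.norm(V, axis=0)           # every player starts on the unit sphere
    for _ in range(steps):
        for i in range(k):                   # higher-ranked "parents" update first
            v = V[:, i]
            Mv = M @ v
            for j in range(i):               # penalize alignment with each parent j < i
                w = V[:, j]
                Mw = M @ w
                Mv = Mv - ((v @ Mw) / (w @ Mw)) * Mw
            grad = 2 * Mv
            grad -= (grad @ v) * v           # project onto the sphere's tangent space
            v = v + lr * grad                # gradient-ascent step on player i's utility
            V[:, i] = v / np.linalg.norm(v)  # retract back to the unit sphere
    return V

# Usage: the columns of V approximate the principal components of X, in rank order.
X = np.random.default_rng(1).normal(size=(200, 10))
V = eigengame(X, k=3)
```

Note that each player's penalty references only the players ranked above it; this one-directional dependence is what makes the hierarchical reading discussed later possible.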
Science of Bureaucracy with Integration of Novel Decision-Making Agent
Bureaucracy as Systematization of Hierarchical and Horizontal Flows of Authority in Achieving Goals
To be specific, the term "systematic ways" involves not only 'equally' horizontal relationships among agents but also hierarchical ones. In sizable organizational contexts dealing with complex problems, organizational form and function tend to be bureaucratic in order to achieve efficiency. As Herbert Simon (1946) acknowledged, in connection with Max Weber's core concept of bureaucracy, two scientific principles are recognized: 1) the hierarchical transfer of rational authority, spread out across each rank horizontally, and 2) job specialization (Waters & Waters 2015). These principles are considered universally applicable to some degree, in interaction with the environment. The performance of specialized jobs and the connection of loci in accomplishing tasks are fundamentally grounded in the use of rules.
Remaining Core Aspects of Bureaucracy with the Intervention of AI
Relatedly, it becomes crucial to understand how organizational structure and functioning involving humans could be differentiated, or, going further, whether organizations can be sustained at all in the face of the exponentially increasing capabilities of a single AI agent leading to superintelligence and the potential emergence of Artificial General Intelligence (AGI). In the context of AI intervention in bureaucracy, Bullock and Kim (2020) argue that the specialization of jobs among humans and AIs in a multi-agent system can be viewed in terms of comparative advantage, given that AI may have limited capabilities for decision-making (bounded rationality). Humans could therefore maintain comparative superiority in certain task accomplishments, particularly in areas with different complexities and uncertainties of task and environment (Bullock 2019).
Ultimately, unless a single entity can process every piece of information in a perfectly complete and simultaneous manner, organizations will need to be maintained (Bullock & Kim 2020). Under current physical laws, a single entity is limited in efficiently solving all complex and imminent problems and achieving goals; collaboration with other entities therefore becomes necessary. Each entity takes on different specialized tasks, eventually controlled to fit the achievement of shared goals through the dynamics of authority.
Moving forward, the systematization of agents' work realizes a higher-level decision-making capability based on collective intelligence within organizational phenomena. Each locus in the organization needs to communicate with the others, and systematization channels that communication through vertical and horizontal flows of authority, coordinated in principle by the functioning of managerial positions. All of these components together form bureaucracy as an integrated structure and functioning. Meanwhile, the initiation and modification of task assignments may be settled through bottom-up and/or top-down approaches, depending on internal and external circumstances.
Grounded in the nature of bureaucratization, the process of top authorities setting goals and systematically designating tasks to sub-levels in an organized manner would persist even with substantive intervention of AI (Bullock & Kim 2020). To that extent, the issue of control at the very top position, or at critical loci for task accomplishment, which can have significant impacts on administration and policy implementation (e.g., securing and operating the unmanned missile systems already utilized by the US military), becomes crucial (Kim & Bullock 2021). This becomes even more critical with the increased system-level integration of organizations under AI intervention (Bullock et al. 2020; Meijer et al. 2021), as exemplified by the Joint All-Domain Command and Control (JADC2) system of the US Department of Defense (DOD), which incorporates ground-, air- (space-), and sea-based operation systems with AI in specialized task positions, even in controlling functions, as observed in the plan for the US Navy's submarine operation system using AI.
In the US context, with the exponential development of natural-language machine learning technology such as GPT-3, and with the US General Services Administration (GSA) leading the government's AI utilization strategy, we are witnessing a more fundamental transformation overall. Even so, the bureaucratic mechanism is highly likely to persist, as argued, and thus we need to consider it seriously in the development and utilization of AI as an agent.
Critical Viewpoint on Game Theory from Organization Theories (The Matter of Hierarchical Relationship)
A Shortcoming of the Highly Valuable Application of Game Theory to AI Development and Usage
Returning to the research by the DeepMind team (McWilliams et al. 2021), the issue at hand is that game theory does not fully capture a characteristic feature of organizations: hierarchical relationships and control. That is, it is fundamentally limited in reflecting the hierarchical and systematically unequal relationships among agents, which restricts its broader applicability.
To note, the Institutional Analysis and Development (IAD) framework, built on the work of Elinor Ostrom, offers a fundamental viewpoint for investigating the interactional behavior of institutional actors dealing with rules (and norms) in institutional environments. However, while it recognizes the fundamentality of authority, it is limited in providing a specific blueprint of how collective action arises in centralized governments and organizations at the meso level, beyond its excellent applicability to community-level settings. This partly led to the development of governance theories, which are somewhat distinct (Hill & Hupe 2014).
In the DeepMind research (McWilliams et al. 2021), we gain valuable insight into the phenomena of hierarchical relationships by observing that agents' performance can be maximized through a perpendicular vector relationship. The hierarchical positions of functional (sub-)agents and their interactions are critical for the research to achieve its best efficiency. Nonetheless, in principle, the suggested simultaneous revision of rules among equal agents may not always be available in an organizationally hierarchical context of authority, while the system is maintained in that mode. Importantly, since an AI system is inherently a computational machine, I consider that its fundamental operational characteristics generally remain, whether it functions as a sub-level or hyper-level agent at varying scopes.
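One way to see this hierarchy concretely, under my reading of McWilliams et al. (2021), is that the utility structure itself is asymmetric: player i is penalized only for overlap with the players ranked above it, so a move by a lower-ranked player never affects a higher-ranked player's payoff. A toy sketch (my framing, not the paper's code):

```python
import numpy as np

def utilities(V, M):
    """EigenGame-style utilities; each player's penalties reference parents only."""
    k = V.shape[1]
    u = np.zeros(k)
    for i in range(k):
        v = V[:, i]
        u[i] = v @ M @ v                  # the player's raw "productivity"
        for j in range(i):                # only higher-ranked players penalize i
            w = V[:, j]
            u[i] -= (v @ M @ w) ** 2 / (w @ M @ w)
    return u
```

Perturbing the last column of V leaves u[0] untouched, while perturbing the first column changes every later player's utility: influence flows strictly downward, even though the vectors can be evaluated and updated simultaneously.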
Meanwhile, the DeepMind research suggests another interesting view: a continuum from Utility (Optimization), through Multiple Utilities (Multi-agent/Game Theory), to Utility-Free (Hebbian/Dynamical Systems) (McWilliams et al. 2021). In the context of organization studies, these can be similarly understood as perfect centralization (Weber's ideal-type bureaucracy) (Bullock 2021), the activation of discretion (realistic bureaucracy), and perfect decentralization (egalitarianism), respectively.
That said, the individual-level approach and the system-level perspective do not match. By the team's finding, perpendicular relationships between agents function more efficiently, and multi-agent/game theory can plausibly be the most valid conceptual approach for AI development. Critically, however, the game theory approach may be restricted in fully reflecting the characteristic feature of hierarchical relationships.
Meanwhile, it is an interesting question how the application of game theory itself might differ between humans, AIs, and their co-working (Bullock & Kim 2020). Interactions between agents further involve not only factual issues but also ethical ones, as manifested or compromised in, for example, the matter of administrative evil, which is substantially critical for humans (Young et al. 2021; Kim & Bullock 2021).
The Matter of Discretion for Agents (Existing Everywhere and Always in Organizational Contexts)
Another meaningful point to consider when reflecting organization theory on the development and application of AI is discretion. Many AI scholars perceive AI as a completely value-neutral computer by nature, assuming that 'subjective' discretion is a concern for human bureaucrats rather than for AI. However, discretion can apply to every agent, whether human or AI, in organizational contexts, unless the agent is completely controlled by the top authority or a single entity performs all necessary tasks; that condition may not be feasible under current physical laws and would violate the foundational principle of organizational establishment (Bullock & Kim 2020).
Even at the contemporary technological level, the rules in use within any system (including laws, as more cohesive conjunctions through the rule of law, reaching up to the constitution, especially in the public sector) cannot specify all required processes for every situation, necessitating that individual agents make their 'own' decisions at each point. Moreover, due to constraints of time and resources, certain positions may hold ultimate authority in lieu of the top position through internal delegation.
Along with the matter of hierarchical authority, these aspects add significant nuance to the application of game theory in AI development and utilization. Discretion comes into play at every level of the hierarchy, entailing the loss of perfect control over agents. When reflecting the unequal vertical relationships among agents under these conditions, the discretionary capabilities of agents should be taken into consideration as well.
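As a purely illustrative toy model (my own construction, not drawn from the cited papers), discretion can be read as the residual decision space left over when explicit rules are silent:

```python
from typing import Callable, Optional

Rule = Callable[[dict], Optional[str]]    # returns an action, or None if silent

def decide(case: dict, rules: list[Rule],
           discretion: Callable[[dict], str]) -> str:
    """Apply rules in priority order; fall back to discretion when none bind."""
    for rule in rules:
        action = rule(case)
        if action is not None:
            return action                 # the rule fully determines the action
    return discretion(case)               # the residual, 'own' decision

# Hypothetical example: a caseworker agent (human or AI) with two explicit rules.
rules = [
    lambda c: "deny" if c.get("income", 0) > 50_000 else None,
    lambda c: "approve" if c.get("documents_complete") else None,
]
print(decide({"income": 20_000}, rules, discretion=lambda c: "request_review"))
# -> "request_review": both rules are silent, so the outcome is discretionary.
```

However fine-grained the rule list becomes, any case it does not cover lands in the discretion branch, which is the point above: discretion exists wherever rules underdetermine action, for human and AI agents alike.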
Conclusion: More Nuanced Conditioning of Game Theory for Better Applicability in AI Development and Utilization in Reality
Assuming that the control condition is secured, the most direct and critical implication of the DeepMind team's research (McWilliams et al. 2021), based on our discussion, is that agents in a bureaucratic structure cannot always freely adjust their rules. Meanwhile, at the scope of individual interactions, it was argued that the perpendicular relationship between agents yields the most efficient functioning. Hence, substantial puzzles remain in integrating these individual-level phenomena into the system-level phenomena. The matter of discretion may offer a hint, but much remains to be explored.
After all, I would like to suggest incorporating nuanced hierarchical characteristics into the conceptualization of AI development and its actual application. I acknowledge that this may be challenging and would likely require a fundamental theoretical breakthrough. However, applying organizational and political theories across various contexts could prove immensely valuable, extending even to AI research, which holds crucial implications for society.
References (including ones for which a link is not available)
Bullock, J. B. (2021). Controlling Intelligent Agents The Only Way We Know How: Ideal Bureaucratic Structure (IBS). LessWrong. https://www.lesswrong.com/posts/iekoEYDLgC7efzbBv/controlling-intelligent-agents-the-only-way-we-know-how
Bullock, J. B. (2019). Artificial Intelligence, Discretion, and Bureaucracy. The American Review of Public Administration, 49(7), 751–761. https://doi.org/10.1177/0275074019856123
Bullock, J. B., & Kim, K. (2020). Creation of Artificial Bureaucrats. Proceedings of European Conference on the Impact of Artificial Intelligence and Robotics. https://www.researchgate.net/publication/349776088_Creation_of_Artificial_Bureaucrats
Bullock, J. B., Young, M. M., & Wang, Y. F. (2020). Artificial intelligence, bureaucratic form, and discretion in public service. Information Polity, 25(4), 491–506. https://doi.org/10.3233/IP-200223
Critch, A. (2021). What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs). LessWrong. https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic
Hill, M., & Hupe, P. (2014). Implementing Public Policy: An Introduction to the Study of Operational Governance (Third edition). SAGE Publications Ltd.
Kim, K., & Bullock, J. B. (2021). Machine Intelligence, Bureaucracy, and Human Control. Perspectives on Public Management and Governance, Special Issue: Reappraising Bureaucracy in the 21st Century (accepted for the final round).
McWilliams, B., Gemp, I., & Vernade, C. (2021). Game theory as an engine for large-scale data analysis: EigenGame maps out a new approach to solve fundamental ML problems. DeepMind. https://deepmind.com/blog/article/EigenGame
Meijer, A., Lorenz, L., & Wessels, M. (2021). Algorithmization of Bureaucratic Organizations: Using a Practice Lens to Study How Context Shapes Predictive Policing Systems. Public Administration Review, puar.13391. https://doi.org/10.1111/puar.13391
Simon, H. A. (1946). The Proverbs of Administration. Reprinted in Classics of Public Administration (7th ed.). Wadsworth/Cengage Learning.
Waters, T., & Waters, D. (Eds.). (2015). Weber’s Rationalism and Modern Society. Palgrave Macmillan US. https://doi.org/10.1057/9781137365866
Young, M. M., Himmelreich, J., Bullock, J. B., & Kim, K. (2021). Artificial Intelligence and Administrative Evil. Perspectives on Public Management and Governance, gvab006. https://doi.org/10.1093/ppmgov/gvab006