This article is part of a series on Decision-Making under Deep Uncertainty (DMDU). Building on an introduction to complexity modeling and DMDU, this article focuses on applying the methodology to the field of AI governance. If you haven't read the previous article, I strongly recommend pausing here and going through the introduction first; many aspects will only become clear if we share the same basic terminology. I might be biased, but I think it's worth it.
Why this article?
I wrote this article for two reasons. The first is that various people have approached me asking how I envision applying the introduced methodology to particular EA cause areas. People want to see details, and understandably so. In my last article, I mentioned a number of EA cause areas that I consider well-suited for systems modeling and decision-making under deep uncertainty (DMDU)[1]. This article is intended as a (partial) answer for these people, providing details on one particular cause area. The second reason is my own specialization in AI governance and my wish for people to get started working on this. I want to show a more practical and concrete side of the methodology. I also consider this article a handy reference that I can share with interested parties as a coarse research proposal.
Summary
Basics of DMDU
If it has been some time since you read the previous article, or you are already somewhat familiar with systems modeling and/or DMDU, here is a short summary of the previous article:
Now that we are all caught up, let's jump right into it.
Why apply DMDU to AI governance?
Most people familiar with Effective Altruism are also familiar with the arguments for why mitigating AI risk deserves particular attention, so we will skip listing the arguments for how advanced AI systems could pose great risks to society now and in the future. When I speak of mitigating risks from AI, I refer to transformative AI (TAI), artificial general intelligence (AGI), and artificial superintelligence (ASI). At this point, I assume that the reader is on board and that we agree on AI governance being an important cause area. But now, let's untangle the intellectual ball of yarn, weaving together the threads of DMDU and AI governance in a way that illuminates why the pairing is necessary. Let's break it down into four points.
1. Dealing with a Socio-Technical System
AI governance, at its heart, is a socio-technical system. This signifies that it's not merely about the development and regulation of technological artifacts; it's a nuanced interface where technology and society meet, intertwine, and co-evolve. It involves the navigation of societal norms and values, human behaviors and decisions, regulatory and policy environments, and the whirlwind of technological advancements. This results in a system that's akin to a vast, interconnected network, buzzing with activity and interplay among numerous components.
Each of these components brings its own layer of complexity. For instance, the human element introduces factors like cognitive biases, ethical considerations, and unpredictable behaviors, making it challenging to model and predict outcomes. Technological advancements, meanwhile, have their own dynamism, often progressing at a pace that leaves legal and regulatory frameworks struggling to catch up. These complexities aren't simply additive; they interact in ways that amplify the overall uncertainty and dynamism of the system.
DMDU is particularly equipped to grapple with this complexity. It recognizes that simplistic, deterministic models cannot accurately capture the dynamism of socio-technical systems. DMDU instead encourages a shift in mindset, embracing uncertainty and complexity rather than trying to eliminate them. By focusing on a wide array of plausible futures, it acknowledges that there isn't a single "most likely" future to plan for, but rather a range of possibilities that we must prepare for.
Moreover, DMDU embraces the systems thinking paradigm, understanding that the system's components are not isolated but interconnected in intricate ways. It acknowledges the cascading effects a change in one part of the system could have on others. This holistic view aligns perfectly with the complexity of AI governance, allowing for the development of more comprehensive and nuanced strategies.
Finally, DMDU's iterative nature also mirrors the dynamic nature of socio-technical systems. It recognizes that decision-making isn't a one-time event but a continuous process. As new information becomes available and the system evolves, strategies are revisited and adjusted, promoting a cycle of learning and adaptation.
In essence, DMDU is like a skilled navigator for the labyrinthine realm of AI governance, providing us with the tools and mindset to delve into its complexity, understand its intricacies, and navigate its uncertainties. It encourages us to look at the socio-technical system of AI governance not as an intimidating tangle, but as a fascinating puzzle, rich with challenges and ripe with opportunities for effective, inclusive, and adaptable governance strategies.
2. Valuing the Multiplicity of Stakeholder Perspectives
In the sphere of AI governance, we encounter an expansive array of stakeholders. Each brings to the table their own set of values, perspectives, and objectives, much like individual musicians in an orchestra, each playing their own instrument yet contributing to a harmonious symphony. These stakeholders range from government bodies and policy-makers, private corporations and AI developers, to individual users and civil society organizations. Each group's interests, concerns, and visions for AI development and regulation differ, contributing to a rich tapestry of diverse values.
For instance, a tech company might prioritize innovation and profitability, while a government agency could be more concerned with maintaining security, privacy, and the public good. Meanwhile, civil society organizations might stress human rights and social justice issues related to AI. Individual users, on the other hand, could have myriad concerns, from ease of use and affordability to ethical considerations and privacy protections. This diversity of values is not a complication to be minimized but a resource to be embraced. It brings to light a wider range of considerations and potential outcomes that may otherwise be overlooked. However, navigating this diversity and reaching decisions that respect and incorporate this plethora of perspectives is no small feat.
This is where DMDU plays an instrumental role. It offers a methodology designed to address situations with multiple objectives and conflicting interests. DMDU encourages the consideration of a broad spectrum of perspectives and criteria in decision-making processes, thereby ensuring that various stakeholder values are included and assessed. One way it does this is through scenario planning, which allows the exploration of a variety of future states, each influenced by different stakeholder values and objectives. This exploration enables decision-makers to assess how different courses of action could impact diverse interests and fosters more comprehensive, balanced policy outcomes.
Furthermore, DMDU promotes an iterative decision-making process. This is especially pertinent in the context of value diversity as it provides opportunities to reassess and refine decisions as the values and objectives of stakeholders evolve over time. This ongoing engagement facilitates continued dialogue among diverse stakeholders, fostering a sense of inclusivity and mutual respect.
In essence, by utilizing DMDU methodology, we are not merely acknowledging the existence of diverse values in AI governance but actively inviting them to shape and influence the decision-making process. It's like conducting an orchestra with a wide range of instruments, ensuring that each has its solo, and all contribute to the final symphony. The resulting policies are, therefore, more likely to be nuanced, balanced, and representative of the diverse interests inherent in AI governance.
3. Embracing Deep Uncertainty
The field of AI governance is a veritable sea of deep uncertainty. As we look out upon this vast expanse, we are faced with a multitude of unknowns. The pace and trajectory of AI advancements, societal reactions to these developments, the impact of AI on various sectors of society, and the potential emergence of unforeseen risks – all these elements contribute to a fog of uncertainty.
In many ways, AI governance is like exploring uncharted waters. We can forecast trends based on current knowledge and patterns, but there remains a substantial range of unknown factors. The behavior of complex socio-technical systems like AI is difficult to predict precisely due to their inherent dynamism and the interplay of multiple variables. In this context, conventional decision-making strategies, which often rely on predictions and probabilities, fall short.
This is where DMDU shines as an approach tailored to address situations of deep uncertainty. Instead of attempting to predict the future, DMDU encourages the exploration of a wide range of plausible futures. By considering various scenarios, including those at the extreme ends of possibility, DMDU helps decision-makers understand the breadth of potential outcomes and the variety of pathways that could lead to them.
This scenario-based approach encourages the development of flexible and adaptive strategies. It acknowledges that given the uncertainty inherent in AI governance, it is crucial to have plans that can be modified based on changing circumstances and new information. This resilience is vital for navigating an uncertain landscape, much like a ship that can adjust its course based on shifting winds and currents.
In sum, the application of DMDU in AI governance acknowledges the presence of deep uncertainty, embraces it, and uses it as a guide for informed decision-making. It equips us with a compass and map, not to predict the future, but to understand the potential landscapes that may emerge and to prepare for a diverse range of them. By doing so, it turns the challenge of uncertainty into an opportunity for resilience and adaptability, offering us glimpses through the thick fog.
4. Ensuring Robustness
AI technology is not static; it evolves, and it does so rapidly. It's akin to a river that constantly changes its course, with new tributaries emerging and old ones disappearing over time. As AI continues to advance and permeate various sectors of society, the landscape of AI governance will similarly need to adapt and evolve. This creates an imperative for robustness in policy solutions – policies that can withstand the test of time and adapt to changing circumstances.
Robustness in this context extends beyond mere resilience. It's not just about weathering the storm, but also about being able to sail through it effectively, adjusting the sails as needed. Robust policies should not only resist shocks and disruptions but also adapt and evolve with them. They should provide a solid foundation, while also maintaining enough flexibility to adapt to the dynamic AI landscape.
DMDU offers a methodology that directly supports the development of such robust policy solutions. It focuses on the identification of strategies that perform well under a wide variety of future scenarios, rather than those optimized for a single, predicted future. This breadth of consideration promotes the creation of policies that can handle a range of potential scenarios, effectively enhancing their robustness.
DMDU also supports adaptive policy-making through its iterative decision-making process. It acknowledges that as the future unfolds and new information becomes available, strategies may need to be reassessed and adjusted. This iterative process mirrors the dynamism of the AI field, ensuring that policies remain relevant and effective over time.
Moreover, DMDU's inclusive approach to decision-making, which considers a wide range of stakeholder perspectives and values, further contributes to policy robustness. By encompassing diverse perspectives, it enables the development of policies that are not only resilient but also respectful of different values and interests. This inclusivity aids in building policy solutions that are more likely to garner widespread support, further enhancing their robustness.
In essence, DMDU equips us with the tools and mindset to develop robust policy solutions for AI governance. Like a master architect, it guides us in designing structures that are sturdy yet adaptable, with a strong foundation and the flexibility to accommodate change. Through the lens of DMDU, we can build policy solutions that stand firm in the face of uncertainty and change, while also evolving alongside the dynamic landscape of AI.
To sum up, incorporating DMDU methodology into AI governance is a bit like giving a mountaineer a reliable compass, a map, and a sturdy pair of boots. It equips us to traverse complex terrain, cater to varied travelers, weather unexpected storms, and reach the peak with policies that are not only robust but also responsive to the ever-changing AI environment.
How to apply DMDU to AI Governance?
Having taken an expansive view of the myriad reasons to pair DMDU with AI governance – from dealing with its complexity, valuing the multiplicity of stakeholder perspectives, and embracing deep uncertainty, to ensuring robustness in our policy responses – we find ourselves standing at the foot of a significant question. As described above, this approach seems to provide us with the essential tools for our journey, much like a mountaineer's kit. These resources position us to expertly navigate the convoluted socio-technical terrain, accommodate a wide range of traveling companions with their diverse priorities, withstand unforeseeable tempests, and ultimately arrive at the peak armed with policies that are both robust and responsive to the ever-evolving AI environment.
But now that we (hopefully) appreciate the necessity of this methodology and understand its benefits, the path forward leads us to a crucial juncture. This exploration prompts an essential and practical inquiry: "How do we apply DMDU to AI Governance?" Now is the time to delve into the concrete ways we might integrate this methodology into the practice of governing AI, turning these theoretical insights into actionable strategies.
How do we best approach this question? I suggest we consider the following three steps:
1. Model Building
The process of applying DMDU to AI governance necessitates a detailed and nuanced approach, starting with a clear identification of the problem we wish to address. AI governance, in its essence, is an intricate socio-technical system with numerous facets and challenges. As discussed in the previous article of this sequence, an agent-based model is an excellent choice of modeling paradigm.
Given the heterogeneous nature of the agents involved in AI governance – from governments, private corporations, and civil society organizations to individual users – an agent-based modeling (ABM) approach can serve as an ideal tool for this task. As described in the previous article of this sequence, ABM is a computational method that enables a collection of autonomous entities or "agents" to interact within a defined environment. Each agent can represent a different stakeholder, equipped with its own set of rules, behaviors, and objectives. By setting up the interactions among these agents, we can simulate various scenarios and observe emergent behavior.

The strength of ABM lies in its ability to represent diverse and complex interactions within a system. In the context of AI governance, it can help illuminate how different stakeholders' behaviors and decisions may interact and influence the system's overall dynamics. This allows us to simulate a multitude of scenarios and observe the consequences, providing us with valuable insights into potential policy impacts and helping us design robust, adaptable strategies.
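To make this less abstract, here is a deliberately minimal sketch of what such a model could look like in plain Python. Everything in it – the two agent types, their behavior rules, and all numerical constants – is an illustrative assumption rather than a researched calibration; a serious model would need far richer agents and empirically grounded relations.

```python
import random

class Developer:
    """An AI lab that trades off capability growth against safety effort."""
    def __init__(self, risk_appetite):
        self.capability = 1.0
        self.risk_appetite = risk_appetite  # 0 = maximally cautious, 1 = reckless

    def step(self, stringency):
        # Regulation pushes safety effort up; risk appetite pulls it down.
        safety_effort = max(0.0, min(1.0, stringency + 0.5 * (1 - self.risk_appetite)))
        # Safety effort slows capability growth slightly.
        self.capability *= 1 + 0.10 * (1 - 0.5 * safety_effort)
        # Chance of a visible incident grows with capability, shrinks with safety.
        return random.random() < 0.05 * self.capability * (1 - safety_effort)

class Regulator:
    """A single regulator that adapts stringency to observed incidents."""
    def __init__(self):
        self.stringency = 0.1

    def step(self, incidents):
        self.stringency = min(1.0, self.stringency + 0.05 * incidents)

def run_model(n_developers=10, horizon=50, seed=0):
    random.seed(seed)
    developers = [Developer(risk_appetite=random.random()) for _ in range(n_developers)]
    regulator = Regulator()
    total_incidents = 0
    for _ in range(horizon):
        incidents = sum(dev.step(regulator.stringency) for dev in developers)
        regulator.step(incidents)
        total_incidents += incidents
    return {
        "total_incidents": total_incidents,
        "final_stringency": regulator.stringency,
        "max_capability": max(dev.capability for dev in developers),
    }

print(run_model())
```

Even this toy version exhibits the feedback loop we care about: incidents trigger stricter regulation, which dampens both capability growth and subsequent incidents.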
Creating an agent-based model (ABM) usually involves a series of methodological steps. Here's a quick overview:
5. Experimentation: Once our model is calibrated, we can start running experiments, altering parameters or rules, and observing the resulting behaviors. This can help us identify emergent phenomena, test different scenarios, and gain insights into system dynamics.
6. Model Analysis and Interpretation: Finally, we analyze the results of our simulations, interpret their meaning, and translate these insights into actionable policy recommendations or strategies.

I included steps 5 and 6 for completeness' sake. However, given that we will use DMDU on top of the ABM, our steps will diverge.
I admit that this is still rather abstract. So let's consider two examples for which we sketch the application of the aforementioned methodological steps. The two examples are the following:
Disclaimer: Keep in mind that this is a very rough and preliminary attempt of laying out what the modeling process could look like. More serious thought would need to go into fleshing out such model conceptualizations.
Example 1: Fairness in AI
Let's consider a hypothetical problem – ensuring fairness in AI systems across diverse demographic groups. Given the complexity and diversity inherent in this problem, ABM can be a particularly helpful tool to explore potential solutions.
5. Experimentation: We run simulations to explore various scenarios. For instance, what happens to fairness when regulatory bodies tighten or loosen policies? What is the impact of different fairness definitions on the experience of diverse users?
6. Model Analysis and Interpretation: We analyze the results of our simulations to understand the emergent phenomena and interpret these insights. Based on our findings, we may propose strategies or policies to improve fairness in AI systems.

Through this application of ABM, we can explore complex, diverse, and uncertain aspects of AI governance and support the design of robust, adaptable policies.
Example 2: Existential Risk of AI
Let's take a look at how to use agent-based modeling to address the existential risk posed by AI.
5. Experimentation: We run a series of experiments where we adjust parameters and observe the outcomes. For example, we might examine a worst-case scenario where the development of superintelligent AI occurs rapidly and without effective safeguards or regulations. We could also model best-case scenarios where international cooperation leads to stringent regulations and careful, safety-conscious AI development.
6. Model Analysis and Interpretation: Finally, we scrutinize the results of the experiments to identify patterns and gain insights. This might involve identifying conditions that lead to riskier outcomes or pinpointing strategies that effectively mitigate the existential risk.

By applying this methodology, we can use ABM as a tool to explore various scenarios and inform strategies that address the existential risk posed by AI. It should be noted, however, that the existential risk from AI is a challenging and multifaceted issue, and ABM is just one tool among many that can help us understand and navigate this complex landscape.
Steps 5 and 6 in both examples are just there to show you what a typical ABM process could look like. However, we would likely stop at step 4 and go down a different path – the path of DMDU, which is described in the subsequent section.
2. Embedding the Model in an Optimization Setup
Eventually, we want to use the model to identify good policy recommendations. Usually, modelers run their own experiments, play with parameters and particular actions, handcraft some scenarios, calculate expected utilities, and provide policy recommendations based on weighted averages, etc. This can be very problematic, as elaborated previously. The DMDU way offers an alternative: we can embrace uncertainty and find optimal and robust policy solutions – in a systematic way, considering tens of thousands of scenarios, using AI and ML.
In order to use the AI part of DMDU, we need to briefly recap how model information can be structured. As described in my previous article, the inputs and outputs can be structured with the XLRM framework[2].
Keep in mind that the metrics describe a set of variables that you (or rather your stakeholders) find particularly important to track. Along with the variables, we also provide the optimization direction (e.g., minimize inequality, maximize GWP, minimize the number of casualties, etc.).
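As a concrete illustration, here is how such metrics and their optimization directions could be declared with the EMA Workbench, an open-source Python toolkit commonly used for DMDU analyses. The metric names are placeholders that simply mirror the examples above.

```python
from ema_workbench import ScalarOutcome

# Performance metrics (M), each with an optimization direction.
# The names are illustrative placeholders, not a settled set.
outcomes = [
    ScalarOutcome("inequality", kind=ScalarOutcome.MINIMIZE),
    ScalarOutcome("gross_world_product", kind=ScalarOutcome.MAXIMIZE),
    ScalarOutcome("casualties", kind=ScalarOutcome.MINIMIZE),
]
```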
In order to embed the model in an optimization setup, we need to specify X, L, R, and M first. We need to provide the variables and their plausible domains. Examples will be described later. But first, I would like to talk about the optimization process and how it differs from the simulation process.
Simulation versus Optimization
In agent-based modeling, simulation and optimization operate as two distinct but interconnected components, each serving a unique role in the decision-making process. However, they differ substantially in terms of their focal point and methodological approach, particularly concerning the flow of data from inputs to outputs or vice versa.
Simulation, in the context of agent-based modeling, is typically seen as a process-driven approach where the key concern lies in the understanding and elucidation of complex system behavior. This begins with defining the inputs, a mixture of policy levers and exogenous uncertainties. By manipulating these variables and initiating the simulation, the system's states are allowed to evolve over time according to pre-established rules defined by the relations (R). The emergent outcome metrics are then observed and analyzed. The direction of interest in simulation is essentially from inputs to outputs – from the known or controllable aspects to the resultant outcomes.
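In code, the simulation direction could look like the following sketch, again assuming the EMA Workbench. The toy model function stands in for the ABM above; its relations and parameter ranges are made-up placeholders.

```python
from ema_workbench import (Model, RealParameter, ScalarOutcome,
                           perform_experiments)

def ai_governance_model(capability_growth_rate=0.1, initial_stringency=0.1):
    """Stand-in for the ABM sketched earlier; the relations are toy placeholders."""
    total_incidents = 100 * capability_growth_rate * (1 - initial_stringency)
    innovation = 10 * capability_growth_rate * (1 - 0.5 * initial_stringency)
    return {"total_incidents": total_incidents, "innovation": innovation}

model = Model("aigov", function=ai_governance_model)
model.uncertainties = [RealParameter("capability_growth_rate", 0.01, 0.5)]  # X
model.levers = [RealParameter("initial_stringency", 0.0, 1.0)]              # L
model.outcomes = [                                                          # M
    ScalarOutcome("total_incidents", kind=ScalarOutcome.MINIMIZE),
    ScalarOutcome("innovation", kind=ScalarOutcome.MAXIMIZE),
]

# Inputs -> outputs: sample 1,000 scenarios and cross them with
# 10 randomly sampled candidate policies.
experiments, outcomes = perform_experiments(model, scenarios=1000, policies=10)
```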
Optimization, on the other hand, operates in a seemingly inverted manner. This process begins by identifying the metrics of interest – the desired outcomes – and proceeds by employing computational techniques, often involving AI algorithms, to find the optimal combination of policy levers that achieve these goals under particular scenarios. Here, the direction of attention is from outputs back to inputs. We begin with a predetermined end state and traverse backward, seeking out the best strategies to reach these objectives.
The optimization process is one of the key parts of the DMDU approach. Although we typically use several AI and ML algorithms, optimization to search for policies is central. For this optimization, we like to use a member of the multi-objective evolutionary algorithm (MOEA) family. MOEAs facilitate the identification of Pareto-optimal policies: policies that cannot be improved in one objective without negatively affecting another.

The uncertainty inherent in DMDU scenarios often necessitates the consideration of multiple conflicting objectives. MOEAs, with their inherent ability to handle multiple objectives and explore a large and diverse solution space, offer an effective approach to such complex, multi-dimensional decision problems. They work by generating a population of potential solutions and iteratively evolving this population through processes akin to natural selection, mutation, and recombination. In each iteration or generation, solutions that represent the most efficient trade-offs among the objectives are identified and preserved. This evolutionary process continues until a set of Pareto-optimal solutions is identified.

By employing MOEAs in DMDU, decision-makers can visualize the trade-offs between competing objectives through the generated Pareto front. This enables them to understand the landscape of possible decisions and their impacts, providing valuable insights when the optimal policy is not clear-cut due to the deep uncertainty involved. Thus, MOEAs contribute significantly to the robustness and adaptability of policy decisions under complex, uncertain circumstances.
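Here is a sketch of the inverted, outputs-to-inputs direction, reusing the `model` object from the previous snippet. The EMA Workbench wraps a MOEA (an epsilon-NSGA-II variant) behind this call; the search budget and epsilon values below are arbitrary choices for illustration.

```python
from ema_workbench import MultiprocessingEvaluator

# Outputs -> inputs: search the lever space for Pareto-optimal policies
# with a multi-objective evolutionary algorithm.
with MultiprocessingEvaluator(model) as evaluator:
    pareto_set = evaluator.optimize(
        nfe=10_000,           # search budget: number of function evaluations
        searchover="levers",  # optimize policy levers, not uncertainties
        epsilons=[1.0, 0.5],  # one epsilon (resolution) per outcome metric
    )
```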
X: Exogenous Uncertainties
Here, I simply want to list a few key parameters that could be considered uncertain and that we could take into account:
As you have probably noticed, these factors are rather abstract. For better operationalization, we would need to know the details of the model. In principle, any parameter that is not affected by another parameter could be marked as an exogenous uncertainty; it depends on the model at hand. If a parameter within a model is properly operationalized, we can choose a plausible range for it. The ranges of all uncertainty variables span the uncertainty space, from which we can choose or sample scenarios.
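For illustration, here is a handful of hypothetical uncertainties with made-up ranges, in the same EMA Workbench notation as before. Proper operationalization would require the concrete model (and the toy model function from earlier would, of course, need matching parameters).

```python
from ema_workbench import CategoricalParameter, RealParameter

# Exogenous uncertainties (X). Names and bounds are hypothetical
# placeholders, not researched estimates.
model.uncertainties = [
    RealParameter("hardware_price_performance_growth", 0.0, 0.6),
    RealParameter("algorithmic_progress_rate", 0.0, 0.5),
    RealParameter("initial_public_trust_in_ai", 0.0, 1.0),
    CategoricalParameter("takeoff_speed", ["slow", "moderate", "fast"]),
]
# perform_experiments() samples scenarios from the space spanned by
# these ranges (by default via Latin hypercube sampling).
```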
L: Policy Levers
The other inputs to the model are the policy levers. Depending on which actors are willing to follow your advice, their possible actions are potential levers. Here too, the selection of policy levers depends on the particular model and the problem being modeled. Selecting relevant policy levers requires thorough research and clear operationalization, though. Some ideas that float around in the AI governance scene are listed below:
Certainly, it's imperative to remain mindful that the policy levers we consider must be germane to our problem formulation, within our sphere of control, and sufficiently concrete to be actionable. In the context of AI governance, this means focusing on mechanisms and regulations that can directly impact the design, development, deployment, and use of AI systems. While the range of potential interventions might be vast, our model should hone in on those aspects we can directly influence or those that could feasibly be manipulated by the stakeholders we represent or advise. These could encompass regulations on AI transparency, safety research funding, or education programs for AI developers, among others. All considered policy actions need to be tangible, practical, and capable of precise definition within the model. Broad, ill-defined, or inaccessible policy levers can lead to vague, non-actionable, or even misleading results from the model. Hence, in crafting our agent-based model for AI governance, the judicious selection and specification of policy levers is of paramount importance.
It's crucial to underline that a policy isn't simply a single action, but a cohesive set of measures, a tapestry of interwoven strands that together create a concerted strategy. When we speak of finding an optimal policy in the context of AI governance, we're not on a hunt for a silver bullet — a solitary action that will neatly resolve all our challenges. Instead, we're seeking a potent blend of policy levers, each precisely tuned to yield the greatest collective impact. Think of it as orchestrating a symphony — each instrument plays its part, and when they're all in harmony, we create something far greater than the sum of its parts. The objective then is to discover the combinations of actions — their nature, their degree, their timing — that yield Pareto-optimal policy solutions. In other words, we aim to identify sets of actions that, when taken together, provide the best possible outcomes across our diverse set of goals and constraints, without one benefit being improved only at the expense of another. This more holistic perspective is vital for the effective governance of something as complex and multifaceted as AI.
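In the same notation, a policy is then one point in a multi-dimensional lever space rather than a single switch. The lever names, domains, and the compute-threshold encoding below are hypothetical illustrations; ideas like compute monitoring or licensing could be operationalized very differently.

```python
from ema_workbench import (CategoricalParameter, IntegerParameter,
                           Policy, RealParameter)

# Policy levers (L); names and domains are hypothetical illustrations.
model.levers = [
    RealParameter("safety_research_funding_share", 0.0, 0.2),
    IntegerParameter("compute_reporting_threshold_exp", 24, 28),  # 10^x FLOP
    CategoricalParameter("licensing_regime", ["none", "frontier_only", "broad"]),
]

# One candidate policy = one concrete combination of lever settings.
baseline = Policy(
    "baseline",
    safety_research_funding_share=0.05,
    compute_reporting_threshold_exp=26,
    licensing_regime="frontier_only",
)
```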
R: Relations
In the context of the XLRM framework applied to an ABM for AI governance, "Relations" refers to the established cause-and-effect linkages and interactions that govern the behavior of the model. These encapsulate the fundamental rules of the model and shape how the system components (agents, states, etc.) interact over time and in response to various actions and external factors. They can include mathematical formulas, decision-making algorithms, probabilistic dependencies, and other types of deterministic or stochastic relationships.

In an AI governance model, these could represent a wide range of interactions, such as how AI developers respond to regulations, how AI systems' behaviors change in response to different inputs and environments, how public perception of AI evolves over time, and how different policies might interact with each other. By defining these relations, we can simulate the complex dynamics of the AI governance system under a variety of conditions and policy actions. This allows us to gain insights into potential futures and the impacts of different policy options, thus supporting more informed and robust decision-making in the face of deep uncertainty.
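As a small, concrete example of one such relation, here is a hypothetical stochastic update rule for public perception of AI. The functional form and all constants are pure assumptions for the sketch, not empirical estimates.

```python
import random

def update_public_perception(perception, incidents, media_amplification=1.5):
    """One illustrative relation (R): public perception of AI (0 to 1)
    erodes with observed incidents and slowly recovers otherwise.
    Functional form and constants are assumptions, not estimates."""
    if incidents > 0:
        perception -= 0.05 * media_amplification * incidents
    else:
        perception += 0.01 * (1.0 - perception)  # slow drift back toward trust
    perception += random.gauss(0, 0.01)          # unmodeled influences as noise
    return max(0.0, min(1.0, perception))
```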
Selecting appropriate relations for a model is a crucial and delicate task. It fundamentally shapes the model's behavior and its capacity to provide meaningful insights. When considering the integration of economic aspects into an ABM for AI governance, it would be tempting to adopt established macroeconomic theories, such as neoclassical economics. However, such an approach may not be best suited to the complex, dynamic, and deeply uncertain nature of AI governance.

Consider the potential pitfalls of using something like Nordhaus' Dynamic Integrated Climate-Economy (DICE) model, which applies neoclassical economic principles to climate change economics. While the DICE model has made important contributions to our understanding of the economics of climate change, it has also been criticized for its oversimplifications and assumptions. For example, it assumes market equilibrium, perfect foresight, and rational, utility-maximizing agents, and it aggregates complex systems into just a few key variables.

Now, if we were to integrate such a model with AI takeoff mechanisms, we could encounter a number of issues. For one, AI takeoff is a deeply uncertain and potentially rapid process, which might not align well with the DICE model's equilibrium assumptions and aggregated approach. Also, the model's assumption of perfect foresight and rational agents might not adequately reflect the behavior of stakeholders in the face of a fast-paced AI takeoff. It may ignore the potential for surprise, panic, or other non-rational responses, as well as the potential for unequal impacts or power dynamics. Such misalignments could lead to misleading results and poorly suited policy recommendations.
In contrast, an ABM that adopts principles from complexity economics could be a better choice. Complexity economics acknowledges the dynamic, out-of-equilibrium nature of economies, the heterogeneity of agents, and the importance of network effects and emergent (market) phenomena. It's more amenable to exploring complex, uncertain, and non-linear dynamics, such as those expected in AI governance and AI takeoff scenarios. As mentioned above, in an ABM for AI governance, we could model different types of agents (AI developers, regulators, public, AI systems themselves) with their own behaviors, objectives, and constraints, interacting in various ways. This could allow us to capture complex system dynamics, explore a wide range of scenarios, and test the impacts of various policy options in a more realistic and nuanced manner.
In conclusion, the selection of relations in a model is not a trivial task. It requires a deep understanding of the system being modeled, careful consideration of the model's purpose and scope, and a thoughtful balancing of realism, complexity, and computational feasibility. It's a task that demands not only technical expertise but also a good dose of humility, creativity, and critical thinking.
M: Performance Metrics
Performance metrics for AI governance could vary greatly depending on the specific context and purpose of the model. However, here are some potential metrics that might be generally applicable:
Remember, the key to effective performance metrics is to ensure that they align with the purpose of your model and accurately reflect the aspects of the system that you care most about. The specifics will depend on your particular context, purpose, and the nature of the AI governance system you are modeling.
3. Exploratory Analysis
With our agent-based model for AI governance now structured, our variables defined, and their relevant ranges established, we can proceed to the final, yet crucial, stage — the exploratory analysis. This step is what makes DMDU approaches so powerful; it allows us to navigate the vast landscape of possibilities and understand the space of outcomes across different policies and scenarios.
In exploratory analysis, rather than predicting a single future and optimizing for it, we explore a broad range of plausible future scenarios and analyze how various policy actions perform across these. The goal is to identify robust policies – those that perform well across a variety of possible futures, rather than merely optimizing for one presumed most likely future.
The Process in a Nutshell
Once we have initiated our large-scale, scenario-based exploration, the next critical step is identifying vulnerable scenarios. These are scenarios under which our system performs poorly on our various performance metrics. By examining these vulnerable scenarios, we gain insights into the conditions under which our proposed policies might falter. This understanding is essential in designing resilient systems capable of withstanding diverse circumstances.

For each of these identified scenarios, we then use a process known as multi-objective optimization, often employing evolutionary algorithms, to find sets of policy actions that perform well. These algorithms iteratively generate, evaluate, and select policies, seeking those that offer the best trade-offs across our performance metrics. The aim here is not to find a single 'best' policy but rather to identify a set of Pareto-optimal policies. These are policies for which no other policy exists that performs at least as well across all objectives and strictly better in at least one.
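To make the first half of this concrete: scenario discovery techniques such as the Patient Rule Induction Method (PRIM) can locate the vulnerable region of the uncertainty space. Below is a sketch assuming the EMA Workbench and the `experiments`/`outcomes` objects from the earlier snippets, with an arbitrary threshold for "performing poorly".

```python
from ema_workbench.analysis import prim

# Label each experiment as "poor" if incidents exceed an (arbitrary) threshold.
poor = outcomes["total_incidents"] > 25

# Drop bookkeeping columns so PRIM searches only over model inputs.
x = experiments.drop(columns=["scenario", "policy", "model"])

prim_alg = prim.Prim(x, poor, threshold=0.8)
box = prim_alg.find_box()   # the input subspace concentrating poor outcomes
box.show_tradeoff()         # inspect coverage/density trade-offs
box.inspect()               # the box limits describe the vulnerable scenarios
```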
However, due to the deep uncertainty inherent in AI governance, we cannot be sure that the future will unfold according to any of our specific scenarios, even those that we've identified as vulnerable. Therefore, we re-evaluate our set of Pareto-optimal policies across a much larger ensemble of scenarios, often running into the tens of thousands. This process allows us to understand how our policies perform under a wide variety of future conditions, effectively stress-testing them for robustness. In this context, a robust policy is one that performs satisfactorily across a wide range of plausible futures[5]. For further elaboration, see this section of my previous article.
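Here is a sketch of this stress-testing step, again with the EMA Workbench objects from above: re-evaluate each Pareto-optimal policy over a large scenario ensemble and score it with a simple satisficing metric (the fraction of scenarios in which it stays below a threshold). Both the threshold and the choice of robustness metric are assumptions; see the footnote on robustness metrics for alternatives.

```python
from ema_workbench import Policy, perform_experiments

# Turn each row of the Pareto set into a named Policy object.
lever_names = [lever.name for lever in model.levers]
candidates = [Policy(f"pareto_{i}", **row.to_dict())
              for i, row in pareto_set[lever_names].iterrows()]

# Stress test: evaluate every candidate policy across 10,000 scenarios.
experiments, outcomes = perform_experiments(model, scenarios=10_000,
                                            policies=candidates)

# Satisficing robustness: share of scenarios with acceptably few incidents.
results = experiments[["policy"]].copy()
results["acceptable"] = outcomes["total_incidents"] <= 25
robustness = results.groupby("policy")["acceptable"].mean()
print(robustness.sort_values(ascending=False))
```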
Overall, the goal of this exploratory analysis is to illuminate the landscape of potential outcomes, and in doing so, guide decision-makers towards policies that perform sufficiently well – even in particularly bleak scenarios. By applying these methods, we can navigate the deep uncertainties of AI governance with a more informed, resilient, and robust approach.
What's the Outcome of DMDU?
The culmination of the entire modeling and Decision Making under Deep Uncertainty process is a suite of robust and adaptable policies and strategies for governing AI, designed to perform well under a wide range of plausible futures. The core deliverables of this process can be roughly divided into three categories: insights, recommendations, and tools for ongoing decision-making.
1. Insights: The process of employing DMDU methodologies within AI governance ushers in a wealth of insights that can have profound impacts on the way we approach the field. These insights span across three vital facets: system dynamics, key drivers of outcomes, and potential vulnerabilities.
2. Policy Options and Trade-offs: The DMDU process generates a set of Pareto-optimal policy solutions. These are distinct combinations of policy actions that have demonstrated satisfactory performance across a spectrum of scenarios. It's important to note that these are not prescriptive "recommendations" as such, but rather a diverse suite of potential solutions. As researchers or policy analysts, our role is not to dictate a single best policy, but to articulate the various options and their respective trade-offs.

Each Pareto-optimal policy solution represents a unique balance of outcomes across different objectives. The inherent trade-offs between these policy solutions reflect the complex, multi-dimensional nature of AI governance. By effectively outlining these options and elucidating their respective strengths and weaknesses, we offer stakeholders a comprehensive overview of potential paths. Our goal is to facilitate informed decision-making by providing a clear, detailed map of the policy landscape.

This output should instigate thoughtful deliberation among stakeholders, who can weigh the various trade-offs in light of their own values, priorities, and risk tolerance. Through such a process, the stakeholders are empowered to select and implement the policies that best align with their objectives, all the while having a clear understanding of the associated trade-offs. This approach supports a more democratic and inclusive decision-making process, ensuring that decisions about AI governance are informed, thoughtful, and responsive to a diverse range of needs and perspectives.
3. Tools for Ongoing Decision-Making: The enduring value of the DMDU process lies not only in the immediate insights it offers but also in the arsenal of tools it provides for sustainable decision-making. These tools, derived from the extensive modeling and data analysis conducted during the process, remain invaluable resources for continued policy evaluation and strategy adaptation.
The potential impact of these deliverables in the world of AI governance is significant. Insights and policy options with trade-offs from the DMDU process can inform the development of more effective and resilient governance strategies, shape the debate around AI policy, and guide the actions of policymakers, industry leaders, and other stakeholders. The decision-support tools can enhance the capacity of these actors to make well-informed decisions in the face of uncertainty and change. Ultimately, by integrating DMDU methods into the field of AI governance, we can enhance our collective capacity to navigate the challenges and uncertainties of AI, mitigate potential risks, and harness the immense potential of AI in a way that aligns with our societal values and goals.
Conclusion
In an era where AI technologies play a pivotal role in shaping our world, the necessity to recalibrate our approach to AI governance is paramount. This article seeks to underline the critical need for embracing innovative methodologies such as DMDU to tackle the intricacies and challenges this transformative technology brings.
DMDU methodology, with its emphasis on considering a broad spectrum of possible futures, encourages an AI-driven approach to AI governance that is inherently robust, adaptable, and inclusive. It's a methodology well-equipped to grapple with socio-technical complexities, the vast diversity of stakeholder values, deep uncertainties, and the need for robust, evolving policies – all core aspects of AI governance.
As we look ahead, we are faced with a choice. We can continue to navigate the terrain of AI governance with the existing tools and maps or we can reboot our journey, equipped with a new compass – DMDU and similar methodologies. The latter offers a means of moving forward that acknowledges and embraces the complexity and dynamism of AI and its governance.
Referencing back to the title, this 'rebooting' signifies a shift from traditional, deterministic models of governance to one that embraces uncertainty and complexity. It's an AI-driven approach to AI governance, where AI is not just the subject of governance but also a tool to better understand and navigate the complexities of governance itself. In a world growing more reliant on AI technologies, the application of methodologies like DMDU in AI governance might provide a fresh perspective. As stakeholders in AI governance – encompassing policy makers, AI developers, civil society organizations, and individual users – it may be beneficial to explore and consider the potential value of systems modeling and decision-making under deep uncertainty. Incorporating such approaches could potentially enhance our strategies and policies, making them more robust and adaptable. This could, in turn, better equip us to address the broad spectrum of dynamic and evolving challenges that AI presents.
In conclusion, rebooting AI governance is not just a call for an AI-driven approach to AI governance, but also an invitation to embrace the uncertainties, complexities, and opportunities that lie ahead. By adopting DMDU, we can ensure our path forward in AI governance is not just responsive to the needs of today, but resilient and adaptable enough to navigate the future. As we continue to traverse this dynamic landscape, let's commit to a future where AI is governed in a manner that's as innovative and forward-thinking as the technology itself.
For simplicity's sake, within the scope of this article, I refer to the combination of systems modeling AND decision-making under deep uncertainty (DMDU) as simply DMDU.
Further explanations of the terminology can be found in the introduction article.
As suggested, for example, in Shavit, Y. (2023). What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring. arXiv preprint arXiv:2303.11341.
As recently promoted by Sam Altman at his congressional hearing.
There are more stringent definitions of robustness, which can depend on a plethora of factors, including how risk-averse our stakeholders are with respect to particular performance metrics. E.g., we might be extremely risk-averse when it comes to the number of casualties. For a good analysis of various robustness metrics, see McPhail, C., Maier, H. R., Kwakkel, J. H., Giuliani, M., Castelletti, A., & Westra, S. (2018). Robustness metrics: How are they calculated, when should they be used and why do they give different results? Earth's Future, 6(2), 169-191.