Summary:

The AI strategy space is currently bottlenecked by entangled and under-defined research questions that are extremely difficult to resolve, as well as by a lack of current institutional capacity to absorb and utilize new researchers effectively.
Accordingly, there is very strong demand for people who are good at this type of “disentanglement” research and well-suited to conduct it somewhat independently. There is also demand for some specific types of expertise which can help advance AI strategy and policy. Advancing this research even a little bit can have massive multiplicative effects by opening up large areas of work for many more researchers and implementers to pursue.
Until the AI strategy research bottleneck clears, many areas of concrete policy research and policy implementation are necessarily on hold. Accordingly, a large majority of people interested in this cause area, even extremely talented people, will find it difficult to contribute directly, at least in the near term.
If you are in this group, with talents and expertise outside of these narrow areas, and want to contribute to AI strategy, I recommend you build up your capacity and try to put yourself in an influential position. This will set you up well to guide high-value policy interventions as clearer policy directions emerge. Try not to be discouraged or dissuaded from pursuing this area by the current low capacity to directly utilize your talent! The level of talent across a huge breadth of important areas that I have seen from the EA community in my role at FHI is astounding and humbling.
Depending on how slow these “entangled” research questions are to unjam, and on the timelines of AI development, there might be a very narrow window of time in which it will be necessary to have a massive, sophisticated mobilization of altruistic talent. This makes being prepared to mobilize effectively and take impactful action on short notice extremely valuable in expectation.
In addition to strategy research, operations work in this space is currently highly in demand. Experienced managers and administrators are especially needed. More junior operations roles might also serve as a good orientation period for EAs who would like to take some time after college before either pursuing graduate school or a specific career in this space. This can be a great way to tool up while we as a community develop insight on strategic and policy direction. Additionally, successful recruitment in this area should help with our institutional capacity issues substantially.
(3600 words. Reading time: approximately 15 minutes with endnotes.)
Introduction

Intended audience: This post is aimed at EAs and other altruistic types who are already interested in working in AI strategy and AI policy because of its potential large-scale effect on the future.[1]
Epistemic status: The below represents my current best guess at how to make good use of human resources given current constraints. I might be wrong, and I would not be surprised if my views changed with time. That said, my recommendations are designed to be robustly useful across most probable scenarios. These are my personal thoughts, and do not necessarily represent the views of anyone else in the community or at the Future of Humanity Institute.[2] (For some areas where reviewers disagreed, I have added endnotes explaining the disagreement.) This post is not me acting in any official role; it is just me, as an EA community member who really cares about this cause area, trying to contribute my best guess for how to think about and cultivate this space.
Why my thoughts might be useful: I have been the primary recruitment person at the Future of Humanity Institute (FHI) for over a year, and am currently the project manager for FHI’s AI strategy programme. Again, I am not writing this in either of these capacities, but being in these positions has given me a chance to see just how talented the community is, to spend a lot of time thinking about how to best utilize this talent, and has provided me some amazing opportunities to talk with others about both of these things.
Definitions
There are lots of ways to slice this space, depending on what exactly you are trying to see, or what point you are trying to make. The terms and definitions I am using are a bit tentative and not necessarily standard, so feel free to discard them after reading this. (These are also not all of the relevant types or areas of research or work, but the subset I want to focus on for this piece.)[3]
AI strategy research:[4] the study of how humanity can best navigate the transition to a world with advanced AI systems (especially transformative AI), including political, economic, military, governance, and ethical dimensions.
AI policy implementation is carrying out the activities necessary to safely navigate the transition to advanced AI systems. This includes an enormous amount of work that will need to be done in government, the political sphere, private companies, and NGOs in the areas of communications, fund allocation, lobbying, politics, and everything else that is normally done to advance policy objectives.
Operations (in support of AI strategy and implementation) is building, managing, growing, and sustaining all of the institutions and institutional capacity for the organizations advancing AI strategy research and AI policy implementation. This is frequently overlooked, badly neglected, and extremely important and impactful work.
Disentanglement research:[5] This is a squishy made-up term I am using only for this post that is sort of trying to gesture at a type of research that involves disentangling ideas and questions in a “pre-paradigmatic” area where the core concepts, questions, and methodologies are under-defined. In my mind, I sort of picture this as somewhat like trying to untangle knots in what looks like an enormous ball of fuzz. (Nick Bostrom is a fantastic example of someone who is excellent at this type of research.)
To quickly clarify, as I mean to use the terms, AI strategy research is an area or field of research, a bit like quantum mechanics or welfare economics. Disentanglement research, as I mean it, is more a type of research, a bit like quantitative research or conceptual analysis, and is defined more by the character of the questions researched and the methods used to advance toward clarity. Disentanglement is meant to be field agnostic. The relationship between the two is that, in my opinion, AI strategy research is an area that, at its current early stage, demands a lot of disentanglement-type research to advance.
The current bottlenecks in the space (as I see them)
Disentanglement research is needed to advance AI strategy research, and is extremely difficult
Figuring out a good strategy for approaching the development and deployment of advanced AI requires addressing enormous, entangled, under-defined questions, which exist well outside of most existing research paradigms. (This is not all it requires, but it is a central part of it at its current stage of development.)[6] This category includes the study of multi-polar versus unipolar outcomes, technical development trajectories, governance design for advanced AI, international trust and cooperation in the development of transformative capabilities, info/attention/reputation hazards in AI-related research, the dynamics of arms races and how they can be mitigated, geopolitical stabilization and great power war mitigation, research openness, structuring safe R&D dynamics, and many more topics.[7] It also requires identifying other large, entangled questions such as these to ensure no crucial considerations in this space are neglected.
From my personal experience trying and failing to do good disentanglement research and watching as some much smarter and more capable people have tried and struggled as well, I have come to think of it as a particular skill or aptitude that does not necessarily correlate strongly with other talents or expertise, a bit like mechanical, mathematical, or language aptitude. I have no idea what makes people good at this, or how exactly they do it, but it is pretty easy to identify if it has been done well once the person is finished. (I can appreciate the quality of Nick Bostrom’s work, like I can appreciate a great novel, but how they are created I don’t really understand and can’t myself replicate.) The aptitude also seems to be quite rare, and it is very difficult to identify in advance who will be good at this sort of work, with the only good indicator, as far as I can tell, being a past history of succeeding in this type of research. The result is that it is really hard to recruit for, there are very few people doing it full time in the AI strategy space, and this number is far, far lower than optimal.
The main importance of disentanglement research, as I imagine it, is that it makes questions and research directions clearer and more tractable for other types of research. As Nick Bostrom and others have sketched out the considerations surrounding the development of advanced AI through “disentanglement”, tractable research questions have arisen. I strongly believe that as more progress is made on topics requiring disentanglement in the AI strategy field, more tractable research questions will arise. As these more tractable questions become clear, and as they are studied, strategic direction, and concrete policy recommendations should follow. I believe this then will open up the floodgates for AI policy implementation work.
Domain experts with specific skills and knowledge are also needed
While I think that our biggest need right now is disentanglement research, there are also certain other skills and knowledge sets that would be especially helpful for advancing AI strategy research. This includes expertise in:
Mandarin and/or Chinese politics and/or the Chinese ML community.
International relations, especially in the areas of international cooperation, international law, global public goods, constitution and institutional design, history and politics of transformative technologies, governance, and grand strategy.
Knowledge and experience working at a high level in policy, international governance and diplomacy, and defense circles.
Technology and other types of forecasting.
Quantitative social science, such as economics or analysis of survey data.
Law and/or Policy.
I expect these skills and knowledge sets to help provide valuable insight on strategic questions including governance design, diplomatic coordination and cooperation, arms race dynamics, technical timelines and capabilities, and many more areas.
Until AI strategy advances, AI policy implementation is mostly stalled
There is a wide consensus in the community, with which I agree, that aside from a few robust recommendations,[8] it is important not to act or propose concrete policy in this space prematurely. We simply have too much uncertainty about the correct strategic direction. Do we want tighter or looser IP law for ML? Do we want a national AI lab? Should the government increase research funding in AI? How should we regulate lethal autonomous weapons systems? Should there be strict liability for AI accidents? It remains unclear what the good recommendations are. There are path dependencies that develop quickly in many areas once a direction is initially started down. It is difficult to pass a law that is the exact opposite of a previous law recently lobbied for and passed. It is much easier to start an arms race than to stop it. With most current AI policy questions, the correct approach, I believe, is not to use heuristics of unclear applicability to choose positions, even if those heuristics have served well in other contexts,[9] but to wait until the overall strategic picture is clear, and then to push forward with whatever advances the best outcome.
The AI strategy and policy space, and EA in general, is also currently bottlenecked by institutional and operational capacity
This is not as big an immediate problem as the AI strategy bottleneck, but it is an issue, and one that exacerbates the research bottleneck as well.[10] FHI alone will need to fill 4 separate operations roles at senior and junior levels in the next few months, and other organizations in this space have similar shortages. These shortages compound the research bottleneck because they make it difficult to build effective, dynamic AI strategy research groups. The lack of institutional capacity also might become a future hindrance to the massive, rapid “AI policy implementation” mobilization that is likely to be needed.
Next actions
First, I want to make clear that if you want to work in this space, you are wanted in this space. There is a tremendous amount of need here. That said, as I currently see it, because of the low tractability of disentanglement research, institutional constraints, and the effect of both of these things on the progress of AI strategy research, a large majority of people who are very needed in this area, even extremely talented people, will not be able to directly contribute immediately. (This is not a good position for us to be in, as I think we are underutilizing our human resources, but hopefully we can fix this quickly.)
This is why I am hoping that we can build up a large community of people with a broader set of skills, and especially policy implementation skills, who are in positions of influence from which they can mobilize quickly and effectively and take important action once the bottleneck clears and direction comes into focus.
Actions you can take right now
Read all the things! There are a couple of publications in the pipeline from FHI, including a broad research agenda that should hopefully advance the field a bit. Sign up to FHI’s newsletter and the EA newsletter, which will have updates as the cause area advances and unfolds. There is also an extensive reading list, not especially narrowly tailored to the considerations of interest to our community, but still quite useful. I recommend skimming it and picking out some specific publications or areas to read more about.[11] Try to skill up in this area and put yourself in a position to potentially advance policy when the time comes. Even if it is inconvenient, go to EA group meet-ups and conferences, read and contribute to the forums and newsletters, and keep in the loop. Be an active and engaged community member.
Potential near term roles in AI Strategy
FHI is recruiting, but it is somewhat capacity limited and is trying to triage in order to advance strategy as quickly as possible.
If you have good reason to think you would be good at disentanglement research on AI strategy (likely meaning a record of success with this type of research) or have expertise in the areas listed as especially in demand, please get in touch.[12] I would strongly encourage you to do this even if you would rather not work at FHI, as remote positions are possible if needed, and there are other organizations I can refer you to. I would also strongly encourage you to do this even if you are reluctant to stop or put on hold whatever you are currently doing. Please also encourage your friends who likely would be good at this to strongly consider it. If I am correct, the bottleneck in this space is holding back a lot of potentially vital action by many, many people who cannot be mobilized until they have a direction in which to push. (The framers need the foundation finished before they can start.) Anything you can contribute to advancing this field of research will have dramatic force-multiplying effects by “creating jobs” for dozens or hundreds of other researchers and implementers. You should also consider applying for one or both of the AI Macrostrategy roles at FHI if you see this before 29 Sept 2017.[13]
If you are unsure of your skill with disentanglement research, I would strongly encourage you to try to make some independent progress on a question of this type and see how you do. I realize this task is itself a bit under-defined, but that is really part of the problem space, and the thing you are trying to test your skills against. Read around in the area, find something sticky you think you might be able to disentangle, and take a run at it.[14] If it goes well, whether or not you want to get into the space immediately, please send it in.
If you feel as though you might be a borderline candidate because of your relative inexperience with an area of in-demand expertise, you might consider trying to tool up a bit in the area, or applying for an internship. You might also err on the side of sending in a CV and cover letter just in case you are miscalibrated about your skill compared to other applicants. That said, again, do not think that not being immediately employed is any reflection of your expected value in this space! Do not be discouraged, please stay interested, and continue to pursue this!
Preparation for mobilization
Being a contributor to this effort, as I imagine it, requires investing in yourself, your career, and the community, while positioning yourself well for action once the bottleneck unjams and robust strategic direction is clearer.
I also highly recommend investing in building up your skills and career capital. This likely means excelling in school, going to graduate school, pursuing relevant internships, building up your CV, etc. Invest heavily in yourself. Additionally, stay in close communication with the EA community and keep up to date with opportunities in this space as they develop. (Several people are currently looking at starting programs specifically to on-ramp promising people into this space. This is one reason why signing up to the newsletters might be really valuable, so that opportunities are not missed.) To repeat myself from above, attend meet-ups and conferences, read the forums and newsletters, and be active in the community. Ideally this cause area will become a sub-community within EA and a strong self-reinforcing career network.
A good way to determine how to prepare and tool up for a career in either AI policy research or implementation is to look at the 80,000 Hours’ Guide to working in AI policy and strategy. Fields of study that are likely to be most useful for AI policy implementation include policy, politics and international relations, quantitative social sciences, and law.
Especially useful is finding roles of influence or importance, even with low probability but high expected value, within (especially the US federal) government.[15] Other potentially useful paths include non-profit management, project management, communications, public relations, grantmaking, policy advising at tech companies, lobbying, party and electoral politics and advising, political “staffing,” or research within academia, think tanks, or large corporate research groups, especially in the areas of machine learning, policy, governance, law, defense, and related fields. A lot of information about the skills needed for various sub-fields within this area is available at 80,000 Hours.
Working in operations
Another important bottleneck in this space, though smaller in my estimation than the main bottleneck, is in institutional capacity within this currently tiny field. As mentioned above, FHI needs to fill 4 separate operations roles at senior and junior levels in the next few months. (We are also in need of a temporary junior-level operations person immediately; if you are a UK citizen, consider getting in touch about this!)[16][17] Other organizations in this space have similar shortages. If you are an experienced manager, administrator, or similar, please consider applying for, or getting in touch about, our senior roles. Alternatively, if you are freshly out of school, but have some proven hustle (especially proven by extensive extracurricular involvement, such as running projects or groups) and would potentially like to take a few years to advance this cause area before going to graduate school or locking in a career path, consider applying for a junior operations position, or get in touch.[18] Keep in mind that operations work at an organization like FHI can be a fantastic way to tool up and gain fluency in this space, orient yourself, discover your strengths and interests, and make contacts, even if you intend to move on to non-operations roles eventually.
Conclusion
The points I hope you can take away in approximate order of importance:
1) If you are interested in advancing this area, stay involved. Your expected value is extremely high, even if there are no excellent immediate opportunities to have a direct impact. Please join this community, and build up your capacity for future research and policy impact in this space.
2) If you are good at “disentanglement research” please get in touch, as I think this is our major bottleneck in the area of AI strategy research, and is preventing earlier and broader mobilization and utilization of our community’s talent.
3) If you are strong or moderately strong in key high-value areas, please also get in touch. (Perhaps err on the side of getting in touch if you are unsure.)
4) Excellent things to do to add value to this area, in expectation, include:
a) Investing in your skills and career capital, especially in high-value areas, such as studying in-demand topics.
b) Building a career in a position of influence (especially in government, global institutions, or in important tech firms).
c) Helping to build up this community and its capacity, including building a strong and mutually reinforcing career network among people pursuing AI policy implementation from an EA or altruistic perspective.
5) Also of very high value is operations work and other efforts to increase institutional capacity.
Thank you for taking the time to read this. While it is very unfortunate that the current ground reality is, as far as I can tell, not well structured for immediate wide mobilization, I am confident that we can do a great deal of preparatory and positioning work as a community, and that with some forceful pushing on these bottlenecks, we can turn this enormous latent capacity into extremely valuable impact.
Let’s get going “doing good together” as we navigate this difficult area, and help make a tremendous future!
Endnotes:
[1] For those of you not in this category who are interested in seeing why you might want to be, I recommend this short EA Global talk, the Policy Desiderata paper, and OpenPhil’s analysis. For a very short consideration on why the far future matters, I recommend this very short piece, and for a quick fun primer on AI as transformative I recommend this. Finally, once the hook is set, the best resource remains Superintelligence.
[2] Relatedly, I want to thank Miles Brundage, Owen Cotton-Barratt, Allan Dafoe, Ben Garfinkel, Roxanne Heston, Holden Karnofsky, Jade Leung, Kathryn Mecrow, Luke Muehlhauser, Michael Page, Tanya Singh, and Andrew Snyder-Beattie for their comments on early drafts of this post. Their input dramatically improved it. That said, again, they should not be viewed as endorsing anything in this. All mistakes are mine. All views are mine.
[3] There are some interesting tentative taxonomies and definitions of the research space floating around. I personally find the following, quoting from a draft document by Allan Dafoe, especially useful:
AI strategy [can be divided into]... four complementary research clusters: the technical landscape, AI politics, AI governance, and AI policy. Each of these clusters characterizes a set of problems and approaches, within which the density of conversation is likely to be greater. However, most work in this space will need to engage the other clusters, drawing from and contributing high-level insights. This framework can perhaps be clarified by analogy to the problem of building a new city. The technical landscape examines the technical inputs and constraints to the problem, such as trends in the price and strength of steel. Politics considers the contending motivations of various actors (such as developers, residents, businesses), the possible mutually harmful dynamics that could arise and strategies for cooperating to overcome them. Governance involves understanding the ways that infrastructure, laws, and norms can be used to build the best city, and proposing ideal masterplans of these to facilitate convergence on a common good vision. The policy cluster involves crafting the actual policies to be implemented to build this city.
In a comment on this draft, Jade Leung pointed out what I think is an important implicit gap in the terms I am using, and highlights the importance of not treating these as either final, comprehensive, or especially applicable outside of this piece:
There seems to be a gap between [AI policy implementation] and 'AI strategy research' - where does the policy research feed in? I.e. the research required to canvas and analyse policy mechanisms by which strategies are most viably realised, prior to implementation (which reads here more as boots-on-the-ground alliance building, negotiating, resource distribution etc.)
[4] Definition lightly adapted from Allan Dafoe and Luke Muehlhauser.
[5] This idea owes a lot to conversations with Owen Cotton-Barratt, Ben Garfinkel, and Michael Page.
[6] I did not get a sense that any reviewer necessarily disagreed that this is a fair conceptualization of a type of research in this space, though some questioned its importance or centrality to current AI strategy research. I think the central disagreement here is on how many well-defined and concrete questions there are left to answer at the moment, how far answering them is likely to go in bringing clarity to this space and developing robust policy recommendations, and the relative marginal value of addressing these existing questions versus producing more through disentanglement of the less well defined areas.
[7] One commenter did not think these were a good sample of important questions. Obviously this might be correct, but in my opinion, these are absolutely among the most important questions to gain clarity on quickly.
[8] My personal opinion is that there are only three or maybe four robust policy-type recommendations we can make to governments at this time, given our uncertainty about strategy: 1) fund safety research, 2) commit to a common good principle, and 3) avoid arms races. The fourth suggestion is both an extension of the other three and tentative, but is something like: fund joint intergovernmental research projects located in relatively geopolitically neutral countries with open membership and a strong commitment to a common good principle.
I should note that this point was also flagged as potentially controversial by one reviewer. Additionally, Miles Brundage, quoted below, had some useful thoughts related to my tentative fourth suggestion:
In general, detailed proposals at this stage are unlikely to be robust due to the many gaps in our strategic and empirical knowledge. We "know" arms races are probably bad but there are many imaginable ways to avoid or mitigate them, and we don't really know what the best approach is yet. For example, launching big new projects might introduce various opportunities for leakage of information that weren't there before, and politicize the issue more than might be optimal as the details are worked out. As an example of an alternative, governments could commit to subsidizing (e.g. through money and hardware access) existing developers that open themselves up to inspections, which would have some advantages and some disadvantages over the neutrally-sited new project approach.
[9] This is an area with extreme and unusual enough considerations that it seems to break normal heuristics, or at least my normal heuristics. I have personally heard at least minimally plausible arguments made by thoughtful people that openness, antitrust law and competition, government regulation, advocating opposition to lethal autonomous weapons systems, and drawing wide attention to the problems of AI might be bad things, and invasive surveillance, greater corporate concentration, and weaker cyber security might be good things. (To be clear, these were all tentative, weak, but colourable arguments, made as part of exploring the possibility space, not strongly held positions by anyone.) I find all of these very counter-intuitive.
[10] A useful comment from a reviewer on this point: “These problems are related: We desperately need new institutions to house all the important AI strategy work, but we can't know what institutions to build until we've answer more of the foundational questions.”
[11] Credit for the heroic effort of assembling this goes mostly to Matthijs Maas. While I contributed a little, I have myself only read a tiny fraction of these.
[12] fhijobs@philosophy.ox.ac.uk.
[13] Getting in touch is a good action even if you can not or would rather not work at FHI. In my opinion, AI strategy researchers would ideally cluster in one or more research groups in order to advance this agenda as quickly as possible, but there is also some room for remote scholarship. (The AI strategy programme at FHI is currently trying to become the first of these “cluster” research groups, and we are recruiting in this area aggressively.)
[14] I’m personally bad enough at this that my best advice is something like read around in the area, find a topic, and “do magic.” Accordingly, I will tag in Jade Leung again for a suggestion of what a “sensible, useful deliverable of 'disentanglement research' would look like”:
A conceptual model for a particular interface of the AI strategy space, articulating the sub-components, exogenous and endogenous variables of relevance, linkages etc.; An analysis of driver-pressure-interactions for a subset of actors; a deconstruction of a potential future scenario into mutually-exclusive-collectively-exhaustive (MECE) hypotheses.
Ben Garfinkel similarly volunteered to help clarify “by giving an example of a very broad question that seem[s] to require some sort of "detangling" skill:”
What does the space of plausible "AI development scenarios" look like, and how do their policy implications differ?
If AI strategy is "the study of how humanity can best navigate the transition to a world with advanced AI systems," then it seems like it ought to be quite relevant what this transition will look like. To point at two very different possibilities, there might be a steady, piecemeal improvement of AI capabilities -- like the steady, piecemeal improvement of industrial technology that characterized the industrial revolution -- or there might be a discontinuous jump, enabled by sudden breakthroughs or an "intelligence explosion," from roughly present-level systems to systems that are more capable than humans at nearly everything. Or -- more likely -- there might be a transition that doesn't look much like either of these extremes.
Robin Hanson, Eliezer Yudkowsky, Eric Drexler, and others have all emphasized different visions of AI development, but have also found it difficult to communicate the exact nature of their views to one another. (See, for example, the Hanson-Yudkowsky "foom" debate.) Furthermore, it seems to me that their visions don't cleanly exhaust the space, and will naturally be difficult to define given the fact that so many of the relevant concepts--like "AGI," "recursive self-improvement," "agent/tool/goal-directed AI," etc.--are currently so vague.
I think it would be very helpful to have a good taxonomy of scenarios, so that we could begin to make (less ambiguous) statements like, "Policy X would be helpful in scenarios A and B, but not in scenario C," or, "If possible, we ought to try to steer towards scenario A and away from B." AI strategy is not there yet, though.
A related, "entangled" question is: Across different scenarios, what is the relationship between short and medium-term issues (like the deployment of autonomous weapons systems, or the automation of certain forms of cyberattacks) and the long-term issues that are likely to arise as the space of AI capabilities starts to subsume the space of human capabilities? For a given scenario, can these two (rough) categories of issues be cleanly "pulled apart"?
[15] 80,000 hours is experimenting with having a career coach specialize in this area, so you might consider getting in touch with them, or getting in touch with them again, if you might be interested in pursuing this route.
[16] fhijobs@philosophy.ox.ac.uk. This is how I snuck into FHI ~2 years ago, on a 3-week temporary contract as an office manager. I flew from the US on 4 days’ notice for the chance to try to gain fluency in the field. While my case of “working my way up from the mail room” is not likely to be typical (I had a strong CV), or necessarily a good model to encourage (see the next footnote below), it is definitely the case that you can pick up a huge amount through osmosis at FHI, and develop a strong EA career network. This can set you up well for a wise choice of graduate programs or other career direction decisions.
[17] One reviewer cautioned against encouraging a dynamic in which already highly qualified people take junior operations roles with the expectation of transitioning directly into a research position, since this can create awkward dynamics and a potentially unhealthy institutional culture. I think this is probably, or at least plausibly, correct. Accordingly, while I think a junior operations role is great for building skills and orienting yourself, it should probably not be seen as a way of immediately transitioning to strategy research, but treated more as a method for turning post-college uncertainty into a productive plan, while also gaining valuable skills and knowledge, and directly contributing to very important work.
[18] Including locking in a career path continuing in operations. This really is an extremely high-value area for a career, and badly overlooked and neglected.
By "disentanglement research" do you mean establishing a conceptual framework? some overarching structure which lets you see all the individual pieces arranged in a sensible and meaningful way?
Summary:
The AI strategy space is currently bottlenecked by entangled and under-defined research questions that are extremely difficult to resolve, as well as by a lack of current institutional capacity to absorb and utilize new researchers effectively.
Accordingly, there is very strong demand for people who are good at this type of “disentanglement” research and well-suited to conduct it somewhat independently. There is also demand for some specific types of expertise which can help advance AI strategy and policy. Advancing this research even a little bit can have massive multiplicative effects by opening up large areas of work for many more researchers and implementers to pursue.
Until the AI strategy research bottleneck clears, many areas of concrete policy research and policy implementation are necessarily on hold. Accordingly, a large majority of people interested in this cause area, even extremely talented people, will find it difficult to contribute directly, at least in the near term.
If you are in this group whose talents and expertise are outside of these narrow areas, and want to contribute to AI strategy, I recommend you build up your capacity and try to put yourself in an influential position. This will set you up well to guide high-value policy interventions as clearer policy directions emerge. Try not to be discouraged or dissuaded from pursuing this area by the current low capacity to directly utilize your talent! The level of talent across a huge breadth of important areas I have seen from the EA community in my role at FHI is astounding and humbling.
Depending on how slow these “entangled” research questions are to unjam, and on the timelines of AI development, there might be a very narrow window of time in which it will be necessary to have a massive, sophisticated mobilization of altruistic talent. This makes being prepared to mobilize effectively and take impactful action on short notice extremely valuable in expectation.
In addition to strategy research, operations work in this space is currently highly in demand. Experienced managers and administrators are especially needed. More junior operations roles might also serve as a good orientation period for EAs who would like to take some time after college before either pursuing graduate school or a specific career in this space. This can be a great way to tool up while we as a community develop insight on strategic and policy direction. Additionally, successful recruitment in this area should help with our institutional capacity issues substantially.
(3600 words. Reading time: approximately 15 minutes with endnotes.)
(Also posted to Effective Altruism Forum here.)
Introduction
Intended audience: This post is aimed at EAs and other altruistic types who are already interested in working in AI strategy and AI policy because of its potential large scale effect on the future.[1]
Epistemic status: The below represents my current best guess at how to make good use of human resources given current constraints. I might be wrong, and I would not be surprised if my views changed with time. That said, my recommendations are designed to be robustly useful across most probable scenarios. These are my personal thoughts, and do not necessarily represent the views of anyone else in the community or at the Future of Humanity Institute.[2] (For some areas where reviewers disagreed, I have added endnotes explaining the disagreement.) This post is not me acting in any official role, this is just me as an EA community member who really cares about this cause area trying to contribute my best guess for how to think about and cultivate this space.
Why my thoughts might be useful: I have been the primary recruitment person at the Future of Humanity Institute (FHI) for over a year, and am currently the project manager for FHI’s AI strategy programme. Again, I am not writing this in either of these capacities, but being in these positions has given me a chance to see just how talented the community is, to spend a lot of time thinking about how to best utilize this talent, and has provided me some amazing opportunities to talk with others about both of these things.
Definitions
There are lots of ways to slice this space, depending on what exactly you are trying to see, or what point you are trying to make. The terms and definitions I am using are a bit tentative and not necessarily standard, so feel free to discard them after reading this. (These are also not all of the relevant types or areas of research or work, but the subset I want to focus on for this piece.)[3]
AI strategy research:[4] the study of how humanity can best navigate the transition to a world with advanced AI systems (especially transformative AI), including political, economic, military, governance, and ethical dimensions.
AI policy implementation is carrying out the activities necessary to safely navigate the transition to advanced AI systems. This includes an enormous amount of work that will need to be done in government, the political sphere, private companies, and NGOs in the areas of communications, fund allocation, lobbying, politics, and everything else that is normally done to advance policy objectives.
Operations (in support of AI strategy and implementation) is building, managing, growing, and sustaining all of the institutions and institutional capacity for the organizations advancing AI strategy research and AI policy implementation. This is frequently overlooked, badly neglected, and extremely important and impactful work.
Disentanglement research:[5] This is a squishy made-up term I am using only for this post that is sort of trying to gesture at a type of research that involves disentangling ideas and questions in a “pre-paradigmatic” area where the core concepts, questions, and methodologies are under-defined. In my mind, I sort of picture this as somewhat like trying to untangle knots in what looks like an enormous ball of fuzz. (Nick Bostrom is a fantastic example of someone who is excellent at this type of research.)
To quickly clarify, as I mean to use the terms, AI strategy research is an area or field of research, a bit like quantum mechanics or welfare economics. Disentanglement research I mean more as a type of research, a bit like quantitative research or conceptual analysis, and is defined more by the character of the questions researched and the methods used to advance toward clarity. Disentanglement is meant to be field agnostic. The relationship between the two is that, in my opinion, AI strategy research is an area that at its current early stage, demands a lot of disentanglement-type research to advance.
The current bottlenecks in the space (as I see them)
Disentanglement research is needed to advance AI strategy research, and is extremely difficult
Figuring out a good strategy for approaching the development and deployment of advanced AI requires addressing enormous, entangled, under-defined questions, which exist well outside of most existing research paradigms. (This is not all it requires, but it is a central part of it at its current stage of development.)[6] This category includes the study of multi-polar versus unipolar outcomes, technical development trajectories, governance design for advanced AI, international trust and cooperation in the development of transformative capabilities, info/attention/reputation hazards in AI-related research, the dynamics of arms races and how they can be mitigated, geopolitical stabilization and great power war mitigation, research openness, structuring safe R&D dynamics, and many more topics.[7] It also requires identifying other large, entangled questions such as these to ensure no crucial considerations in this space are neglected.
From my personal experience trying and failing to do good disentanglement research and watching as some much smarter and more capable people have tried and struggled as well, I have come to think of it as a particular skill or aptitude that does not necessarily correlate strongly with other talents or expertise. A bit like mechanical, mathematical, or language aptitude. I have no idea what makes people good at this, or how exactly they do it, but it is pretty easy to identify if it has been done well once the person is finished. (I can appreciate the quality of Nick Bostrom’s work, like I can appreciate a great novel, but how they are created I don’t really understand and can’t myself replicate.) It also seems to be both quite rare and very difficult to identify in advance who will be good at this sort of work, with the only good indicator, as far as I can tell, being past history of succeeding in this type of research. The result is that it is really hard to recruit for, there are very few people doing it full time in the AI strategy space, and this number is far, far fewer than optimal.
The main importance of disentanglement research, as I imagine it, is that it makes questions and research directions clearer and more tractable for other types of research. As Nick Bostrom and others have sketched out the considerations surrounding the development of advanced AI through “disentanglement”, tractable research questions have arisen. I strongly believe that as more progress is made on topics requiring disentanglement in the AI strategy field, more tractable research questions will arise. As these more tractable questions become clear, and as they are studied, strategic direction, and concrete policy recommendations should follow. I believe this then will open up the floodgates for AI policy implementation work.
Domain experts with specific skills and knowledge are also needed
While I think that our biggest need right now is disentanglement research, there are also certain other skills and knowledge sets that would be especially helpful for advancing AI strategy research. This includes expertise in:
Mandarin and/or Chinese politics and/or the Chinese ML community.
International relations, especially in the areas of international cooperation, international law, global public goods, constitution and institutional design, history and politics of transformative technologies, governance, and grand strategy.
Knowledge and experience working at a high level in policy, international governance and diplomacy, and defense circles.
Technology and other types of forecasting.
Quantitative social science, such as economics or analysis of survey data.
Law and/or Policy.
I expect these skills and knowledge sets to help provide valuable insight on strategic questions including governance design, diplomatic coordination and cooperation, arms race dynamics, technical timelines and capabilities, and many more areas.
Until AI strategy advances, AI policy implementation is mostly stalled
There is a wide consensus in the community, with which I agree, that aside from a few robust recommendations,[8] it is important not to act or propose concrete policy in this space prematurely. We simply have too much uncertainty about the correct strategic direction. Do we want tighter or looser IP law for ML? Do we want a national AI lab? Should the government increase research funding in AI? How should we regulate lethal autonomous weapons systems? Should there be strict liability for AI accidents? It remains unclear what are good recommendations. There are path dependencies that develop quickly in many areas once a direction is initially started down. It is difficult to pass a law that is the exact opposite of a previous law recently lobbied for and passed. It is much easier to start an arms race than to stop it. With most current AI policy questions, the correct approach, I believe, is not to use heuristics of unclear applicability to choose positions, even if those heuristics have served well in other contexts,[9] but to wait until the overall strategic picture is clear, and then to push forward with whatever advances the best outcome.
The AI strategy and policy space, and EA in general, is also currently bottlenecked by institutional and operational capacity
This is not as big an immediate problem as the AI strategy bottleneck, but it is an issue, and one that exacerbates the research bottleneck as well.[10] FHI alone will need to fill 4 separate operations roles at senior and junior levels in the next few months. Other organizations in this space have similar shortages. These shortages also compound the research bottleneck as they make it difficult to build effective, dynamic AI strategy research groups. The lack of institutional capacity also might become a future hindrance to the massive, rapid, “AI policy implementation” mobilization which is likely to be needed.
Next actions
First, I want to make clear, that if you want to work in this space, you are wanted in this space. There is a tremendous amount of need here. That said, as I currently see it, because of the low tractability of disentanglement research, institutional constraints, and the effect of both of these things on the progress of AI strategy research, a large majority of people who are very needed in this area, even extremely talented people, will not be able to directly contribute immediately. (This is not a good position we are currently in, as I think we are underutilizing our human resources, but hopefully we can fix this quickly.)
This is why I am hoping that we can build up a large community of people with a broader set of skills, and especially policy implementation skills, who are in positions of influence from which they can mobilize quickly and effectively and take important action once the bottleneck clears and direction comes into focus.
Actions you can take right now
Read all the things! There are a couple of publications in the pipeline from FHI, including a broad research agenda that should hopefully advance the field a bit. Sign up to FHI’s newsletter and the EA newsletter which will have updates as the cause area advances and unfolds. There is also an extensive reading list, not especially narrowly tailored to the considerations of interest to our community, but still quite useful. I recommend skimming it and picking out some specific publications or areas to read more about.[11] Try to skill up in this area and put yourself in a position to potentially advance policy when the time comes. Even if it is inconvenient, go to EA group meet-ups and conferences, read and contribute to the forums and newsletters, keep in the loop. Be an active and engaged community member.
Potential near term roles in AI Strategy
FHI is recruiting, but somewhat capacity limited, and trying to triage for advancing strategy as quickly as possible.
If you have good reason to think you would be good at disentanglement research on AI strategy (likely meaning a record of success with this type of research) or have expertise in the areas listed as especially in demand, please get in touch.[12] I would strongly encourage you to do this even if you would rather not work at FHI, as there are remote positions possible if needed, and other organizations I can refer you to. I would also strongly encourage you to do this even if you are reluctant to stop or put on hold whatever you are currently doing. Please also encourage your friends who likely would be good at this to strongly consider it. If I am correct, the bottleneck in this space is holding back a lot of potentially vital action by many, many people who cannot be mobilized until they have a direction in which to push. (The framers need the foundation finished before they can start.) Anything you can contribute to advancing this field of research will have dramatic force multiplicative effects by “creating jobs” for dozens or hundreds of other researchers and implementers. You should also consider applying for one or both of the AI Macrostrategy roles at FHI if you see this before 29 Sept 2017.[13]
If you are unsure of your skill with disentanglement research, I would strongly encourage you to try to make some independent progress on a question of this type and see how you do. I realize this task itself is a bit under-defined, but that is also really part of the problem space itself, and the thing you are trying to test your skills with. Read around in the area, find something sticky you think you might be able to disentangle, and take a run at it.[14] If it goes well, whether or not you want to get into the space immediately, please send it in.
If you feel as though you might be a borderline candidate because of your relative inexperience with an area of in-demand expertise, you might consider trying to tool up a bit in the area, or applying for an internship. You might also err on the side of sending in a CV and cover letter just in case you are miscalibrated about your skill compared to other applicants. That said, again, do not think that you not being immediately employed is any reflection of your expected value in this space! Do not be discouraged, please stay interested, and continue to pursue this!
Preparation for mobilization
Being a contributor to this effort, as I imagine it, requires investing in yourself, your career, and the community, while positioning yourself well for action once the bottleneck unjams and and robust strategic direction is clearer.
I also highly recommend investing in building up your skills and career capital. This likely means excelling in school, going to graduate school, pursuing relevant internships, building up your CV, etc. Invest heavily in yourself. Additionally, stay in close communication with the EA community and keep up to date with opportunities in this space as they develop. (Several people are currently looking at starting programs specifically to on-ramp promising people into this space. This is one reason why signing up to the newsletters might be really valuable, so that opportunities are not missed.) To repeat myself from above, attend meet-ups and conferences, read the forums and newsletters, and be active in the community. Ideally this cause area will become a sub-community within EA and a strong self-reinforcing career network.
A good way to determine how to prepare and tool up for a career in either AI policy research or implementation is to look at the 80,000 Hours’ Guide to working in AI policy and strategy. Fields of study that are likely to be most useful for AI policy implementation include policy, politics and international relations, quantitative social sciences, and law.
Especially useful is finding roles of influence or importance, even with low probability but high expected value, within (especially the US federal) government.[15] Other potentially useful paths include non-profit management, project management, communications, public relations, grantmaking, policy advising at tech companies, lobbying, party and electoral politics and advising, political “staffing,” or research within academia, thinks tanks, or large corporate research groups especially in the areas of machine learning, policy, governance, law, defense, and related. A lot of information about the skills needed for various sub-fields within this area are available at 80,000 Hours.
Working in operations
Another important bottleneck in this space, though smaller in my estimation than the main bottleneck, is in institutional capacity within this currently tiny field. As mentioned already above, FHI needs to fill 4 separate operations roles at senior and junior levels in the next few months. (We are also in need of a temporary junior-level operations person immediately, if you are a UK citizen, consider getting in touch about this!)[16][17] Other organizations in this space have similar shortages. If you are an experienced manager, administrator, or similar, please consider applying or getting in touch for our senior roles. Alternatively, if you are freshly out of school, but have some proven hustle (especially proven by extensive extracurricular involvement, such as running projects or groups) and would potentially like to take a few years to advance this cause area before going to graduate school or locking in a career path, consider applying for a junior operations position, or get in touch.[18] Keep in mind that operations work at an organization like FHI can be a fantastic way to tool up and gain fluency in this space, orient yourself, discover your strengths and interests, and make contacts, even if one intends to move on to non-operations roles eventually.
Conclusion
The points I hope you can take away in approximate order of importance:
1) If you are interested in advancing this area, stay involved. Your expected value is extremely high, even if there are no excellent immediate opportunities to have a direct impact. Please join this community, and build up your capacity for future research and policy impact in this space.
2) If you are good at “disentanglement research” please get in touch, as I think this is our major bottleneck in the area of AI strategy research, and is preventing earlier and broader mobilization and utilization of our community’s talent.
3) If you are strong or moderately strong in key high-value areas, please also get in touch. (Perhaps err to the side of getting in touch if you are unsure.)
4) Excellent things to do to add value to this area, in expectation, include:
a) Investing in your skills and career capital, especially in high-value areas, such as studying in-demand topics.
b) Building a career in a position of influence (especially in government, global institutions, or in important tech firms.)
c) Helping to build up this community and its capacity, including building a strong and mutually reinforcing career network among people pursuing AI policy implementation from an EA or altruistic perspective.
5) Also of very high value is operations work and other efforts to increase institutional capacity.
Thank you for taking the time to read this. While it is very unfortunate that the current ground reality is, as far as I can tell, not well structured for immediate wide mobilization, I am confident that we can do a great deal of preparatory and positioning work as a community, and that with some forceful pushing on these bottlenecks, we can turn this enormous latent capacity into extremely valuable impact.
Let’s getting going “doing good together” as we navigate this difficult area, and help make a tremendous future!
Endnotes:
[1] For those of you not in this category who are interested in seeing why you might want to be, I recommend this short EA Global talk, the Policy Desiderata paper, and OpenPhil’s analysis. For a very short consideration on why the far future matters, I recommend this very short piece, and for a quick fun primer on AI as transformative I recommend this. Finally, once the hook is set, the best resource remains Superintelligence.
[2] Relatedly, I want to thank Miles Brundage, Owen Cotton-Barratt, Allan Dafoe, Ben Garfinkel, Roxanne Heston, Holden Karnofsky, Jade Leung, Kathryn Mecrow, Luke Muehlhauser, Michael Page, Tanya Singh, and Andrew Snyder-Beattie for their comments on early drafts of this post. Their input dramatically improved it. That said, again, they should not be viewed as endorsing anything in this. All mistakes are mine. All views are mine.
[3] There are some interesting tentative taxonomies and definitions of the research space floating around. I personally find the following, quoting from a draft document by Allan Dafoe, especially useful:
AI strategy [can be divided into]... four complementary research clusters: the technical landscape, AI politics, AI governance, and AI policy. Each of these clusters characterizes a set of problems and approaches, within which the density of conversation is likely to be greater. However, most work in this space will need to engage the other clusters, drawing from and contributing high-level insights. This framework can perhaps be clarified by analogy to the problem of building a new city. The technical landscape examines the technical inputs and constraints to the problem, such as trends in the price and strength of steel. Politics considers the contending motivations of various actors (such as developers, residents, businesses), the possible mutually harmful dynamics that could arise and strategies for cooperating to overcome them. Governance involves understanding the ways that infrastructure, laws, and norms can be used to build the best city, and proposing ideal masterplans of these to facilitate convergence on a common good vision. The policy cluster involves crafting the actual policies to be implemented to build this city.
In a comment on this draft, Jade Leung pointed out what I think is an important implicit gap in the terms I am using, and highlights the importance of not treating these as either final, comprehensive, or especially applicable outside of this piece:
There seems to be a gap between [AI policy implementation] and 'AI strategy research' - where does the policy research feed in? I.e. the research required to canvas and analyse policy mechanisms by which strategies are most viably realised, prior to implementation (which reads here more as boots-on-the-ground alliance building, negotiating, resource distribution etc.)
[4] Definition lightly adapted from Allan Dafoe and Luke Muehlhauser.
[5] This idea owes a lot to conversations with Owen Cotton-Barratt, Ben Garfinkel, and Michael Page.
[6] I did not get a sense that any reviewer necessarily disagreed that this is a fair conceptualization of a type of research in this space, though some questioned its importance or centrality to current AI strategy research. I think the central disagreement here is on how many well-defined and concrete questions there are left to answer at the moment, how far answering them is likely to go in bringing clarity to this space and developing robust policy recommendations, and the relative marginal value of addressing these existing questions versus producing more through disentanglement of the less well defined areas.
[7] One commenter did not think these were a good sample of important questions. Obviously this might be correct, but in my opinion, these are absolutely among the most important questions to gain clarity on quickly.
[8] My personal opinion is that there are only three or maybe four robust policy-type recommendations we can make to governments at this time, given our uncertainty about strategy: 1) fund safety research, 2) commit to a common good principle, and 3) avoid arms races. The fourth suggestion is both an extension of the other three and tentative, but is something like: fund joint intergovernmental research projects located in relatively geopolitically neutral countries with open membership and a strong commitment to a common good principle.
I should note that this point was also flagged as potentially controversial by one reviewer. Additionally, Miles Brundage, quoted below, had some useful thoughts related to my tentative fourth suggestion:
In general, detailed proposals at this stage are unlikely to be robust due to the many gaps in our strategic and empirical knowledge. We "know" arms races are probably bad but there are many imaginable ways to avoid or mitigate them, and we don't really know what the best approach is yet. For example, launching big new projects might introduce various opportunities for leakage of information that weren't there before, and politicize the issue more than might be optimal as the details are worked out. As an example of an alternative, governments could commit to subsidizing (e.g. through money and hardware access) existing developers that open themselves up to inspections, which would have some advantages and some disadvantages over the neutrally-sited new project approach.
[9] This is an area with extreme and unusual enough considerations that it seems to break normal heuristics, or at least my normal heuristics. I have personally heard at least minimally plausible arguments made by thoughtful people that openness, antitrust law and competition, government regulation, advocating opposition to lethal autonomous weapons systems, and drawing wide attention to the problems of AI might be bad things, and invasive surveillance, greater corporate concentration, and weaker cyber security might be good things. (To be clear, these were all tentative, weak, but colourable arguments, made as part of exploring the possibility space, not strongly held positions by anyone.) I find all of these very counter-intuitive.
[10] A useful comment from a reviewer on this point: “These problems are related: We desperately need new institutions to house all the important AI strategy work, but we can't know what institutions to build until we've answered more of the foundational questions.”
[11] Credit for the heroic effort of assembling this goes mostly to Matthijs Maas. While I contributed a little, I have myself only read a tiny fraction of these.
[12] fhijobs@philosophy.ox.ac.uk.
[13] Getting in touch is a good action even if you cannot or would rather not work at FHI. In my opinion, AI strategy researchers would ideally cluster in one or more research groups in order to advance this agenda as quickly as possible, but there is also some room for remote scholarship. (The AI strategy programme at FHI is currently trying to become the first of these “cluster” research groups, and we are recruiting in this area aggressively.)
[14] I’m personally bad enough at this that my best advice is something like: read around in the area, find a topic, and “do magic.” Accordingly, I will tag in Jade Leung again for a suggestion of what a “sensible, useful deliverable of 'disentanglement research' would look like”:
A conceptual model for a particular interface of the AI strategy space, articulating the sub-components, exogenous and endogenous variables of relevance, linkages etc.; An analysis of driver-pressure-interactions for a subset of actors; a deconstruction of a potential future scenario into mutually-exclusive-collectively-exhaustive (MECE) hypotheses.
Ben Garfinkel similarly volunteered to help clarify “by giving an example of a very broad question that seem[s] to require some sort of "detangling" skill:”
What does the space of plausible "AI development scenarios" look like, and how do their policy implications differ?
If AI strategy is "the study of how humanity can best navigate the transition to a world with advanced AI systems," then it seems like it ought to be quite relevant what this transition will look like. To point at two very different possibilities, there might be a steady, piecemeal improvement of AI capabilities -- like the steady, piecemeal improvement of industrial technology that characterized the industrial revolution -- or there might be a discontinuous jump, enabled by sudden breakthroughs or an "intelligence explosion," from roughly present-level systems to systems that are more capable than humans at nearly everything. Or -- more likely -- there might be a transition that doesn't look much like either of these extremes.
Robin Hanson, Eliezer Yudkowsky, Eric Drexler, and others have all emphasized different visions of AI development, but have also found it difficult to communicate the exact nature of their views to one another. (See, for example, the Hanson-Yudkowsky "foom" debate.) Furthermore, it seems to me that their visions don't cleanly exhaust the space, and will naturally be difficult to define given the fact that so many of the relevant concepts--like "AGI," "recursive self-improvement," "agent/tool/goal-directed AI," etc.--are currently so vague.
I think it would be very helpful to have a good taxonomy of scenarios, so that we could begin to make (less ambiguous) statements like, "Policy X would be helpful in scenarios A and B, but not in scenario C," or, "If possible, we ought to try to steer towards scenario A and away from B." AI strategy is not there yet, though.
A related, "entangled" question is: Across different scenarios, what is the relationship between short and medium-term issues (like the deployment of autonomous weapons systems, or the automation of certain forms of cyberattacks) and the long-term issues that are likely to arise as the space of AI capabilities starts to subsume the space of human capabilities? For a given scenario, can these two (rough) categories of issues be cleanly "pulled apart"?
[15] 80,000 Hours is experimenting with having a career coach specialize in this area, so consider getting in touch with them (or getting in touch with them again) if you are interested in pursuing this route.
[16] fhijobs@philosophy.ox.ac.uk. This is how I snuck into FHI ~2 years ago, on a 3-week temporary contract as an office manager. I flew from the US on 4 days' notice for the chance to try to gain fluency in the field. While my case of “working my way up from the mail room” is not likely to be typical (I had a strong CV), or necessarily a good model to encourage (see the next footnote below), it is definitely the case that you can pick up a huge amount through osmosis at FHI, and develop a strong EA career network. This can set you up well for a wise choice of graduate programs or other career direction decisions.
[17] One reviewer cautioned against encouraging a dynamic in which already highly qualified people take junior operations roles with the expectation of transitioning directly into a research position, since this can create awkward dynamics and a potentially unhealthy institutional culture. I think this is probably, or at least plausibly, correct. Accordingly, while I think a junior operations role is great for building skills and orienting yourself, it should probably not be seen as a way of immediately transitioning to strategy research, but treated more as a method for turning post-college uncertainty into a productive plan, while also gaining valuable skills and knowledge, and directly contributing to very important work.
[18] Including locking in a career path continuing in operations. This really is an extremely high-value area for a career, and badly overlooked and neglected.