My new model is that the President's interaction with science is largely to take concrete ideas floating around in the environment that are ready for their time, and push them over the edge into actually being built by the US private sector, or into actually substantially informing government policy. This is similar to the notion that scientific ideas come about when the environment is ready for them (Newton and Leibniz both discovering calculus at the same time).
Reminded me of this:
"Only a crisis - actual or perceived - produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable." (Milton Friedman)
I'd like to coin a new term for that thing which the US President has a lot of: coordination capital.
This seems to require some combination of:
____________________________________________________________________________________________
Some properties
Consider the priest Kalil mentions. He's able to declare people married because people think he is. It's the equilibrium, and everyone benefits from maintaining it. But if he tests his powers and starts declaring strange marriages not endorsed by the local social norm, the equilibrium might shift. Similarly, if the president tries to rally companies around a stag hunt, but does so poorly and some choose rabbit, they're all more likely to choose rabbit in future.
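The stag-hunt dynamic can be made concrete with a toy payoff matrix (the numbers below are illustrative assumptions, not from the post): hunting stag together pays best, hunting stag alone pays worst, and rabbit is the safe fallback. Both all-stag and all-rabbit come out as stable equilibria; coordination capital is what lets a leader move players from the safe one to the good one.

```python
# Minimal stag-hunt sketch. Payoff numbers are illustrative assumptions.
PAYOFFS = {  # (my_choice, other_choice) -> my payoff
    ("stag", "stag"): 4,
    ("stag", "rabbit"): 0,
    ("rabbit", "stag"): 3,
    ("rabbit", "rabbit"): 3,
}

OTHER = {"stag": "rabbit", "rabbit": "stag"}

def is_nash(a, b):
    """A profile is a pure-strategy Nash equilibrium if neither player
    gains by unilaterally switching their choice."""
    return (PAYOFFS[(a, b)] >= PAYOFFS[(OTHER[a], b)]
            and PAYOFFS[(b, a)] >= PAYOFFS[(OTHER[b], a)])

equilibria = [(a, b)
              for a in ("stag", "rabbit")
              for b in ("stag", "rabbit")
              if is_nash(a, b)]
print(equilibria)  # both all-stag and all-rabbit are stable
```

Note that once everyone is at all-rabbit, no individual gains by switching alone, which is why a failed rally makes future rallies harder.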
The more plan executions you successfully coordinate, the more willing future projects will be to approach you with their plans.
If you have a Schelling coordination point, and someone finds it bad and declares they will build a new, better coordination point, there is risk that you'll end up not with two but with zero coordination points. Similarly, coordination capital is scarce and it can result in lock-in scenarios if held by the wrong entities.
____________________________________________________________________________________________
Background and implications
Part of the reason I want a term for this thing is that I've been experiencing a lack of it when working on coordination infrastructure for the EA and x-risk communities. I'm trying to build a forecasting platform and community to (among other things) build common knowledge of some timelines considerations, to coordinate around them.
However, to get people to use it, I can't just call up Holden Karnofsky, Nick Bostrom, and Nate Soares in order to kickstart the thing and make it a de facto Schelling point. Rather, I have to do some amount of "hustling", and things that don't scale -- finding people in the community with a natural interest in the stuff, reaching out to them personally, putting in legwork here and there to keep discussions going and add a missing piece to a quantitative model... and trying to do this enough to hit some kind of escape velocity.
I don't have enough coordination capital, so I try to compensate by other means. Another example is Uber -- they're trying to move riders and drivers to a new equilibrium, and since they didn't have much coordination capital initially, this requires them to burn a lot of cash/free energy.
Writing this, I'm a bit worried that all the leaders of the EA/x-risk communities are leaders of particular organisations with an object-level mission. They're primarily incentivised to achieve their organisation's mission, and there is no one who, like the president, simply serves to coordinate the community around the execution of plans. This suggests this function might be underutilised on the margin.
My sense is that the state of understanding of how transformative AI will be built and what impact it will have on the world is sufficiently low-resolution and confused that we have no project or policy recommendations for the government, and will not be able to produce any until we see further work that helps conceptualise this space.
To point to the type of thing I'm thinking about, here's a bunch of work that seems centrally useful to me (list is not exhaustive):
I'm interested to see whether CSET, GovAI, OpenPhil, AI Impacts, or OpenAI are able to produce anything that helps conceptualise the strategy space (where these teams have produced public output, I've mostly not read it yet).
I highly recommend this piece by Dominic Cummings on how government works in practice. He is certainly optimistic that it could accomplish a lot, but my interpretation of the Kalil quotes is pretty much the opposite of yours. I'll listen to the podcast to get more context.
Thx! And what’s your impression - that the White House mostly focuses on innovation and execution of plans rather than coordinating other actors?
I largely agree with Kalil's assertion that the White House is mostly about coordinating other actors.
I think it is mainly a terrible failure in this regard, chiefly because it fails to account for the fact that coordinating other actors is itself a plan that requires execution. The 15-minutes-of-attention standard only works in cases where the right answer is for everyone to do what they would normally do, just under the same banner this time.
So what do we do when we need something that is currently wrong to change? What about cases where it's a difficult challenge with an exacting standard for success? These are cases where the normal things other actors do are either specifically wrong, or not good enough; how is the White House supposed to coordinate them against themselves with a press conference and an executive order?
So while the coordinating-actors and dividing-attention frames are useful, both for us and from within the White House (or another country's leadership), I also feel they could do a much better job of approaching coordination as a strategy that requires execution, and chunking their attention to that end.
nods FWIW I think it’s plausible I exaggerated the levels of competence in government and could be persuaded to edit wording; my main intention with the post was to discuss what type signature the top of government in fact has.
The White House spends the vast majority of its resources putting out false press releases. My impression is that that's what Kalil did, too. Probably he shifted things in a positive direction, but the shape of the marginal effort doesn't have much to do with the shape of the total effort. That is, how much time he spent shaping CDC actions vs NIH funding vs conferences of outsiders doesn't tell us much about how much of his useful actions fell in those categories. He had practically no direct power, so in a sense the CDC and NIH were outsiders to be coordinated, too.
Cummings burnt a lot of bridges by saying important negative things. I'm suspicious of Kalil sounding so positive. The first hour of the podcast gave me an extremely negative view of him, but then he mentioned a lot of trade-offs and strategies that seemed valuable regardless of the average level of government function. Still, I worry that he sold his soul to function in this environment and lost the ability to tell good projects from bad.
This matches the pattern for at least a few high-profile American technology successes, e.g. Apollo and the Manhattan Project.
I note that Kalil did not speak to results per se, but rather considered the mark of success to be a lot of energy directed towards whatever the goal was. It is useful to think about all the things that are considered successes from the government's perspective while having lots of operational failures, e.g. recent wars or the ACA.
The argument for the difference in these cases is largely that exceptional leaders were chosen to lead them; after all, Europe also had a version of the Apollo program which failed, and the Nazi bomb program failed. They did not merely come in second, mind you - they failed completely in their aims. So who would be the Mueller or Groves for the AI safety program?
I find it strange to say that we don't have any plan. Surely the government could set up scholarships, or a research institute, or some kind of committee to look into this?
There's a reason why "create some kind of committee to look into this" is often jokingly referred to as a way to kill a proposal: you can say that about every topic.
I listened to the 80,000 Hours podcast with Tom Kalil, who spent 16 years in the White House, most recently as Deputy Director of the Office of Science and Technology Policy. Kalil seems skilled at evaluating concrete scientific plans to offer the president and at finding the path of least resistance through government to effect those plans, though he is not himself someone with deep technical understanding of any single domain.
One key idea I took from the podcast was that his main use of the executive branch of government is as a coordination mechanism. I moved away from thinking of the President as an expert who makes decisions like a CEO, and much more as an individual with immense coordination power trying his best to take any concrete plans given to him and coordinate the country around executing on them. That is, not someone who comes up with plans, not someone who executes on the plans, but someone who coordinates people to execute the concrete plans that are waiting to be picked up and run with.
Below are relevant and very interesting quotes, followed by a few more updates I made listening to the podcast.
Key Quotes
Kalil also talks about how, in his role as Deputy Director of the Office of Science and Technology Policy, he helped raise the staff count from 40 to over 100 during the Obama administration. He just gets to hire people who are excited about an idea and want to make it happen, and then they make it happen using the coordination power of the executive office. Here's a prominent example:
The next quote is about how the core goal of the Office of Science and Technology Policy is to take the necessary steps to get the private sector to build new tech:
This final quote is an example of the coordination power of the President.
There is a lot more fascinating discussion in the interview, especially Kalil's comments on using financial prizes to incentivise science+tech in areas like education and poverty.
Updates
My new model is that the President's interaction with science is largely to take concrete ideas floating around in the environment that are ready for their time, and push them over the edge into actually being built by the US private sector, or into actually substantially informing government policy. This is similar to the notion that scientific ideas come about when the environment is ready for them (Newton and Leibniz both discovering calculus at the same time). There are executable plans floating around in the ether, and the President keeps getting handed them and sets them off. His department is not an originator of new ideas; it coordinates the execution of existing ones. (And there's a natural frame from which this is the correct marginal use of attention from the President: compare 15 minutes per project versus spending a week becoming an expert in one and then executing it himself.)
I’ve updated positively on the tractability of gaining influence within the government and being able to use it on timescales of 4-8 years. (I expect I will likely update further when I read Dominic Cummings’s blog posts on UK politics, though I’m not sure how strongly.) Overall I think influence in government, if you’re ambitious and well-connected and have a very concrete vision, is likely quite a real action one can take. I expect that from the perspective of government there is a lot of low-hanging fruit to be picked.
I updated negatively on the usefulness of interacting with this part of government in the short-to-medium term. My sense is that the state of understanding of how transformative AI will be built and what impact it will have on the world is sufficiently low-resolution and confused that we have no project or policy recommendations for the government, and will not be able to offer anything until we see further work that helps conceptualise this space. Listening to the podcast tells me that if you get 15 minutes to talk to the President about x-risk today, you are wasting his time, because we have no concrete plan that needs executing if only we could coordinate the major AI tech companies. We have no R&D projects that need funding. We have no nuanced AI-development policies for global powers to agree to. I’m pretty sure that there are people in this community who can coordinate Elon Musk and Demis Hassabis or whomever else, should we have an actionable plan, but the current state is that we have no plan to offer.