All-around excellent back-and-forth, I thought, and a good look back at what the Biden administration was thinking about the future of AI.
An excerpt:
[Ben Buchanan, Biden AI adviser:] What we’re saying is: We were building a foundation for something that was coming that was not going to arrive during our time in office and that the next team would have to, as a matter of American national security — and, in this case, American economic strength and prosperity — address.
[Ezra Klein, NYT:] This gets to something I find frustrating in the policy conversation about A.I.
You start the conversation about how the most transformative technology — perhaps in human history — is landing in a two- to three-year time frame. And you say: Wow, that seems like a really big deal. What should we do?
That’s when things get a little hazy. Maybe we just don’t know. But what I’ve heard you kind of say a bunch of times is: Look, we have done very little to hold this technology back. Everything is voluntary. The only thing we asked was a sharing of safety data.
Now in come the accelerationists. Marc Andreessen has criticized you guys extremely straightforwardly.
Is this policy debate about anything? Is it just the sentiment of the rhetoric? If it’s so [expletive] big, but nobody can quite explain what it is we need to do or talk about — except for maybe export chip controls — are we just not thinking creatively enough? Is it just not time?
I think there should be an intellectual humility here. Before you take a policy action, you have to have some understanding of what it is you’re doing and why.
So it is entirely intellectually consistent to look at a transformative technology, draw the lines on the graph and say that this is coming pretty soon, without having the 14-point plan of what we need to do in 2027 or 2028.
Chip controls are unique in that this is a robustly good thing that we could do early to buy the space I talked about before. But I also think that we tried to build institutions, like the A.I. Safety Institute, that would set the new team up, whether it was us or someone else, for success in managing the technology.
Now that it’s them, they will have to decide, as the technology comes on board, how we want to calibrate this kind of regulation.
What kinds of decisions do you think they will have to make in the next two years?
...
I'm still confused enough about consciousness that I can only directionally and approximately agree, but I do agree with that.
It gets really fun when the same individual holds multiple titles with conflicting obligations, and ends up doing things like approving and then vetoing the same measure while wearing different hats. I also think it's unfortunate that we seem to have gotten far too intolerant of people doing this compared to a few decades or generations ago. We're less willing to separate individuals from the roles they're enacting in public life, and that makes many critical capabilities harder.