An all-around excellent back-and-forth, I thought, and a good look back at what the Biden admin was thinking about the future of AI.

An excerpt:

[Ben Buchanan, Biden AI adviser:] What we’re saying is: We were building a foundation for something that was coming that was not going to arrive during our time in office and that the next team would have to, as a matter of American national security — and, in this case, American economic strength and prosperity — address.

[Ezra Klein, NYT:] This gets to something I find frustrating in the policy conversation about A.I.

You start the conversation about how the most transformative technology — perhaps in human history — is landing in a two- to three-year time frame. And you say: Wow, that seems like a really big deal. What should we do?

That’s when things get a little hazy. Maybe we just don’t know. But what I’ve heard you kind of say a bunch of times is: Look, we have done very little to hold this technology back. Everything is voluntary. The only thing we asked was a sharing of safety data.

Now in come the accelerationists. Marc Andreessen has criticized you guys extremely straightforwardly.

Is this policy debate about anything? Is it just the sentiment of the rhetoric? If it’s so [expletive] big, but nobody can quite explain what it is we need to do or talk about — except for maybe export chip controls — are we just not thinking creatively enough? Is it just not time? Match the calm, measured tone of this conversation with our starting point.

[Buchanan:] I think there should be an intellectual humility here. Before you take a policy action, you have to have some understanding of what it is you're doing and why.

So it is entirely intellectually consistent to look at a transformative technology, draw the lines on the graph and say that this is coming pretty soon, without having the 14-point plan of what we need to do in 2027 or 2028.

 

Chip controls are unique in that this is a robustly good thing that we could do early to buy the space I talked about before. But I also think that we tried to build institutions, like the A.I. Safety Institute, that would set the new team up, whether it was us or someone else, for success in managing the technology.

Now that it’s them, they will have to decide as the technology comes on board how we want to calibrate this under regulation.

[Klein:] What kinds of decisions do you think they will have to make in the next two years?

...


 





12 comments

The Government Knows A.G.I. Is Coming

Well, clearly it doesn't, if the reaction is "uhhh what do we do? do we do anything? hurr durr" instead of "holy fuck we must nationalize/ban this before the AGI labs overthrow/kill us all". Even if you completely dismiss the idea of the AI taking over, the AGI companies are already literally saying they're going to build God and upend the existing balance of power. The only model under which no decisive, immediate action is warranted is if you don't in fact appreciate the gravity of the situation.[1]

The government hasn't a clue.

  1. ^

    At least, if we assume that the AGI labs' statements are accurate and truthful, and we are about to get to AGI and then ASI. On which I'm personally very skeptical. But I don't think a reasonable person can be skeptical enough to think that said decisive action isn't warranted. Not placing at least 10% on ASI by 2028 seems very poorly calibrated, and that's risk enough.

This is probably correct, but this is also a report about the previous administration.

Normally, there is a lot of continuity in institutional knowledge between administrations, but this transition is an exception: the new admin has decided to deliberately break continuity as much as it can, which is very unusual.

And with the new admin, it's really difficult to say what they think. Vance publicly expresses an opinion worthy of Zuck, only more radical (gas pedal to the floor, forget about brakes). He is someone who believes, at the same time, that 1) AI will be extremely powerful, so all this emphasis is justified, and 2) no safety measures at all are required, so accelerate as fast as possible (https://www.lesswrong.com/posts/qYPHryHTNiJ2y6Fhi/the-paris-ai-anti-safety-summit).

Perhaps he does not care about having a consistent world model, or he might think something different from what he publicly expresses. But he does sound like the CEO of a particularly reckless AI lab.

"The government" is too large, diffuse, and incoherent of a collective entity to be straightforwardly said to know or not know anything more complicated on most topics than a few headlines. I am certain there are individuals and teams and maybe agencies within the government that understand, and others that are so far from understanding they wouldn't know where to begin learning.

(epistemic status: controversial) Governments are conscious. They perceive events, filter those perceptions for relevance, form stable patterns of topics in attention ("thoughts"), respond to events in ways that show intention to pursue goals, remember previous events, model other countries as pursuing goals, and communicate with those countries. And in many of these cases the representatives involved do not share the beliefs they communicate. People are the neurons of a country, and governments are their brains.

I'm still confused enough about consciousness that I can only directionally and approximately agree, but I do agree with that.

It gets really fun when the same individual holds multiple titles with conflicting obligations, and ends up doing things like approving and then vetoing the same measure while wearing different hats. I also think it's unfortunate that we seem to have gotten way too intolerant of people doing this compared to a few decades or generations ago. We're less willing to separate individuals from the roles they're enacting in public life, and that makes many critical capabilities harder.

I disagree. The government can be said to definitively "know" something if the Eye of Sauron turns its gaze upon that thing.

The Eye of Sauron turned its gaze upon the Fellowship, and still didn't know that they'd actually try to destroy the ring instead of use it.

Less abstractly, I agree with you in principle, and I understand that many examples of the phenomena you referenced do exist. But there are also a large number of examples of government turning its eye on a thing and, with the best of intentions, completely butchering whatever it wanted to do by failing to address the critical dynamics of the issue. And then not acknowledging or fixing the mistake, often for many decades. Some US examples, top of mind:

  • Banning supersonic flight, so it doesn't matter whether we solve sonic booms, thereby ensuring no one bothers doing further research.
  • Setting minimum standards for modular/mobile homes in 1976 and, in the process, making bizarre rules that led people to see them as low-class, preventing what had been the fastest-growing and most cost-effective new housing segment from moving up to higher-quality and larger structures, and making many of them ineligible for traditional mortgages.
  • Almost everything about the Jones Act.
  • Almost everything about the NRC.
  • A large fraction of everything about the post-1960s FDA.
  • A substantial fraction of all zoning rules that increase costs, prevent the housing stock from meeting residents' needs, keep people from installing the best available materials/products/technologies to make their homes efficient/sustainable/comfortable, and don't actually contribute to safety or even community aesthetics and property values.

Guys, governments totally solve these. It just takes veeery long. But what do you expect? The thought processes of individual humans already take years (just think how long it takes for new technologies to be adopted), and that despite individual thoughts lasting only about 100 ms. The duration of a single thought of a government is maybe a day. It is a wonder that governments can learn during a lifetime at all.

Yes, this is exactly what I do expect. There are many problems for which this is a sufficient or even good approach. There are other problems for which it is not. And there are lessons that (most?) governments seem incapable of learning (often for understandable or predictable reasons) even after centuries or millennia. This is why I specified that I don't think we can straightforwardly say the government does or does not know a complicated thing. Does the government know how to fight a war? Does it know how to build a city? How to negotiate and enact a treaty? I don't think that kind of question has a binary yes or no answer. I'd probably round to "no" if I had to choose, in the sense that I don't trust any particular currently existing government to reliably possess and execute that capability.

I don't know if I've ever commented this on LW, but elsewhere I've been known to jokingly-but-a-little-seriously say that once we solve mortality (in the versions of the future where humans are still in charge) we might want to require presidents to be at least a few centuries or a millennium old, because it's not actually possible for a baseline human to learn and consolidate all the necessary skills to be reliably good at the job in a single human lifetime.

Edited title to make it more clear this is a link post and that I don’t necessarily endorse the title.

(For reference, I understood that and my cutting language wasn't targeted at you.)