I think I am too much inside the DC policy world to understand why this is seen as a gaffe, really. Can you unpack why it's seen as a gaffe to them? In the DC world, by contrast, "yes, of course, this is a major national security threat, and no you of course could never use military capabilities to address it," would be a gaffe.
I mean, you saw people make fun of it when Eliezer said it, and then my guess is people conservatively assumed that this would generalize to the future. I've had conversations with people where they tried to convince me that Eliezer mentioning kinetic escalation was one of the worst things that anyone has ever done for AI policy, and they kept pointing to Twitter threads and conversations where opponents made fun of it as evidence. I think there clearly was something real there, but I also think people really fail to understand the communication dynamics at play.
I particularly appreciated that it explicitly covers conventional ballistic escalation as part of a sabotage strategy for datacenters
One thing I find very confusing about existing gaps between the AI policy community and the national security community is that natsec policymakers have already explicitly said that kinetic (i.e., blowing things up) responses are acceptable for cyberattacks under some circumstances, while the AI policy community seems to somehow unconsciously rule those sorts of responses out of the policy window. (To be clear: any day that American servicemembers go into combat is a bad day, I don't think we should choose such approaches lightly.)
I think a lot of this boils down to the fact that Sam Vimes is a copper, and sees poverty lead to precarity, and precarity lead to Bad Things Happening In Bad Neighborhoods. The most salient fact about Lady Sybil is that she never has to worry, never is on the rattling edge; she's always got more stuff, new stuff, old stuff, good stuff. Vimes (at that point in the Discworld series) isn't especially financially sophisticated, so he narrows it down to the piece he understands best, and builds a theory off of that.
You can definitely meet your own district's staff locally (e.g., if you're in Berkeley, Congresswoman Simon has an office in Oakland, Senator Padilla has an office in SF, and Senator Schiff's offices look not to be finalized yet but undoubtedly will include a Bay Area Office).
You can also meet most Congressional offices' staff via Zoom or phone (though some offices strongly prefer in-person meetings).
There is also indeed a meaningful rationalist presence in DC, though opinions vary as to whether the enclave is in Adams Morgan-Columbia Heights...
I think on net, there are relatively fewer risks related to getting governments more AGI-pilled vs. them continuing on their current course; governments are broadly AI-pilled even if not AGI/ASI-pilled and are doing most of the accelerating actions an AGI-accelerator would want.
The Trump administration (or, more specifically, the White House Office of Science and Technology Policy, which seems to be in the lead on most AI policy) is asking for comment on what its AI Action Plan should include. Literally anyone can comment on it. You should consider commenting; comments are due Saturday at 8:59pm PT/11:59pm ET via an email address. These comments will actually be read, and a large number of comments on an issue usually does influence any White House's policy. I encourage you to submit comment...
In the future, there should be some organization or some group of individuals in the LW community who raise awareness about these sorts of opportunities and offer content and support to ensure submissions from the most knowledgeable and relevant actors. This seems like very low-hanging fruit and is something several groups I know are doing.
I think there's at least one missing one, "You wake up one morning and find out that a private equity firm has bought up a company everyone knows the name of, fired 90% of the workers, and says they can replace them with AI."
This essay earns a read for the line, "It would be difficult to find a policymaker in DC who isn’t happy to share a heresy or two with you, a person they’ve just met" alone.
I would amplify to suggest that while many things are outside the Overton Window, policymakers are also aware of the concept of slowly moving the Overton Window, and if you explicitly admit you're doing that, they're usually on board (see, e.g., the conservative legal movement, the renewable energy movement, etc.). It's mostly when you don't realize that's what you're proposing that you trigger a dismissive response.
Ok, so it seems clear that we are, for better or worse, likely going to try to get AGI to do our alignment homework.
Who has thought through all the other homework we might give AGI that is as good an idea, assuming a model that isn't an instant game-over for us? E.g., I remember @Buck rattling off a list of other ideas he had in his talk at The Curve, but I feel like I haven't seen the list of, e.g., "here are all the ways I would like to run an automated counterintelligence sweep of my organization" ideas.
(Yes, obviously, if the AI is sne...
@ryan_greenblatt is working on a list of alignment research applications. For control applications, you might enjoy the long list of control techniques in our original post.
Huh? "fighting election misinformation" is not a sentence on this page as far as I can tell. And if you click through to the election page, you will see that the elections content is them praising a bipartisan bill backed by some of the biggest pro-Trump senators.
Without commenting on any strategic astronomy and neurology, it is worth noting that "bias", at least, is a major concern of the new administration (e.g., the Republican chair of the House Financial Services Committee is actually extremely worried about algorithmic bias being used for housing and financial discrimination and has given speeches about this).
I am not a fan, but it is worth noting that these are the issues that many politicians bring up already, if they're unfamiliar with the more catastrophic risks. The only one missing there is job loss. So while this choice by OpenAI sucks, it sort of usefully represents a social fact about the policy waters they swim in.
I am (sincerely!) glad that this is obvious to other people too and that they are talking about it already!
I mean, the literal best way to incentivize @Ricki Heicklen and me to do this again for LessOnline and Manifest 2025 is to create a prediction market on it, so I encourage you to do that
One point that maybe someone's made, but I haven't run across recently: if you want to turn AI development into a Manhattan Project, you will by default face some real delays from the reorganization of private efforts into one big national effort. In a close race, you might actually see pressures not to do so, because you don't want to give up 6 months to a year on reorg drama -- so in some possible worlds, the Project is actually a deceleration move in the short term, even if it accelerates in the long term!
Ooh, interesting, thank you!
Incidentally, spurred by @Mo Putera's posting of Vernor Vinge's A Fire Upon The Deep annotations, I want to remind folks that Vinge's Rainbows End is very good and doesn't get enough attention, and will give you a less-incorrect understanding of how national security people think.
Oh, fair enough then, I trust your visibility into this. Nonetheless one Can Just Report Bugs
Note for posterity that there has been at least $15K of donations since this got turned back on -- You Can Just Report Bugs
Ok, but you should leave the donation box up -- link now seems to not work? I bet there would be at least several $K USD of donations from folks who didn't remember to do it in time.
I think you're missing at least one strategy here. If we can get folks to agree that different societies can choose different combos, so long as they don't infringe on some subset of rights to protect other societies, then you could have different societies expand out into various pieces of the future in different ways. (Yes, I understand that's a big if, but it reduces the urgency/crux nature of value agreement).
Note that the production function of the 10x really matters. If it's "yeah, we get to net-10x if we have all our staff working alongside it," it's much more detectable than, "well, if we only let like 5 carefully-vetted staff in a SCIF know about it, we only get to 8.5x speedup".
(It's hard to prove that the results are from the speedup instead of just, like, "One day, Dario woke up from a dream with The Next Architecture in his head")
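To make the production-function point a bit more concrete, here is a toy sketch. The functional form and every number in it are my own assumptions, purely for illustration: it just models org-wide speedup as a saturating function of how many staff are allowed to work alongside the internal model.

```python
# Toy model (all assumptions mine, not from the original comment): org-wide research
# speedup rises from roughly the AI-mostly-on-its-own level toward a ceiling as more
# staff are allowed to work alongside the internal model.
import math

def org_speedup(staff_with_access: int,
                ai_only_speedup: float = 8.0,    # assumed gain with minimal human involvement
                max_speedup: float = 10.0,       # assumed ceiling with the whole org looped in
                saturation_scale: float = 30.0) -> float:
    """Speedup climbs toward max_speedup as access widens, with diminishing returns."""
    gap = max_speedup - ai_only_speedup
    return max_speedup - gap * math.exp(-staff_with_access / saturation_scale)

print(round(org_speedup(5), 2))    # ~8.31x: a handful of vetted staff in a SCIF; hard to detect
print(round(org_speedup(500), 2))  # ~10.0x: everyone works alongside it; very visible
```

Under a shape like this, the last couple of x's of speedup are exactly the part that requires broad internal access, which is also the part that is hardest to hide.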
Basic clarifying question: does this imply, under the hood, some sort of diminishing-returns curve, such that the lab pays for that labor until it reaches a net 10x faster improvement, but can't squeeze out much more?
And do you expect that's a roughly consistent multiplicative factor, independent of lab size? (I mean, I'm not sure lab size actually matters that much, to be fair, it seems that Anthropic keeps pace with OpenAI despite being smaller-ish)
For the record: signed up for a monthly donation starting in Jan 2025. It's smaller than I'd like given some financial conservatism until I fill out my taxes, may revisit it later.
Everyone who's telling you there aren't spoilers in here is well-meaning, but wrong. But to justify why I'm saying that is also spoilery, so to some degree you have to take this on faith.
(Rot13'd for those curious about my justification: Bar bs gur znwbe cbvagf bs gur jubyr svp vf gung crbcyr pna, vs fhssvpvragyl zbgvingrq, vasre sne zber sebz n srj vfbyngrq ovgf bs vasbezngvba guna lbh jbhyq anviryl cerqvpg. Vs lbh ner gryyvat Ryv gung gurfr ner abg fcbvyref V cbyvgryl fhttrfg gung V cerqvpg Nfzbqvn naq Xbein naq Pnevffn jbhyq fnl lbh ner jebat.)
Opportunities that I'm pretty sure are good moves for Anthropic generally:
FWIW re: the Dario 2025 comment, Anthropic very recently posted a few job openings for recruiters focused on policy and comms specifically, which I assume is a leading indicator for hiring. One plausible rationale there is that someone on the executive team smashed the "we need more people working on this, make it happen" button.
In an ideal world (perhaps not reasonable given your scale), you would have some sort of permissions and logging around sensitive types of queries on DM metadata. (E.g., perhaps you would let any Lighthaven team member see the aggregate "rate of DMs from accounts <1 month in age compared to historic baseline" number on the dashboard, but "how many DMs has Bob (an account over 90 days old) sent to Alice" would require more guardrails.)
Edit: to be clear, I am comfortable with you doing this without such logging at your current scale and think it is reasonable to do so.
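For what it's worth, here's a minimal sketch of what I mean by permissions plus logging. Every name and threshold in it is hypothetical rather than anything I know about the actual LessWrong/Lighthaven stack:

```python
# Hypothetical sketch: gate per-user DM-metadata queries behind a second approver,
# and audit-log every query. None of these names reflect the real codebase.
import datetime
import logging
from typing import Optional

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("dm_metadata_audit")

AGGREGATE_QUERIES = {"new_account_dm_rate_vs_baseline"}  # dashboard-safe aggregates
SENSITIVE_QUERIES = {"dm_count_between_two_users"}       # per-user; needs extra guardrails

def run_metadata_query(query_name: str, requested_by: str,
                       approved_by: Optional[str] = None) -> None:
    """Refuse sensitive queries without a second approver; log everything either way."""
    if query_name in SENSITIVE_QUERIES and approved_by is None:
        audit_log.warning("DENIED %s requested by %s (no approver)", query_name, requested_by)
        raise PermissionError("Per-user DM metadata queries require a second approver.")
    audit_log.info("RAN %s requested by %s approver=%s at %s",
                   query_name, requested_by, approved_by,
                   datetime.datetime.now(datetime.timezone.utc).isoformat())
    # ...the actual query against the metadata store would go here...
```

The point is just that aggregate dashboard numbers stay cheap to look at, while per-user lookups leave a trail and require a second set of eyes.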
I have a few weeks off coming up shortly, and I'm planning on spending some of it monkeying around with AI and code stuff. I can think of two obvious tacks: 1. Go do some fundamentals learning on technical stuff I don't have hands-on technical experience with or 2. go build on new fun stuff.
Does anyone have particular lists of learning topics / syllabi / similar resources that would be a good fit for a "fairly familiar with the broad policy/technical space, but his largest shipped chunk of code is a few hundred lines of Python" person like me?
Note also that this work isn't just papers; e.g., as a matter of public record MIRI has submitted formal comments to regulators to inform draft regulation based on this work.
(For those less familiar, yes, such comments are indeed actually weirdly impactful in the American regulatory system).
In a hypothetical, bad future where we have to do VaccinateCA 2.0 against e.g. bird flu, I personally wonder if "aggressively help people source air filters" would be a pre-vaccine-distribution-time step we would consider. (Not canon! Might be very wrong! Just idle musing)
I am excited for this on the grounds of "we deserve to have nice things," though for boring financial planning reasons I am not sure whether I will donate additional funds prior to calendar year end or in calendar year 2025.
(Note that I made a similar statement in the past and then donated $100 to Lighthaven very shortly thereafter, so, like, don't attempt to reverse-engineer my financial status from this or whatever.)
Also, I would generally volunteer to help with selling Lighthaven as an event venue to boring consultant things that will give you piles of money, and IIRC Patrick Ward is interested in this as well, so please let us know how we can help.
I think I'm also learning that people are way more interested in this detail than I expected!
I debated changing it to "203X" when posting to avoid this becoming the focus of the discussion but figured, "eh, keep it as I actually wrote it in the workshop" for good epistemic hygiene.
Oh, it very possibly is the wrongest part of the piece! I put it in the original workshop draft as I was running out of time and wanted to provoke debate.
A brief gesture at a sketch of the intuition: imagine a different, crueler world, where there were orders of magnitude more nation-states, but at the start only a few nuclear powers, like in our world, with a 1950s-level tech base. If the few nuclear powers want to keep control, they'll have to divert huge chunks of their breeder reactors' output to pre-emptively nuking any site in the m...
Interesting! You should definitely think more about this and write it up sometime, either you'll change your mind about timelines till superintelligence or you'll have found an interesting novel argument that may change other people's minds (such as mine).
As you know, I have huge respect for USG natsec folks. But there are (at least!) two flavors of them: 1) the cautious, measure-twice-cut-once sort that have carefully managed deterrence for decades, and 2) the "fuck you, I'm doing Iran-Contra" folks. Which do you expect will end up in control of such a program? It's not immediately clear to me which ones would.
I think this is a (c) leaning (b), especially given that we're doing it in public. Remember, the Manhattan Project was a highly-classified effort and we know it by an innocuous name given to it to avoid attention.
Saying publicly, "yo, China, we view this as an all-costs priority, hbu" is a great way to trigger a race with China...
But if it turned out that we knew from ironclad intel with perfect sourcing that China was already racing (I don't expect this to be the case), then I would lean back more towards (c).
Thanks, looking forward to it! Please do let us folks who worked on A Narrow Path (especially me, @Tolga , and @Andrea_Miotti ) know if we can be helpful in bouncing around ideas as you work on the treaty proposal!
Is there a longer-form version with draft treaty language (even an outline)? I'd be curious to read it.
I think people opposing this have a belief that the counterfactual is "USG doesn't have LLMs" instead of "USG spins up its own LLM development effort using the NSA's no-doubt-substantial GPU clusters".
Needless to say, I think the latter is far more likely.
I think the thing that you're not considering is that when tunnels are more prevalent and more densely packed, the incentives to use the defensive strategy of "dig a tunnel, then set off a very big bomb in it that collapses many tunnels" gets far higher. It wouldn't always be infantry combat, it would often be a subterranean equivalent of indirect fires.
Ok, so Anthropic's new policy post (explicitly NOT linkposting it properly since I assume @Zac Hatfield-Dodds or @Evan Hubinger or someone else from Anthropic will, and figure the main convo should happen there, and don't want to incentivize fragmenting of conversation) seems to have a very obvious implication.
Unrelated, I just slammed a big AGI-by-2028 order on Manifold Markets.
Yup. The fact that the profession that writes the news sees "I should resign in protest" as their own responsibility in this circumstance really reveals something.
At LessOnline, there was a big discussion one night around the picnic tables with @Eliezer_Yudkowsky , @habryka , and some interlocutors from the frontier labs (you'll momentarily see why I'm being vague on the latter names).
One question was: "does DC actually listen to whistleblowers?" and I contributed that, in fact, DC does indeed have a script for this, and resigning in protest is a key part of it, especially ever since the Nixon years.
Here is a usefully publicly-shareable anecdote on how strongly this norm is embedded in national security decisi...
Also of relevance is the wave of resignations from the DC newspaper The Washington Post over the past few days over Jeff Bezos suddenly exerting control.
Does "highest status" here mean highest expertise in a domain generally agreed by people in that domain, and/or education level, and/or privileged schools, and/or from more economically powerful countries etc?
I mean, functionally all of those things. (Well, minus the country dynamic. Everyone at this event I talked to was US, UK, or Canadian, which is all sorta one team for purposes of status dynamics at that event)
I was being intentionally broad, here. I am probably less interested for purposes of this particular post only in the question of "who controls the future" swerves and more about "what else would interested, agentic actors do" questions.
It is not at all clear to me that OpenPhil is the only org who feels this way -- I can think of several non-EA-ish charities that if they genuinely 100% believed "none of the people you care for will die of the evils you fight if you can just keep them alive for the next 90 days" would plausibly do some interestingly agentic stuff.
Oh, to be clear I'm not sure this is at all actually likely, but I was curious if anyone had explored the possibility conditional on it being likely
We're hiring at ControlAI for folks who want to work on UK and US policy advocacy. Come talk to Congress and Parliament and stop risks from unsafe superintelligences! controlai.com/careers
(Admins: I don't tend to see many folks posting this sort of thing here, so feel free to nuke this post if not the sort of content you're going for. But given audience here, figured might be of interest)