tlevin

(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil's resources to improve the governance of AI, with a focus on avoiding catastrophic outcomes. Formerly: co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond; co-president of Harvard EA; Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment; and occasional AI governance researcher.

Not to be confused with the user formerly known as trevor1.

Comments

tlevin10

Sequence thinking can totally generate that, but it also seems prone to this kind of stylized, simple model, where you wind up with too few arrows in your causal graph and then inaccurately conclude that some parts are necessary and others aren't helpful.

tlevin313

The biggest disagreement between the average worldview of people I met with at EAG and my own is something like "cluster thinking vs. sequence thinking," where people at EAG were often like, "but even if we get this specific policy/technical win, doesn't it not matter unless you also have this other, harder thing?" and I was often more like, "Well, very possibly we won't get that other, harder thing, but it still seems really useful to get that specific policy/technical win -- here's a story where we totally fail on the harder thing and the specific win turns out to matter a ton!"

tlevin30

Agreed, I think people should apply a pretty strong penalty when evaluating a potential donation that creates or worsens these dynamics. There are still some donation opportunities that have the "major donors won't [fully] fund it" and "I'm advantaged to evaluate it as an AIS professional" features without the "I'm personal friends with the recipient" weirdness, though -- e.g., alignment approaches or policy research/advocacy directions you find promising that Open Phil isn't currently funding and that would be executed thousands of miles away.

tlevin52

Depends on the direction/magnitude of the shift!

I'm currently feeling very uncertain about the relative costs and benefits of centralization in general. I used to be more into the idea of a national project that centralized domestic projects and thus reduced domestic racing dynamics (and arguably better aligned incentives), but now I'm nervous about the secrecy it would likely entail, and I think it's less clear that a non-centralized situation inevitably leads to a decisive strategic advantage for the leading project. Which is to say: even under pretty optimistic assumptions about how much such a project invests in alignment, security, and benefit-sharing, I'm pretty uncertain that this would be good, and with more realistic assumptions I probably lean towards it being bad. But it super depends on the governance, the wider context, how a "Manhattan Project" would affect domestic companies and China's policymaking, etc.

(I think a great start would be not naming it after the Manhattan Project, though. These things seem path-dependent, and that's not a great first step.)

tlevin32

It's not super clear whether, from a racing perspective, having an equal number of nukes is bad. I think it's genuinely messy (and depends quite sensitively on how much actors are scared of losing vs. happy about winning vs. scared of racing).


Importantly, though, once you have several thousand nukes, the strategic returns to more nukes drop pretty close to zero, regardless of how many your opponents have; whereas if you get the scary model's weights and then don't use them to push capabilities even further, your opponent maybe gets a huge strategic advantage over you. I think this is probably true, but the important thing is whether the actors think it might be true.

In general, I think it's very hard to predict whether people will overestimate or underestimate things. I agree that literally right now countries are probably underestimating it, but an overreaction in the future also wouldn't surprise me very much (in the same way that COVID started with an underreaction and was then followed by a massive overreaction).

Yeah, good point.

tlevin30

Yeah, doing it again, it works fine, but it was just creating a long list of empty bullet points. (I also have this issue in GDocs sometimes.)

tlevin32

Gotcha. A few disanalogies, though -- the first two specifically relate to the model theft/shared access point; the third is true even if you had verifiable API access:

  1. Me verifying how many nukes you have doesn't mean I suddenly have that many nukes, unlike AI, where getting the model gives me its capabilities (though, due to compute differences, it doesn't mean we suddenly have the same time-distance to superintelligence).
  2. Me having more nukes only weakly enables me to develop more nukes faster, unlike AI, which can automate a lot of AI R&D.
  3. This model seems to assume you have an imprecise but accurate estimate of how many nukes I have; but companies will probably be underestimating each other's proximity to superintelligence, for the same reason that they're underestimating their own proximity to superintelligence, until it's way more salient/obvious.
tlevin61

In general, we should be wary of this sort of ‘make things worse in order to make things better.’ You are making all conversations of all sizes worse in order to override people’s decisions.

Glad to be included in the roundup, but two issues here.

First, it's not about overriding people's decisions; it's a collective action problem. When the room is silent and there's a single group of 8, I don't actually have the option of a 2- or 3-person conversation; it doesn't exist! The music lowers the cost for people to split into smaller conversations, so the people who prefer those now have better choices, not worse ones.

Second, this is a Simpson's-paradox-style fallacy: you are indeed making all conversations more difficult, but in my model smaller conversations are much better, so by making conversations of all sizes slightly to severely worse while moving the population into smaller conversations, you're still improving the conversations on net.
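(To make the Simpson's-paradox structure concrete with purely made-up numbers: suppose small conversations are a 9/10 in silence and an 8/10 over music, while the one big conversation is a 5/10 in silence and a 4/10 over music. If silence keeps 80% of people in the big group and music moves 80% of them into small groups, the average goes from 0.2×9 + 0.8×5 = 5.8 to 0.8×8 + 0.2×4 = 7.2. Every conversation type got worse, but the overall experience got better.)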

tlevin10

Also, I'm not sure I'm getting the thing where verifying that your competitor has a potentially pivotal model reduces racing?

tlevin30

The "how do we know if this is the most powerful model" issue is one reason I'm excited by OpenMined, who I think are working on this among other features of external access tools
