by yams

This is a special post for quick takes by yams. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
[-] yams

Please stop appealing to compute overhang. In a world where AI progress has wildly accelerated chip manufacture, this already-tenuous argument has become ~indefensible.

I tried to make a similar argument here, and I'm not sure it landed.  I think the argument has since demonstrated even more predictive validity with e.g. the various attempts to build and restart nuclear power plants, directly motivated by nearby datacenter buildouts, on top of the obvious effects on chip production.

[-] yams

I've just read this post and the comments. Thank you for writing that; some elements of the decomposition feel really good, and I don't know that they've been done elsewhere.

I think discourse around this is somewhat confused, because you actually have to do some calculation on the margin, and need a concrete proposal to do that with any confidence.

The straw-Pause rhetoric is something like "Just stop until safety catches up!" The overhang argument is usually deployed (as it is in those comments) to the effect of 'there is no stopping.' And yeah, in this calculation, there are in fact marginal negative externalities to the implementation of some subset of actions one might call a pause. The straw-Pause advocate really doesn't want to look at that, because it's messy to entertain counter-evidence to your position, especially if you don't have a concrete enough proposal on the table to assign weights in the right places.

Because it's so successful against straw-Pausers, the anti-pause people bring in the overhang argument like an absolute knockdown, when it's actually just a footnote to double check the numbers and make sure your pause proposal avoids slipping into some arcane failure mode that 'arms' overhang scenarios. That it's received as a knockdown is reinforced by the gearsiness of actually having numbers (and most of these conversations about pauses are happening in the abstract, in the absence of, i.e., draft policy).

But... just because your interlocutor doesn't have the numbers at hand, doesn't mean you can't have a real conversation about the situations in which compute overhang takes on sufficient weight to upend the viability of a given pause proposal.

You said all of this much more elegantly here:

Arguments that overhangs are so bad that they outweigh the effects of pausing or slowing down are basically arguing that a second-order effect is more salient than the first-order effect.  This is sometimes true, but before you've screened this consideration off by examining the object-level, I think your prior should be against.

...which feels to me like the most important part. The burden is on folks introducing an argument from overhang risk to prove its relevance within a specific conversation, rather than just introducing the adversely-gearsy concept to justify safety-coded accelerationism and/or profiteering. Everyone's prior should be against actions Waluigi-ing, by default (while remaining alert to the possibility!).

To whom are you talking?

[-] yams

Folks using compute overhang to 4D chess their way into supporting actions that differentially benefit capabilities.

I'm often tempted to comment this in various threads, but it feels like a rabbit hole, it's not an easy one to convince someone of (because it's an argument they've accepted for years), and I've had relatively little success talking about this with people in person (there's some change I should make in how I'm talking about it, I think).

More broadly, I've started using quick takes to catalog random thoughts, because sometimes when I'm meeting someone for the first time, they have heard of me, and are mistaken about my beliefs, but would like to argue against their straw version. Having a public record I can point to of things I've thought feels useful for combatting this.

While I'm not a general fan of compute overhang, I do think that it's at least somewhat relevant in worlds where AI pauses are very close to when a system is able to automate at least the entire AI R&D process, if not the entire AI economy itself, and I do suspect realistic pauses imposed by governments will likely only come once a massive amount of people lose their jobs, which can create incentives to go to algorithmic progress, and even small algorithmic progress might immediately blow up the pause agreement crafted in the aftermath of many people losing their jobs.

[-] yams

I think it would be very helpful to me if you broke that sentence up a bit more. I took a stab at it but didn't get very far.

Sorry for my failure to parse!

Basically, my statement in short is that, conditional on an AI pause happening because of massive job losses from AI that is barely unable to take over the world, even small savings in compute via better algorithms (since algorithmic research isn't banned) would incentivize more algorithmic research, which then lowers the required compute enough to make the AI pause untenable, and the AI takes over the world.

[-] yams

So for this argument to be worth bringing up in some general context where a pause is discussed, the person arguing it should probably believe:

  1. We are far and away most likely to get a pause only as a response to unemployment.
  2. An AI that precipitates pause-inducing levels of unemployment is inches from automating AI R&D.
  3. The period between implementing the pause and massive algorithmic advancements is long enough that we're able to increase compute stock...
  4. ...but short enough that we're not able to make meaningful safety progress before algorithmic advancements make the pause ineffective (because, e.g., we regulated FLOPS and it now takes 100x fewer FLOPS to build the dangerous thing).

I think the conjunct probability of all these things is low, and I think their likelihood is sensitive to the terms of the pause agreement itself. I agree that the design of a pause should consider a broad range of possibilities, and try to maximize its own odds of attaining its ends (Keep Everyone Alive).

I'm also not sure how this goes better in the no-pause world? Unless this person also has really high odds on multipolar going well and expects some Savior AI trained and aligned in the same length of time as the effective window of the theoretical pause to intervene? But that's a rare position among people who care about safety ~at all; it's kind of a George Hotz take or something...

(I don't think we disagree; you did flag this as "...somewhat relevant in worlds where..." which is often code for "I really don't expect this to happen, but Someone Somewhere should hold this possibility in mind." Just want to make sure I'm actually following!)

I think 1 and 2 are actually pretty likely, but 3 and 4 are the ones I'm a lot less confident will actually happen.

A big reason for this is that I suspect people aren't reacting to AI progress partly because they assume it won't take their jobs, so it will likely take massive job losses to make a lot of people care about AI. And depending on how concentrated AI R&D is, there's a real possibility that AI will have fully automated AI R&D before massive job losses begin in a way that matters to regular people.

[-] yams

Cool! I think we're in agreement at a high level. Thanks for taking the extra time to make sure you were understood.

In more detail, though:

I think I disagree with 1 being all that likely; there are just other things I could see happening that would make a pause or stop politically popular (e.g. warning shots, An Inconvenient Truth AI Edition, etc.), likely not worth getting into here. I also think 'if we pause it will be for stupid reasons' is a very sad take.

I think I disagree with 2 being likely, as well; probably yes, a lot of the bottleneck on development is ~make-work that goes away when you get a drop-in replacement for remote workers, and also yes, AI coding is already an accelerant // effectively doing gradient descent on gradient descent (RLing the RL'd researcher to RL the RL...) is intelligence-explosion fuel. But I think there's a big gap between the capabilities you need for politically worrisome levels of unemployment, and the capabilities you need for an intelligence explosion, principally because >30 percent of human labor in developed nations could be automated with current tech if the economics align a bit (hiring 200+k/year ML engineers to replace your 30k/year call center employee is only just now starting to make sense economically). I think this has been true of current tech since ~GPT-4, and that we haven't seen a concomitant massive acceleration in capabilities on the frontier (things are continuing to move fast, and the proliferation is scary, but it's not an explosion).

I take "depending on how concentrated AI R&D is" to foreshadow that you'd reply to the above with something like: "This is about lab priorities; the labs with the most impressive models are the labs focusing the most on frontier model development, and they're unlikely to set their sights on comprehensive automation of shit jobs when they can instead double-down on frontier models and put some RL in the RL to RL the RL that's been RL'd by the..."

I think that's right about lab priorities. However, I expect the automation wave to mostly come from middle-men, consultancies, what have you, who take all of the leftover ML researchers not eaten up by the labs and go around automating things away individually (yes, maybe the frontier moves too fast for this to be right, because the labs just end up with a drop-in remote worker 'for free' as long as they keep advancing down the tech tree, but I don't quite think this is true, because human jobs are human-shaped, and buyers are going to want pretty rigorous role-specific guarantees from whoever's selling this service, even if they're basically unnecessary, and the one-size-fits-all solution is going to have fewer buyers than the thing marketed as 'bespoke').

In general, I don't like collapsing the various checkpoints between here and superintelligence; there are all these intermediate states, and their exact features matter a lot, and we really don't know what we're going to get. 'By the time we'll have x, we'll certainly have y' is not a form of prediction that anyone has a particularly good track record making.

I think I disagree with 1 being all that likely; there are just other things I could see happening that would make a pause or stop politically popular (e.g. warning shots, An Inconvenient Truth AI Edition, etc.), likely not worth getting into here. I also think 'if we pause it will be for stupid reasons' is a very sad take.

I generally don't think the movie An Inconvenient Truth mattered that much for solving climate change, compared to technological solutions like renewable energy, and it made the issue a little more partisan (though environmentalism/climate change was already unusually partisan by then). I also think social movements around AI have so far had less impact on AI safety (in the sense of reducing doom) than technical work broadly construed, and I expect this trend to continue.

I think warning shots could scare the public, but I worry that the level of warning shot necessary to do that falls in a fairly narrow band, and I also expect AI control to have a reasonable probability of containing human-level scheming models that do useful work, so I wouldn't bet on this at all.

I agree it's a sad take that "if we pause it will be for stupid reasons", but I also think this is the very likely attractor, if AI does become a subject that is salient in politics, because people hate nuance, and nuance matters way more than the average person wants to deal with on AI (For example, I think the second species argument critically misses important differences that make the human-AI relationship more friendly than the human-gorilla relationship, and that's without the subject being politicized).

To address this:

But I think there's a big gap between the capabilities you need for politically worrisome levels of unemployment, and the capabilities you need for an intelligence explosion, principally because >30 percent of human labor in developed nations could be automated with current tech if the economics align a bit (hiring 200+k/year ML engineers to replace your 30k/year call center employee is only just now starting to make sense economically). I think this has been true of current tech since ~GPT-4, and that we haven't seen a concomitant massive acceleration in capabilities on the frontier (things are continuing to move fast, and the proliferation is scary, but it's not an explosion).

I think the key crux is that I believe the unreliability of GPT-4 would doom any attempt to automate 30% of jobs; at most 0-1% of jobs could be automated with it. And while in principle you could improve reliability without improving capabilities too much, I also don't think the incentives yet favor this option.

In general, I don't like collapsing the various checkpoints between here and superintelligence; there are all these intermediate states, and their exact features matter a lot, and we really don't know what we're going to get. 'By the time we'll have x, we'll certainly have y' is not a form of prediction that anyone has a particularly good track record making.

I agree with this sort of argument, and in general I am not a fan of collapsing the checkpoints between today's AI and God AIs, which is a big mistake I think MIRI made. But my main claim is that the checkpoints will be illegible enough to the average citizen that they don't notice the progress until it's too late, and that reliability improvements will in practice be coupled with capabilities improvements that matter to the AI explosion but aren't very visible to the average citizen, for the reason Garrison Lovely describes here:

https://x.com/GarrisonLovely/status/1866945509975638493

There's a vibe that AI progress has stalled out in the last ~year, but I think it's more accurate to say that progress has become increasingly illegible. Since 6/23, perf. on PhD level science questions went from barely better than random guessing to matching domain experts. 🧵

(More in the link above)

I think I get what you're saying... That the argument you dislike is, "we should rush to AGI sooner, so that there's less compute overhang when we get there."

I agree that that argument is a pretty bad one. I personally think that we are already so far into a compute overhang regime that that ship has sailed. We are using very inefficient learning algorithms, and will be able to run millions of inference instances of any model we produce.

Does this correspond with what you are thinking?

[-] yams

I want to say yes, but I think this might be somewhat more narrow than I mean. It might be helpful if you could list a few other ways one might read my message, that seem similarly-plausible to this one.

Overhangs, overhangs everywhere. A thousand gleaming threads stretching backwards from the fog of the Future, forwards from the static Past, and ending in a single Gordian knot before us here and now. 

That knot: understanding, learning, being, thinking. The key, the source, the remaining barrier between us and the infinite, the unknowable, the singularity.

When will it break? What holds it steady? Each thread we examine seems so inadequate. Could this be what is holding us back, saving us from ourselves, from our Mind Children? Not this one, nor that, yet some strange mix of many compensating factors.

Surely, if we had more compute, we'd be there already? Or better data? The right algorithms? Faster hardware? Neuromorphic chips? Clever scaffolding? Training on a regress of chains of thought, to better solutions, to better chains of thought, to even better solutions?

All of these, and none of these. The web strains at the breaking point. How long now? Days? Months?

If we had enough ways to utilize inference-time compute, couldn't we just scale that to super-genius, and ask the genius for a more efficient solution? But it doesn't seem like that has been done. Has it been tried? Who can say.

Will the first AGI out the gate be so expensive it is unmaintainable for more than a few hours? Will it quickly find efficiency improvements?

Or will we again be bound, hung up on novel algorithmic insights hanging just out of sight. Who knows?

Surely though, surely.... surely rushing ahead into the danger cannot be the wisest course, the safest course? Can we not agree to take our time, to think through the puzzles that confront us, to enumerate possible consequences and proactively reduce risks?

I hope. I fear. I stare in awestruck wonder at our brilliance and stupidity so tightly intermingled. We place the barrel of the gun to our collective head, panting, desperate, asking ourselves if this is it. Will it be? Intelligence is dead, long live intelligence.

[-] robo

In a world where AI progress has wildly accelerated chip manufacture

 

This world?

[-] yams

Yes this world.

[-] yams

The CCRU is under-discussed in this sphere as a direct influence on the thoughts and actions of key players in AI and beyond.

Land started a creative collective, alongside Mark Fisher, in the 90s. I learned this by accident, and it seems like a corner of intellectual history that’s at least as influential as, e.g., the extropians.

If anyone knows of explicit connections between the CCRU and contemporary phenomena (beyond Land/Fisher’s immediate influence via their later work), I’d love to hear about them.

Yarvin was not part of the CCRU. I think Land and Yarvin only became associates post-CCRU.

[-] yams

updated, thanks!

[-] yams

Sometimes people give a short description of their work. Sometimes they give a long one.

I have an imaginary friend whose work I’m excited about. I recently overheard them introduce and motivate their work to a crowd of young safety researchers, and I took notes. Here’s my best reconstruction of what they’re up to:

"I work on median-case out-with-a-whimper scenarios and automation forecasting, with special attention to the possibility of mass-disempowerment due to wealth disparity and/or centralization of labor power. I identify existing legal and technological bottlenecks to this hypothetical automation wave, including a list of relevant laws in industries likely to be affected and a suite of evals designed to detect exactly which kinds of tasks are likely to be automated and when. 

"My guess is that there are economically valuable AI systems between us and AGI/TAI/ASI, and that executing on safety and alignment plans in the midst of a rapid automation wave is dizzyingly challenging. Thinking through those waves in advance feels like a natural extension of placing any weight at all on the structure of the organization that happens to develop the first Real Scary AI. If we think that the organizational structure and local incentives of a scaling lab matter, shouldn’t we also think that the societal conditions and broader class of incentives matter? Might they matter more? The state of the world just before The Thing comes on line, or as The Team that makes The Thing is considering making The Thing, has consequences for the nature of the socio-technical solutions that work in context.

“At minimum, my work aims to buy us some time and orienting-power as the stakes rise. I’d imagine my maximal impact is something like “develop automation timelines and rollout plans that you can peg AI development to, such that the state of the world and state-of-the-art AI technology advance in step, minimizing the collateral damage and chaos of any great economic shift.”

“When I’ve brought these concerns up to folks at labs, they’ve said that these matters get discussed internally, and that there’s at least some agreement that my direction is important, but that they can’t possibly be expected to do everything to make the world ready for their tech. I, perhaps somewhat cynically, think they’re doing narrow work here on the most economically valuable parts, but that they’re uninterested in broader coordination with public and private entities, since it would be economically disadvantageous to them.

"When I’ve brought these concerns up to folks in policy, they’ve said that some work like this is happening, but that it’s typically done in secret, to avoid amorphous negative externalities. Indeed, the more important this work is, the less likely someone is to publish it. There’s some concern that a robust and publicly available framework of this type could become a roadmap for scaling labs that helps them focus their efforts for near-term investor returns, possibly creating more fluid investment feedback loops and lowering the odds that disappointed investors back out, indirectly accelerating progress.

"Publicly available work on the topic is ~abysmal, painting the best case scenario as the most economically explosive one (most work of this type is written for investors and other powerful people), rather than pricing in the heightened x-risk embedded in this kind of destabilization. There's actually an IMF paper modeling automation from AI systems using math from the industrial revolution. Surely there's a better way here, and I hope to find it."

What texts analogizing LLMs to human brains have you found most compelling?

Shameless self-promotion: this one: https://www.lesswrong.com/posts/ASmcQYbhcyu5TuXz6/llms-could-be-as-conscious-as-human-emulations-potentially

It circumvents the object-level question and instead looks at the epistemic one.

This one is about the broader direction of "how the things that have happened change people's attitudes and opinions"

https://www.astralcodexten.com/p/sakana-strawberry-and-scary-ai

This one too, about consciousness in particular

https://dynomight.net/consciousness/

I think the direction explored in these 3 posts is somewhat productive, but it's not very object-level, more about the epistemics of it all. You can also look up how LLM states overlap with / predict / correspond to brain scans of people engaged in various tasks; I think there were a couple of papers on that.

E.g. here https://www.neuroai.science/p/brain-scores-dont-mean-what-we-think

[-] yams

Sometimes people express concern that AIs may replace them in the workplace. This is (mostly) silly. Not that it won't happen, but you've gotta break some eggs to make an industrial revolution. This is just 'how economies work' (whether or not they can / should work this way is a different question altogether). 

The intrinsic fear of joblessness-resulting-from-automation is tantamount to worrying that curing infectious diseases would put gravediggers out of business.

There is a special case here, though: double digit unemployment (and youth unemployment, in particular) is a major destabilizing force in politics. You definitely don't want an automation wave so rapid that the jobless and nihilistic youth mount a civil war, sharply curtailing your ability to govern the dangerous technologies which took everyone's jobs in the first place.

As AI systems become more expensive and more powerful, and pressure to deploy them profitably increases, I'm fairly concerned that we'll see a massive hollowing out of many white-collar professions, resulting in substantial civil unrest, violence, and chaos. I'm not confident that we'll get (e.g.) a UBI (or that it would meaningfully change the situation even if we did), and I'm not confident that there's enough inertia in existing economic structures to soften the blow.

The IMF estimates that current tech (~GPT-4 at launch) can automate ~30% of human labor performed in the US. That's a big, scary number. About half of these jobs, they imagine, are the kinds of things you always want more of anyway, so this complementarity will just drive production in that 15% of cases. The other 15%, though, probably just stop existing as jobs altogether (for various reasons, I think a 9:1 replacement rate is more likely than full automation, with current tech).

This mostly isn't happening yet because you need an ML engineer to commit Serious Time to automating away That Job In Particular. ML engineers are expensive, and usually not specced for the kind of client-facing work that this would require (i.e. breaking down tasks that are part of a job, knowing what parts can be automated, and via what mechanisms, be that purpose-built models, fine-tuning, a prompt library for a human operator, some specialized scaffolding...). There's just a lot of friction and lay-speak necessary to accomplish this, and it's probably not economically worth it for some subset of necessary parties (ML engineers can make more elsewhere than small business owners can afford to pay them to automate things away, for instance).

So we've got a bottleneck, and on the other side of it, this speculative 15% leap in unemployment. That 15% potential leap, though, is climbing as capabilities increase (this is tautologically true; 'drop-in replacement for a remote worker' is one major criterion used in discussions about AI progress).

I don't expect 15% unemployment to destabilize the government (Great Depression peak was 25%, which is a decent lower bound on 'potentially dangerous' levels of unemployment in the US). But I do expect that 15% powder keg to grow in size, and potentially cross into dangerous territory before it's lit.

Previously, I'd actually arrived at that 30% number myself (almost exactly one year ago), but I had initially expected:

  1. Labs would devote substantial resources to this automation, and it would happen more quickly than it has so far.
  2. All of these jobs were just on the chopping block (frankly, I'm not sure how much I buy the complementarity argument, but I am An Internet Crank, and they are the International Monetary Fund, so I'll defer to them).

These two beliefs made the situation look much more dire than I now believe it to be, but it's still, I claim, worth entertaining as A Way This Whole Thing Could Go, especially if we're hitting a capabilities plateau, and especially if we're doubling down on government intervention as our key lever in obviating x-risk.

[I'm not advocating for a centrally planned automation schema, to be clear; I think these things have basically never worked, but would like to hear counterexamples. Maybe just like... a tax on automation to help staunch the flow of resources into the labs and their surrogates, a restructuring of unemployment benefits and retraining programs and, before any of that, a more robust effort to model the economic consequences of current and future systems than the IMF report that just duplicates the findings of some idiot (me) frantically reviewing BLS statistics in the winter of 2023.]

[-] yams

I (and maybe you) have historically underrated the density of people with religious backgrounds in secular hubs. Most of these people don't 'think differently', in a structural sense, from their forebears; they just don't believe in that God anymore. 

The hallmark here is a kind of naive enlightenment approach that ignores ~200 years of intellectual history (and a great many thinkers from before that period, including canonical philosophers they might claim to love/respect/understand). This type of thing.

They're no less tribal or dogmatic, and no more critical, than the place they came from. They just vote the other way and can maybe talk about one or two levels of abstraction beyond the stereotype they identify against (although they can't really think about those levels).

You should still be nice to them, and honest with them, but you should understand what you're getting into.

The mere biographical detail of having a religious background or being religious isn't a strong mark against someone's thinking on other topics, but it is a sign you may be talking to a member of a certain meta-intellectual culture, and need to modulate your style. I have definitely had valuable conversations with people that firmly belong in this category, and would not categorically discourage engagement. Just don't be so surprised when the usual jutsu falls flat!

[-] yams

I don't think I really understood what it meant for establishment politics to be divisive until this past election.

As good as it feels to sit on the left and say "they want you to hate immigrants" or "they want you to hate queer people", it seems similarly (although probably not equally?) true that the center left also has people they want you to hate (the religious, the rich, the slightly-more-successful-than-you, the ideologically-impure-who-once-said-a-bad-thing-on-the-internet).

But there's also a deeper, structural sense in which it's true.

Working on AIS, I've long hoped that we could form a coalition with all of the other people worried about AI, because a good deal of them just... share (some version of) our concerns, and our most ambitious policy solutions (e.g. stopping development, mandating more robust interpretability and evals) could also solve a bunch of problems highlighted by the FATE community, the automation-concerned, etc.

Their positions also have the benefit of conforming to widely-held anxieties ('I am worried AI will just be another tool of empire', 'I am worried I will lose my job for banal normie reasons that have nothing to do with civilizational robustness', 'I am worried AIs will cheaply replace human labor and do a worse job, enshittifying everything in the developed world'). We could generally curry popular support and favor, without being dishonest, by looking at the Venn diagram of things we want and things they want (which would also help keep AI policy from sliding into partisanship, if such a thing is still possible, given the largely right-leaning associations of the AIS community*).

For the next four years, at the very least, I am forced to lay this hope aside. That the EO contained language in service of the FATE community was, in hindsight, very bad, and probably foreseeably so, given that even moderate Republicans like to score easy points on culture war bullshit. Probably it will be revoked, because language about bias made it an easy thing for Vance to call "far left".

"This is ok because it will just be replaced."

Given the current state of the game board, I don't want to be losing any turns. We've already lost too many turns; setbacks are unacceptable.

"What if it gets replaced by something better?"

I envy your optimism. I'm also concerned about the same dynamic playing out in reverse; what if the new EO (or piece of legislation via whatever mechanism), like the old EO, contains some language that is (to us) beside the point, but nonetheless signals partisanship, and is retributively revoked or repealed by the next administration? This is why you don't want AIS to be partisan; partisanship is dialectics without teleology.

Ok, so structurally divisive: establishment politics has made it ~impossible to form meaningful coalitions around issues other than absolute lightning rods (e.g. abortion, immigration; the 'levers' available to partisan hacks looking to gin up donations). It's not just that they make you hate your neighbors, it's that they make you behave as though you hate your neighbors, lest your policy proposals get painted with the broad red brush and summarily dismissed.

I think this is the kind of observation that leads many experienced people interested in AIS to work on things outside of AIS, but with an eye toward implications for AI (e.g. Critch, A Ray). You just have these lucid flashes of how stacked the deck really is, and set about digging the channel that is, compared to the existing channels, marginally more robust to reactionary dynamics ('aligning the current of history with your aims' is maybe a good image).

Hopefully undemocratic regulatory processes serve their function as a backdoor for the sensible, but it's unclear how penetrating the partisanship will be over the next four years (and, of course, those at the top are promising that it will be Very Penetrating).

*I am somewhat ambivalent about how right-leaning AIS really is. Right-leaning compared to middle class Americans living in major metros? Probably. Tolerant of people with pretty far-right views? Sure, to a point. Right of the American center as defined in electoral politics (e.g. 'Republican-voting')? Usually not.

[-] yams

Does anyone have examples of concrete actions taken by Open Phil that point toward their AIS plan being anything other than ‘help Anthropic win the race’?

Grants to Redwood Research, SERI MATS, the NYU alignment group under Sam Bowman for scalable supervision, Palisade Research, and many dozens more, most of which seem net positive wrt TAI risk.

[-] yams

Many MATS scholars go to Anthropic (source: I work there).

Redwood I’m really not sure, but that could be right.

Sam now works at Anthropic.

Palisade: I’ve done some work for them, I love them, I don’t know that their projects so far inhibit Anthropic (BadLlama, which I’m decently confident was part of the cause for funding them, was pretty squarely targeted at Meta, and is their most impactful work to date by several OOM). In fact, the softer versions of Palisade’s proposal (highlighting misuse risk, their core mission), likely empower Anthropic as seemingly the most transparent lab re misuse risks.

I take the thrust of your comment to be “OP funds safety, do your research”. I work in safety; I know they fund safety.

I also know most safety projects differentially benefit Anthropic (this fact is independent of whether you think differentially benefiting Anthropic is good or bad).

If you can make a stronger case for any of the other of the dozens of orgs on your list than exists for the few above, I’d love to hear it. I’ve thought about most of them and don’t see it, hence why I asked the question.

Further: the goalpost is not ‘net positive with respect to TAI x-risk.’ It is ‘not plausibly a component of a meta-strategy targeting the development of TAI at Anthropic before other labs.’

Edit: use of the soldier mindset flag above is pretty uncharitable here; I am asking for counter-examples to a hypothesis I’m entertaining. This is the actual opposite of soldier mindset.

Apologies for the soldier mindset react; I pattern-matched to some more hostile comment. Communication is hard.

[-] yams

Makes sense. Pretty sure you can remove it (and would appreciate that).