MIRI (soon), MATS (former), Palisade (sometimes)
background in philosophy
addiction to language, nicotine
maker of geometry-inspired musical tools
I agree with this in a world where people are being epistemically rigorous/honest with themselves about their timelines, and where there's a real consensus view on them. But I've observed that it's pretty rare for people to make decisions truly grounded in their timelines (or they ground them only nominally), and I think there's a lot of social signaling going on when (especially younger) people state their timelines.
I appreciate that more experienced people are willing to give advice within a particular frame ("if timelines were x", "if China did y", "if Anthropic did z", "If I went back to school", etc etc), even if they don't agree with the frame itself. I rely on more experienced people in my life to offer advice of this form ("I'm not sure I agree with your destination, but admit there's uncertainty, and love and respect you enough to advise you on your path").
Of course they should voice their disagreement with the frame (and I agree this should happen more for timelines in particular), but to gate direct counsel on urgent, object-level decisions behind the resolution of background disagreements is broadly unhelpful.
When someone says "My timelines are x, what should I do?", I actually hear three claims: (1) timelines are x, (2) I believe timelines are x, and (3) I am interested in behaving as though timelines are x.
Evaluation of the first claim is complicated, and other people do a better job of it than I do, so let's focus on the others.
"I believe timelines are x" is a pretty easy roll to disbelieve. Under relatively rigorous questioning, nearly everyone (particularly everyone 'career-advice-seeking age') will either say they are deferring (meaning they could just as easily defer to someone else tomorrow), or admit that it's a gut feel, especially for their ~90 percent year, and especially for more and more capable systems (this is more true of ASI than weak AGI, for instance, although those terms are underspecified). Still others will furnish 0 reasoning transparency and thus reveal their motivations to be principally social (possibly a problem unique to the bay, although online e/acc culture has a similar Thing).
"I am interested in behaving as though timelines are x" is an even easier roll to disbelieve. Very few people act on their convictions in sweeping, life-changing ways without concomitant benefits (money, status, power, community), including people within AIS (sorry friends).
With these uncertainties, piled on top of the usual uncertainties surrounding timelines, I'm not sure I'd want anyone to act so nobly as to refuse advice to someone with different timelines.
If Alice is a senior AIS professional who gives advice to undergrads at parties in Berkeley (bless her!), how would her behavior change under your recommendation? It sounds like maybe she would stop fostering a diverse garden of AIS saplings and instead become the awful meme of someone who just wants to fight about a highly speculative topic. Seems like a significant value loss.
Their timelines will change some other day; everyone's will. In the meantime, being equipped to talk to people with a wide range of safety-concerned views (especially for more senior, or just Older, people) seems useful.
"harder to converge"
Converge for what purpose? It feels like the marketplace of ideas is doing an ok job of fostering a broad portfolio of perspectives. If anything, we are too convergent and, as a consequence, somewhat myopic internally. Leopold mind-wormed a bunch of people until Tegmark spoke up (and even that only somewhat helped). Few thought governance was a good idea until pretty recently (~3 years ago), and it would be going better now if those interested in that angle hadn't been shouted down so emphatically to begin with.
If individual actors need to cross some confidence threshold in order to act, but the reasonable confidence interval is in fact very wide, I'd rather have a bunch of actors with different timelines, which roughly sum to the shape of the reasonable thing*, than have everyone working on the same overconfident assumption that later comes back to bite us (when we've made mistakes in the past, this is often why).
*Which is, by the way, closer to flat than most people's individual timelines
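(A toy illustration of that footnote, with assumptions that are entirely mine rather than anything from the thread: if each actor holds a confident, narrow timeline distribution but the actors disagree on the center, the equal-weight mixture of those distributions is much flatter than any individual's. The specific years and widths below are made up for the sketch.)

```python
# Toy sketch (my assumptions, not the author's data): a "portfolio" of
# overconfident individual timelines vs. the flatter aggregate they imply.
import numpy as np

years = np.linspace(2025, 2060, 500)

def gaussian(x, mu, sigma):
    """Normal density; stands in for one person's timeline over arrival year."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical forecasters: each confident (sigma = 2 years) but centered differently.
individuals = [gaussian(years, mu, sigma=2.0) for mu in (2027, 2031, 2036, 2045)]

# The community "portfolio" is the equal-weight mixture of those distributions.
mixture = np.mean(individuals, axis=0)

# Peak density is a rough proxy for confidence: the mixture is far flatter.
print(f"peak of a typical individual: {max(d.max() for d in individuals):.3f}")  # ~0.20
print(f"peak of the mixture:          {mixture.max():.3f}")                      # ~0.06
```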
I don't think I really understood what it meant for establishment politics to be divisive until this past election.
As good as it feels to sit on the left and say "they want you to hate immigrants" or "they want you to hate queer people", it seems similarly (although probably not equally?) true that the center left also has people they want you to hate (the religious, the rich, the slightly-more-successful-than-you, the ideologically-impure-who-once-said-a-bad-thing-on-the-internet).
But there's also a deeper, structural sense in which it's true.
Working on AIS, I've long hoped that we could form a coalition with all of the other people worried about AI, because a good many of them just... share (some version of) our concerns, and our most ambitious policy solutions (e.g. stopping development, mandating more robust interpretability and evals) could also solve a bunch of problems highlighted by the FATE community, the automation-concerned, etc etc.
Their positions also have the benefit of conforming to widely-held anxieties ('I am worried AI will just be another tool of empire', 'I am worried I will lose my job for banal normie reasons that have nothing to do with civilizational robustness', 'I am worried AIs will cheaply replace human labor and do a worse job, enshittifying everything in the developed world'). We could generally curry popular support and favor, without being dishonest, by looking at the Venn diagram of things we want and things they want (which would also help keep AI policy from sliding into partisanship, if such a thing is still possible, given the largely right-leaning associations of the AIS community*).
For the next four years, at the very least, I am forced to lay this hope aside. That the EO contained language in service of the FATE community was, in hindsight, very bad, and probably foreseeably so, given that even moderate Republicans like to score easy points on culture war bullshit. Probably it will be revoked, because language about bias made it an easy thing for Vance to call "far left".
"This is ok because it will just be replaced."
Given the current state of the game board, I don't want to be losing any turns. We've already lost too many turns; setbacks are unacceptable.
"What if it gets replaced by something better?"
I envy your optimism. I'm also concerned about the same dynamic playing out in reverse; what if the new EO (or piece of legislation via whatever mechanism), like the old EO, contains some language that is (to us) beside the point, but nonetheless signals partisanship, and is retributively revoked or repealed by the next administration? This is why you don't want AIS to be partisan; partisanship is dialectics without teleology.
Ok, so structurally divisive: establishment politics has made it ~impossible to form meaningful coalitions around issues other than absolute lightning rods (e.g. abortion, immigration; the 'levers' available to partisan hacks looking to gin up donations). It's not just that they make you hate your neighbors, it's that they make you behave as though you hate your neighbors, lest your policy proposals get painted with the broad red brush and summarily dismissed.
I think this is the kind of observation that leads many experienced people interested in AIS to work on things outside of AIS, but with an eye toward implications for AI (e.g. Critch, A Ray). You just have these lucid flashes of how stacked the deck really is, and set about digging the channel that is, compared to the existing channels, marginally more robust to reactionary dynamics ('aligning the current of history with your aims' is maybe a good image).
Hopefully undemocratic regulatory processes serve their function as a backdoor for the sensible, but it's unclear how penetrating the partisanship will be over the next four years (and, of course, those at the top are promising that it will be Very Penetrating).
*I am somewhat ambivalent about how right-leaning AIS really is. Right-leaning compared to middle class Americans living in major metros? Probably. Tolerant of people with pretty far-right views? Sure, to a point. Right of the American center as defined in electoral politics (e.g. 'Republican-voting')? Usually not.
I think the key missing piece you’re pointing at (making sure that our interpretability tools etc. actually tell us something alignment-relevant) is one of the big things going on in model organisms of misalignment (iirc there’s a step that’s like ‘ok, but if we do interpretability/control/etc. on the model organism, does that help?’). Ideally this type of work, or something close to it, could become more common // provide ‘evals for our evals’ // expand in scope and application beyond deep deception.
If that happened, it seems like it would fit the bill here.
Does that seem true to you?
I like this post, but I think Redwood has varied some on whether control is for getting alignment work out of AIs vs. getting generally good-for-humanity work out of them and then pushing for a pause once they reach some usefulness/danger threshold (e.g. well before superintelligence).
[based on my recollection of Buck seminar in MATS 6]
Makes sense. Pretty sure you can remove it (and would appreciate that).
Many MATS scholars go to Anthropic (source: I work there).
Redwood I’m really not sure, but that could be right.
Sam now works at Anthropic.
Palisade: I’ve done some work for them, I love them, and I don’t know that their projects so far inhibit Anthropic (BadLlama, which I’m decently confident was part of the cause for funding them, was pretty squarely targeted at Meta, and is their most impactful work to date by several OOMs). In fact, the softer versions of Palisade’s proposal (highlighting misuse risk, their core mission) likely empower Anthropic, as it is seemingly the most transparent lab re misuse risks.
I take the thrust of your comment to be “OP funds safety, do your research”. I work in safety; I know they fund safety.
I also know most safety projects differentially benefit Anthropic (this fact is independent of whether you think differentially benefiting Anthropic is good or bad).
If you can make a stronger case for any of the dozens of other orgs on your list than exists for the few above, I’d love to hear it. I’ve thought about most of them and don’t see it; hence the question.
Further: the goalpost is not ‘net positive with respect to TAI x-risk.’ It is ‘not plausibly a component of a meta-strategy targeting the development of TAI at Anthropic before other labs.’
Edit: use of the soldier mindset flag above is pretty uncharitable here; I am asking for counter-examples to a hypothesis I’m entertaining. This is the actual opposite of soldier mindset.
updated, thanks!
The CCRU is under-discussed in this sphere as a direct influence on the thoughts and actions of key players in AI and beyond.
Land helped lead the collective, alongside Mark Fisher, in the 90s (it was founded at Warwick by Sadie Plant). I learned this by accident, and it seems like a corner of intellectual history that’s at least as influential as, e.g., the extropians.
If anyone knows of explicit connections between the CCRU and contemporary phenomena (beyond Land/Fisher’s immediate influence via their later work), I’d love to hear about them.
I (and maybe you) have historically underrated the density of people with religious backgrounds in secular hubs. Most of these people don't 'think differently', in a structural sense, from their forebears; they just don't believe in that God anymore.
The hallmark here is a kind of naive enlightenment approach that ignores ~200 years of intellectual history (and a great many thinkers from before that period, including canonical philosophers they might claim to love/respect/understand). This type of thing.
They're no less tribal or dogmatic, and no more critical, than the place they came from. They just vote the other way and can maybe talk about one or two levels of abstraction beyond the stereotype they identify against (although they can't really think at those levels).
You should still be nice to them, and honest with them, but you should understand what you're getting into.
The mere biographical detail of having a religious background or being religious isn't a strong mark against someone's thinking on other topics, but it is a sign you may be talking to a member of a certain meta-intellectual culture, and need to modulate your style. I have definitely had valuable conversations with people that firmly belong in this category, and would not categorically discourage engagement. Just don't be so surprised when the usual jutsu falls flat!