aog

Comments

aog40

Thanks for the heads up. I’ve edited the title and introduction to better indicate that this content might be interesting to someone even if they’re not looking for funding. 

aog20

Yeah I think that’d be reasonable too. You could talk about these clusters at many different levels of granularity, and there are tons I haven’t named. 

aog53

If we can put aside for a moment the question of whether Matthew Barnett has good takes, I think it's worth noting that this reaction reminds me of how outsiders sometimes feel about effective altruism or rationalism:

I guess I feel that his posts tend to be framed in a really strange way such that, even though there's often some really good research there, it's more likely to confuse the average reader than anything else and even if you can untangle the frames, I usually don't find worth it the time.

The root cause may be too much inferential distance: too many differences in basic worldview assumptions to easily have a productive conversation. The argument in any given post might rely on background assumptions that would take a long time to explain and debate, and it's very difficult to have a productive conversation with someone who doesn't share your basic worldview. That's one of the reasons LessWrong encourages users to read foundational material on rationalism before commenting or posting. It's also why scalable oversight researchers like having places to talk to each other about the best approaches to LLM-assisted reward generation without needing to justify each time whether that strategy is doomed from the start. And it's part of why I think it's useful to create scenes that operate on different worldview assumptions: it's worth working out the implications of specific beliefs without needing to justify those beliefs each time.

Of course, this doesn't mean that Matthew Barnett has good takes. Maybe you find his posts confusing not because of inferential distance, but because they're illogical and wrong. Personally I think they're good, and I wouldn't have written this post if I didn't. But I haven't actually argued that here, and I don't really want to—that's better done in the comments on his posts. 

aog*5822

Shoutout to Epoch for creating its own intellectual culture. 

Views on AGI seem suspiciously correlated to me, as if many people's views are determined more by diffusion through social networks and popular writing than by independent reasoning. This isn't unique to AGI. Most people are not capable of coming up with useful worldviews on their own. Often, the development of interesting, coherent, novel worldviews benefits from an intellectual scene.

What's an intellectual scene? It's not just an idea. Usually it has a set of complementary ideas, each of which makes more sense with the others in place. Often there’s a small number of key thinkers who come up with many new ideas, and a broader group of people who agree with the ideas, further develop them, and follow their implied call to action. Scenes benefit from shared physical and online spaces, though they can also exist in social networks without a central hub. Sometimes they professionalize, offering full-time opportunities to develop the ideas or act on them. Members of a scene are shielded from pressure to defer to others who do not share their background assumptions, and therefore feel freer to come up with new ideas that would seem unusual to outsiders but make sense within the scene's shared intellectual framework. These conditions seem to raise the likelihood of novel intellectual progress.

There are many examples of intellectual scenes within AI risk, at varying levels of granularity and cohesion. I've been impressed with Davidad recently for putting forth a set of complementary ideas around Safeguarded AI and FlexHEGs, and creating opportunities for people who agree with his ideas to work on them. Perhaps the most influential scenes within AI risk are the MIRI / LessWrong / Conjecture / Control AI / Pause AI cluster, united by high p(doom) and a focus on pausing or stopping AI development, and the Constellation / Redwood / METR / Anthropic cluster, focused on prosaic technical safety techniques and working with AI labs to make the best of the current default trajectory. (Though by saying these clusters have some shared ideas / influences / spaces, I don't mean to deny that most people within those clusters disagree on many important questions.) Rationalism and effective altruism are their own scenes, as are the conservative legal movement, social justice, new atheism, progress studies, neoreaction, and neoliberalism.

Epoch has its own scene, with a distinct set of thinkers, beliefs, and implied calls to action. Matthew Barnett has written the most about these ideas publicly, so I'd encourage you to read his posts on these topics, though my understanding is that many of these ideas were developed with Tamay, Ege, Jaime, and others. Key ideas include long timelines, slow takeoff, eventual explosive growth, optimism about alignment, concerns about overregulation, concerns about hawkishness towards China, arguments for the likelihood of AI sentience and the desirability of AI rights, debates about the desirability of different futures, and so on. These ideas motivate much of Epoch's work, as well as Mechanize's. Importantly, the people in this scene don't seem to mind much that many others (including me) disagree with them.

I'd like to see more intellectual scenes that seriously think about AGI and its implications. There are surely holes in our existing frameworks, and they can be hard for the people operating within those frameworks to spot. Creating new spaces with different sets of shared assumptions seems like it could help.

aog10

Curious what you think of arguments (1, 2) that AIs should be legally allowed to own property and participate in our economic system, thus giving misaligned AIs an alternative prosocial path to achieving their goals. 

aog20

How do we know it was 3x? (If true, I agree with your analysis) 

aog20

Do you take Grok 3 as an update on the importance of hardware scaling? If xAI used 5-10x more compute on Grok 3 than went into any other model (which seems likely but not certain), then the fact that it wasn’t discontinuously better than other models seems like evidence against the importance of hardware scaling.

aog*73

I’m surprised they list bias and disinformation. Maybe this is a galaxy-brained attempt to discredit AI safety by making it appear left-coded, but I doubt it. Seems more likely that x-risk-focused people left the company while traditional AI ethics people stuck around and rewrote the website.

aog80

I'm very happy to see Meta publish this. It's a meaningfully stronger commitment to avoiding deployment of dangerous capabilities than I expected them to make. Kudos to the people who pushed for companies to make these commitments and helped them do so.

One concern I have with the framework is that I think the "high" vs. "critical" risk thresholds may claim a distinction without a difference.

Deployments are high risk if they provide "significant uplift towards execution of a threat scenario (i.e. significantly enhances performance on key capabilities or tasks needed to produce a catastrophic outcome) but does not enable execution of any threat scenario that has been identified as potentially sufficient to produce a catastrophic outcome." They are critical risk if they "uniquely enable the execution of at least one of the threat scenarios that have been identified as potentially sufficient to produce a catastrophic outcome." The framework requires that threats be "net new," meaning "The outcome cannot currently be realized as described (i.e. at that scale / by that threat actor / for that cost) with existing tools and resources."

But what then is the difference between high risk and critical risk? Unless a threat scenario is currently impossible, any uplift towards achieving it more efficiently also "uniquely enables" it under a particular budget or set of constraints. For example, it is already possible for an attacker to create bio-weapons, as demonstrated by the anthrax attacks, so any cost reductions or time savings for any part of that process uniquely enable execution of that threat scenario within a given budget or timeframe. Thus it seems that no model can be classified as high risk if it provides uplift on an already-achievable threat scenario; instead, it must be classified as critical risk.
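To make the worry concrete, here's a toy sketch of my reading (my own construction, not Meta's procedure or wording): if a threat scenario is already achievable, any uplift can always be framed as "uniquely enabling" it at some lower cost or scale, so the "high" bucket is only ever reachable for scenarios that are currently impossible.

```python
# Illustrative only: the classification collapse I'm describing, under my
# reading of the "net new ... at that scale / by that actor / for that cost"
# language. Not an implementation of Meta's actual framework.

def classify(scenario_already_achievable: bool, provides_uplift: bool) -> str:
    """Classify a deployment under the reading described above."""
    if not provides_uplift:
        return "moderate or below"
    if scenario_already_achievable:
        # Any cost or time reduction "uniquely enables" the scenario under the
        # new, lower budget, so the deployment never stops at "high".
        return "critical"
    # Only currently-impossible scenarios can land in the "high" bucket.
    return "high"

print(classify(scenario_already_achievable=True, provides_uplift=True))   # critical
print(classify(scenario_already_achievable=False, provides_uplift=True))  # high
```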

Does that logic hold? Am I missing something in my reading of the document? 

aog52

Curious what you think of these arguments, which offer objections to the strategy stealing assumption in this setting, instead arguing that it's difficult for capital owners to maintain their share of capital ownership as the economy grows and technology changes. 
