Comments
Heramb

Everyone writing policy papers or doing technical work seems to be keeping generative AI at the back of their mind when framing their work or impact.

This narrow focus on gen AI could well be net-negative for us: we unknowingly or unintentionally ignore ripple effects of the gen AI boom in other fields (for example, robotics companies getting more funding leads to more capabilities, which in turn leads to new types of risks).

And guess who benefits if we do end up getting good evals and standards in place for gen AI? It seems to me that companies and investors are the clear winners: we have to go back to the drawing board and advocate for the same kind of measures for robotics or a different AI use case or type, all while the development and capability cycles keep maturing.

We seem to be in whack-a-mole territory now because the Overton window is shifting for investors.

Heramb

(Copying my quick take from the EA Forum)

I find the Biden chip export controls a step in the right direction, and they also made me update my world model toward compute governance being an impactful lever. However, I am concerned that our goals aren't aligned with theirs: US policymakers' incentive right now is to curb China's tech growth (and fun trade-war reasons), not to pause AI.

This optimization for different incentives is probably going to create some split between US policymakers and AI safety folks as time goes on.

It also makes China more likely to treat this as a tech race, setting up interesting competitive dynamics between the US and China that I don't see talked about enough.

Heramb

Great post! On institutional design, do you have any advice on making it less abstract and increasing the value of such a proposal?

I cannot shake the feeling that just about anyone can whip up a structure/design that considers a couple of the stakeholders. What would a design that actually moves the needle have, or be able to do?

Heramb

I agree with the concern about accidentally making it harder for x-risk regulations to be passed; that is probably also something to keep in mind for the part of the community that works on mitigating the misuse of AI.
Here are some concerns I have on this specific point, and I am curious what people think about them:

1. Policy Feasibility: Policymakers often operate on short-term electoral cycles, which inherently conflict with the long-term nature of x-risks. This temporal mismatch reduces the likelihood of substantial policy action. Therefore, advocacy strategies should focus on aligning x-risk mitigation with short-term political incentives. 

2. Incrementalism as Bayesian Updating: A step-by-step regulatory approach can serve as real-world Bayesian updating. Initial, simpler policies can act as 'experiments,' the outcomes of which can inform more complex policies. This iterative process increases the likelihood of effective long-term strategies. 

3. Balanced Multi-Tiered Regulatory Approach: Addressing immediate societal concerns and misuse (like deepfakes) seems necessary for any sweeping AI x-risk regulation, since those concerns are within the Overton window and on constituents' minds. In such a scenario, it would take significant political or social capital to pass something aimed only at x-risk and not at the other concerns.

By establishing regulatory frameworks that address more immediate concerns, based on multivariate utility functions, we can probably lay the groundwork for more complex regulations aimed at existential risks. This is also why I think x-risk policy advocates come off as radical, robotic, or "a bit out there": they are so focused on talking about x-risk that they forget the more immediate, short-term human concerns.

With x-risk regulation, there doesn't seem to be a silver bullet; these things will require intellectual rigour, pragmatic compromise, and iteration (and say hello to policy inertia).
 

Heramb

Very nice approach! I like the almost algorithmic flow. Other approaches I find important: a) talking for about 20 minutes each with 2-4 people who are working on the problem but are not too far along (so the conversation can have an informal tone); b) talking to 1-2 people who have no idea about it (this gives a bird's-eye view); c) going to a conference to see what kind of language people use and what the presentations at the cusp/current edge of development are up to; this also helps form connections (perhaps useful for the first two steps).

Heramb

Excellent work! I have also been pretty concerned about gaps in the global AI governance ecosystem, but a bit sceptical of how impactful focusing on developing countries would be. This essay reminds me that LMICs are still part of the ecosystem, and one hole can make the whole bucket leak.

 

Particularly love the bit on incentivizing checks and balances instead of forcing them on countries!