Chris Scammell

Thanks Akash for this substantive reply!

At the last minute, we ended up cutting a section at the end of the document called "how does this [referring to civics, communications, coordination] all add up to preventing extinction?" It was an attempt to address the thing you're pointing at here:

I think there's something tricky about a document that has the vibe "this is the most important issue in the world and pretty much everyone else is approaching it the wrong way" and then pivots to "and the right way to approach it is to post on Twitter and talk to your friends."

Sadly, we didn't feel we could get the point across well enough to make our timing cutoff for v1. Here's a quick attempt at the same answer (where higher context might make it easier to convey the point):

  • One way someone could be asking "does this all add up" is "are we going to survive?" And to that, our answer is mostly "hmm, that's not really a question we think about much. Whether we're going to make it or not is a question of fact, not opinion. We're just trying our best to work on what we think is optimal."
  • The other way someone could be asking "does this all add up" is "is this really a good plan?" That's a great question -- now we're talking strategy.
  • There are of course huge things that need to be done. A number of the authors support what's written in A Narrow Path, which offers very ambitious projects for AI policy. This is one good way to do strategy: start with the "full plan" and then use that as your map. If your plan can't keep us safe from superintelligence even in the ideal case where everything is implemented, then you need a new plan. (This is one of our many RSP concerns -- what is the full plan? Almost everything after "we detect the dangerous thing" still seems to require the level of intervention described in A Narrow Path.)
  • Communication, coordination, and civics straightforwardly don't add up to A Narrow Path's suggestions. However, they are bottlenecks. (Why we think this is communicated somewhat in the doc, but there's a lot more to say on it.) This is another good way to do strategy: look at things that are required in any good plan, and optimize for those. We don't see a winning world where AGI risks are not global common knowledge, with people aware and concerned and acting at a far larger scale than today.
  • Where A Narrow Path backprops from what's needed to actually stop superintelligence from being built, this doc presents more of "what can we do immediately, today." We try to tie the "what we can do immediately" to the bottlenecks that we think are needed in any plan.

And yes, it was written more with a low-context AIS person in mind; we try to redirect most of the high-context people towards "hey, reach out to us if you'd like to be more deeply involved." I think v2 should include more suggestions for bigger projects that people with more context can pursue. I would love to hear your (or others') views on good projects.

Also, great comment about the people who have done the most on communication so far. I really commend their efforts, and writing something about them is definitely something v2 can include.

On joining government organizations... I'm speaking just for myself on this one, as I think my coauthors have different views on governance than I do. Yes -- this is good and necessary. Two caveats:

  • Be willing to go against the grain. It seems the default path right now is for governments to support the same "reactive framework" that AGI companies are pushing. I'm worried about this, and I think we need people in government positions, and people advising them, who are much more frank about the risks and unwilling to go for "convenient" solutions that fit the Overton window. If the necessary safety regulations don't fit the current Overton window, then the Overton window has to change, not the regulation. Huge props to CAIS for SB1047 and whatever future bill efforts follow from them or others.
  • Be willing to help. Lots of people in government do care and simply don't know what's going on. Try to be helpful to them before assuming they're antagonistic to x-risk. I've met lots of government people who are very amenable to "hey, I'd be super happy to talk you through the technical details of AI and explain why some people are worried about x-risk." Non-threatening, non-asking-for-something approaches really work.

More to say later - to this and other comments in the thread. For now, taking the weekend to get a bit of rest :) 
