I recently concluded a fascinating interview with science communicator Rob Miles on the Futurati Podcast.

We cover:

  • His background in computer science, and how he became interested in the broader AI Safety debate.
  • The resources he recommends for newcomers to the field. The list includes Cambridge's AI Safety Fundamentals Course, AI Safety Support for those interested in an AI-safety career, and AI Safety Training, which is a compendium of different courses, conferences, etc.
  • How Yann LeCun is just super confused and confusing.
  • The basic reasons for believing that the rise of smarter-than-human systems probably won't end well for human beings.
  • Various reasons for thinking there are still plenty of capability gains to be had from scaling.
  • Why SGD may not be much like evolution (but that's still no reason to think we're out of the woods).
  • Why being uncertain about the future direction of a powerful new technology in no way means you're safe.
  • How the security mindset plays into and informs the conversation around AI Safety.
  • Why agency might emerge spontaneously from large training runs, and why that question might not end up mattering if people work as hard as they can to build agentic systems at the first opportunity anyway.
  • The most promising approaches to AI alignment, including Anthropic's Constitutional AI, mechanistic interpretability, and robustness. 

As always, if you find this helpful at all, like the episode and share it. We'd like to devote more time to these kinds of interviews, and "number go up" is the most encouraging metric we can get. 

1 comment

Thanks for sharing. Rob's explanation at the end of this talk, on why solving the alignment problem is the most interesting thing one can do right now, is spot on. But, to his point, not everybody will get it.