FYI, I personally dislike audio as a means of communicating information, so I probably won't be summarizing these for the Alignment Newsletter unless they have transcripts.
(This is not a request for transcripts -- I usually don't get that much out of podcasts like this, because I've usually already spent a bunch of time understanding the papers they're based on. Treat it more like an external constraint of the world, that the Alignment Newsletter happens to have a strong bias against audio- or video-only content. This is also not a guarantee that I will summarize it if it does have a transcript.)
Thanks for reaching out! Alex had passed the note about transcripts on to me; I hope to get to it (including the backlog of already released episodes) in the next few months.
Please comment if you use an obscure podcast app and I'll try to set it up for my feed. I'll have at least Spotify, Apple Podcasts, and Stitcher up shortly. EDIT: Spotify, Pocket Casts, Stitcher, and Apple Podcasts are live. Google Podcasts is simply time-gated at this point, unless something goes wrong.
Episode one should go up in a few weeks. We're doing two episodes on shielding in RL to get started.
Feedback form: https://forms.gle/4YFCJ83seNwsoLnH6
Request an episode: https://forms.gle/AA3J7SeDsmADLkgK9
Episode 0 script
The Technical AI Safety Podcast is supported by the Center for Enabling Effective Altruist Learning and Research, or CEEALAR. CEEALAR, known to some as the EA Hotel, is a nonprofit focused on alleviating bottlenecks to desk work in the effective altruist community. Learn more at ceealar.org.
Hello, and welcome to the Technical AI Safety Podcast. Episode 0: Announcement.
This is the announcement episode, briefly outlining who I am, what you can expect from me, and why I'm doing this.
First, a little about me. My name is Quinn Dougherty. I'm no one in particular: not a grad student, not a high-karma contributor on LessWrong, nor even really an independent researcher. I only began studying math and CS in 2016, and I haven't even been laser-focused on AI safety for most of the time since. However, I eventually came to think there's a reasonable chance AGI poses an existential threat to the flourishing of sentient life, and I think it's nearly guaranteed that it poses a global catastrophic threat to the flourishing of sentient life. I recently quit my job and decided to focus my efforts in this area. My favorite area of computer science is formal verification, but I think I'm literate enough in machine learning to get away with a project like this. We'll have to see; ultimately, you, the listeners, will be the judge of that.
Second, what can you expect from me? My plan is to read the Alignment Newsletter (produced by Rohin Shah) every week, cold-email authors of papers I think are interesting, and ask them to do interviews about their papers. I'm forecasting 1-2 episodes per month, and each interview will run 45-120 minutes. There's already a Google Form you can use to request episodes (just link me to a paper you're interested in), as well as a general feedback form. Just look in the show notes.
Finally, why am I doing this? You might ask: don't 80,000 Hours and the Future of Life Institute cover AI safety in their podcasts? My claim is: not exactly. While 80K and FLI produce a mean podcast, they're interdisciplinary. As I see it, theirs are podcasts where computer scientists come together with policy wonks and philosophers. As far as I know, there's a gap in the podcast market: there isn't yet a podcast just for computer scientists on the topic of AI safety. This is the gap I'm hoping to fill. So with me, you can expect jargon, and you can expect a modest barrier to entry, so that we can go on deep dives into the papers we cover. We will not be discussing the broader context of why AI safety is important. We will not cover the distinction between existential and catastrophic threats, and we will not look at policy or philosophy; if that's what you want, you can find plenty of it elsewhere. But we will, only on occasion, make explicit the potential for the results we cover to solve a piece of the AI safety puzzle.