Happy holidays! Some months ago, I launched a new podcast called The Filan Cabinet, but forgot to announce it on this blog. Today, I rectify that mistake.
In some ways, the podcast is similar to AXRP - the AI X-risk Research Podcast. On that show, I interview AI x-risk researchers about their work and try to bring their underlying views about AI x-risk research into the open: why do they think what they're doing matters, and which research avenues do they find more or less promising?
The main difference is that in The Filan Cabinet, I talk about whatever I want to talk about, while still maintaining the goal of helping my audience understand my guests’ perspectives. To give you some sense of the show’s range, the first four episodes are about:
A secondary goal of the podcast is to give me practice at interviewing well, in the hope that this practice improves AXRP. With this in mind, I've optimized the production process for speed: I do less research before each interview, and I do not release transcripts for episodes. With luck, this will let me release episodes more frequently without sacrificing too much quality.
If you would like to listen to the show, you can search “The Filan Cabinet” on your podcast app of choice, or just click here to see it on Google Podcasts. You can also see announcements of new episodes on this Twitter account. You should see some new episodes being released in 2023.