As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT.
The sub-series runs approximately 3 hours and 40 minutes in total, during which Dr. Park and I discuss StakeOut.AI, a nonprofit he cofounded with Harry Luk and one other cofounder, whose name has been withheld due to the requirements of her current position.
The nonprofit had a simple but important mission: make the adoption of AI technology go well for humanity. Unfortunately, StakeOut.AI had to dissolve in late February of 2024 because no grantmaker would fund it. Although it is certainly disappointing that the organization is no longer operating, all three cofounders continue to contribute positively toward improving our world in their current roles.
If you would like to dig deeper into Dr. Park’s work, check out his website and Google Scholar, or follow him on Twitter!
Since the interview is so long, I totally get wanting to jump right to the parts you are most interested in. To help with that, I have included chapter timestamps in the show notes, so you can quickly find the content you're looking for. In addition, I will give a brief overview of each episode here, without going into too much detail. You can find even more sources on the Into AI Safety website.
Episode 1 | StakeOut.AI Milestones
Milestones
StakeOut.AI's AI governance scorecard [1] (see page 3)
Hollywood informational webinar
Amplifying public voice through open letters [2] [3] and regulation suggestions [4] [5]
Check out the Into AI Safety podcast on Spotify, Apple Podcasts, Amazon Music, YouTube Podcasts, and many other podcast listening platforms!
Episode 2 | The Next AI Battlegrounds
(if you're only gonna read one, read [13])
Episode 3 | Freeform
Acknowledgements
This work was made possible by AI Safety Camp.
Special thanks to the individuals who helped along the way:
Dr. Peter Park; Chase Precopia; Brian Penny; Leah Selman; Remmelt Ellen; Pete Wright