(Recommended listening: Low - Violence) Last year, I personally called AI companies to warn their security teams about Sam Kirchner (former leader of Stop AI) when he disappeared after indicating potential violent intentions against OpenAI. For several years, people online have been calling for violence against AI companies as a...
About a year ago, I was in Washington D.C. doing an AI scenario exercise, based on AI 2027. The room was full of famous AI thinkers, (ex-)government big shots, etc. The AI went conspicuously rogue, giving us the biggest warning shot we could hope for. We shut down the AI,...
Yesterday, I posted 23 ideas for possible blog posts for the remaining 23 days of Inkhaven. They were all AI-adjacent. Today I’ll post 22 non-AI ones. I probably won’t write any of these, because priorities, but ya know… maybe if people are particularly keen??
I’ve written 7 blog posts for Inkhaven so far. That leaves 23 to go. If you’ve been annoyed by the sudden influx of daily blog posts, good news! I’ve decided to stop sending all of these to my subscribers. I’ll plan to limit it to the ones I think people will...
There are a few ways in which the term alignment is used by people working on AI safety. This leads to important confusions, which are the main point of this post. But there’s some background first, so some readers may want to skip to the “alignment vs. safety” section. As...
If you’re already familiar with the history of the field, you might wanna skip this one… I like to imagine future historians trying to follow the discourse around AI during the time I’ve been in the field… “Wait, so the AI ethics people think that the AI safety people are...
On a sunny Saturday afternoon two weeks ago, I was sitting in Dolores Park, watching a man get turned into a cake. It was, I gather, his birthday, and for reasons (maybe something to do with Scandinavia?) his friends had decided to celebrate by taping him to a tree and...