I write fiction. I'm also interested in how AI is going to impact the world. Among other things, I'd prefer that AI not lead to catastrophe. Let's imagine that I want to combine these two interests, writing fiction that explores the risks posed by AI. How should I go about doing so? More concretely, what ideas about AI might I try to communicate via fiction?
This post is an attempt to partially answer that question. It is also an attempt to invoke Cunningham's Law: I'm sure there will be things I miss or get wrong, and I'm hoping the comments section might illuminate some of these.
Holden's Messages
A natural starting point is Holden's recent...
Thanks for the links, Karl. It wasn't my focus in this post, but I'm also a fan of stories that attempt to map out plausible possible futures, so your project sounds really interesting.