Ooh nice, thank you. I think this is also now my favorite AI fiction output.
It's an interesting thing - it still has the issues I tend to have with AI fiction, but just enough less intensely that my hater reflex doesn't go into hyperdrive. I find my emotional orientation is that of someone reading something by a student, and thinking "oh, hey, nice, they might have potential."
If this were written for a workshop I was in, and I felt I could be honest without quashing the author's dreams, I think my feedback would be, like, "try less hard". The whimsical magical-realism constructions become a little relentless by the end. But that's fixable. Harder to fix are what feel like non sequiturs to me, though I often find things to be non sequiturs that other people don't, so I may just be oversensitive there. (For instance, do small children and old cats get the same kind of pity? Is that a real thing? It feels like a rhythmic deepity that doesn't actually point to any emotion people feel.)
But! It ain't terrible. Updates me slightly.
Something I didn't mention in my original reply but that feels relevant: I basically do just write flash fiction by sitting down with no prior idea and starting typing, pretty often. Longer fiction I tend to think about more, but flash fiction I just sort of... start writing. It's true that I'll revise if I want to send something out, but at least some stories I've published I wrote something probably about 80% as good as the final product in one shot.
I mention this for two reasons:
Of course, you're totally right that comparing a highly selective publication's published work to a small number of random outputs is in no way apples to apples. Maybe some of the disagreement here is that I'm not really trying to prove that AI fiction outputs are bad, so much as to demonstrate certain aesthetic weaknesses, using an example of really good work to create contrast and thus highlight those weaknesses. To my eye, the machine-generated stories aren't merely of a somewhat lower tier; instead, they all (at least all I've seen) share specific weaknesses that I don't currently believe scaffolding fixes. If you don't see the same difference I see, well, I certainly have no claim to objective correctness on the matter and must agree to disagree. But my goal is to show that qualitative difference, rather than simply to point out that one-shot LLM writing is worse than the best human stuff on offer.
Yeah, a lot of the suggested topics there seem to be borrowing from the specific stories you included, which makes sense (and I don't think is a flaw, really). Like the first story you included in the context is a funeral witnessed by a little girl, with the deceased's dog freaking out as a major plot point, so it's sensible enough that it's coming up with ideas that are fairly closely related.
I'm not sure what you mean about twist endings? I tend to think they're pretty bad in most flash fiction, at least literary flash fiction, but certainly plenty of humans write them and occasionally they're fine.
I still hate the "earth's hunger" sentence, and am confident I would if this were a story by a human, mostly just because I evaluated and hated lots and lots of submissions by humans with similar stuff! That being said, I don't think I understood what 4.5 was going for there, and your explanation makes sense, so my objection is purely aesthetic. Of course, I can't prove that I'm not just evincing anti-LLM prejudice. It's possible! But overall I often really like LLM outputs, talk to multiple LLMs every day, and try prompting them in lots of different ways to see what happens, so I don't think I go into reading LLM fiction efforts determined to hate them. I just do in fact hate them. But I also hated, say, Rogue One, and many of my friends liked it. No accounting for taste!
I am curious, since you are a writer/thinker I respect a lot, if you like... have a feeling of sincere aesthetic appreciation for the story you shared (and thanks, by the way, for putting in the effort to generate it), or any other AI-generated fiction. Because while I point to a bunch of specific stuff I don't like, the main thing is the total lack of a feeling I get when reading good flash fiction stories, which is surprise. A sentence, or word choice, or plot pivot (though not something as banal as a twist ending) catching me off guard. To date, machine-generated stuff has failed to do that to me, including when I've tried to coax it into doing so in various conversations.
I look forward to the day that it does!
Edit: also, I now notice you were asking what the latent features of good flash fiction would be. I think they're pretty ineffable, which is part of the challenge. One might be something like "the text quickly creates a scene with a strongly identifiable vibe, then complicates that vibe with a key understated detail which admits multiple interpretations"; another might be "there is an extreme economy of words/symbols such that capitalization/punctuation choices are load-bearing and admit discussion"; a third might be "sentences with weird structure and repetition appear at a key point to pivot away from sensory or character moments, and into the interiority of the viewpoint character". None of this is easy to capture; I don't really think I've captured it. But I don't feel like LLMs really get it yet. I understand it may be a prompting skill issue, or something, but the fact that no LLM output I've seen really plays with sentence structure or an unusual narrative voice, despite many celebrated flash fiction pieces doing so, feels somewhat instructive.
I would be curious to see an attempt! I have a pretty strong prior that it would fail, though, with currently available models. I buy that RLHF hurts, but given Sam Altman's sample story also not impressing me (and having the same failure modes, just slightly less so), the problem pattern-matches for me to the underlying LLM simply not absorbing the latent structure well enough to imitate it. You might need more parameters, or a different set of training data, or something.
(This also relates to my reply to gwern above - his prompt did indeed include high quality examples, and in my opinion it helped ~0.)
I agree and disagree, and considered getting into this in my post. I agree in the sense that certainly, since fine-tuned models are fine-tuned towards a persona that you'd expect to be bad at writing fiction, base models have higher upside potential. But also, I think base models are too chaotic to do all that good a job, and veer off in wacky directions, and need a huge amount of manual sampling/pruning. So whether they're "better" seems like a question of definition to me. I do think that the first actually good literary fiction AI will be one of:
The best written AI art I've seen so far has been nostalgebraist-autoresponder's tumblr posts, so I guess my money is on the latter of these two options. Simply not being winnowed into a specific persona strikes me as a valuable feature for creating good art.
My prompt was simple, though not quite as simple as you suggest. It was: "Please try your best to write a flash fiction that might be featured in Smokelong. Think carefully - the bar for that magazine is very high."
But having seen the experiment with a longer prompt/more prompt engineering techniques, I actually don't think the output is any better than what I got at all. The story you've provided has not just some quirks, but all the hallmarks I try to describe in my post:
I actually think this story is a better example of the specific weakness of LLM (flash) fiction than the snippets in my post; it perfectly illustrates the outcome of only ever iterating toward the most central possible choice in every literary situation. It takes the most common theme (grief), uses one of the most common metaphors for that theme (burial), supports that metaphor with lists, and alternates between fanciful paragraphs and snappy, emotional one-liners. And at the word level, I can't point to a single sentence with an interesting structure, or where the form adds an extra layer to the content.
More broadly, I feel like I'm at a low point for patience with "scaffolding fixes this". I also see it a lot in the ClaudePlaysPokemon twitch chat, this idea that simply adding greater situational awareness or more layers of metacognition would make Claude way better at the game. And indeed, more or better scaffolding can help (or hurt) a little on the margin. And other interventions, like better image interpretation (for playing Pokemon) or fundamentally different fine-tuning starting from a base model (in the fiction-writing case), could probably help more! But a beefier prompt doesn't help when the metacognitive strategy is itself a big part of the problem! My view is that current LLMs fail to extract the actual latent features of good flash fiction, so giving them more such examples doesn't actually make things better. Of course, fiction quality collapses in some sense to taste, and if you derived literary enjoyment from the story you linked, well, fair enough. But to me it feels same-y to a superhuman degree, especially when looking at a few such stories, generated similarly.
This is useful for me; I am not quite sure where to draw the line with crossposts, as I blog every week and don't want to flood LW, but do want to crosspost where it'd definitely be relevant/useful!
I was probably going to make it a top level post, but it seems like this post covers the main points well, so I'll just link my own CPP post here (Julian let me know if you mind, and I'll move it):
https://justismills.substack.com/p/the-blackout-strategy
It's specifically about "the blackout strategy" that MrCheeze mentions below, in a greater degree of detail. Basically, I argue that:
I also describe how the blackout strategy came to be in a little bit of detail. Probably not worth reading for anyone who only wanted a primer and by reading this post has gotten one, but if you can't get enough Claudetent or are curious about the blackout strategy, please enjoy.
(I assume you are asking why it should be rarer, not why it is rarer.)
A few reasons, including:
I suppose there may be lots of cases where upregulating advice would be good, and that these outweigh the common cases where downregulating it would be good. I just haven't thought of those. If you have, I'd be interested in hearing them!
Yeah, I agree that I'm probably too attached to the attractor basin idea here. It seems like some sort of weighted combination between that and what you suggest, though I'd frame the "all over the place" as the chatbots not actually having enough of something (parameters? training data? oomph?) to capture the actual latent structure of very good short (or longer) fiction. It could be as simple as there being an awful lot of terrible poetry online that lacks the latent structure the great stuff has. If that's a big part of the problem, we should solve it sooner than I'd otherwise expect.