I'm not sure I fully agree. Sure, if I can simulate draws from my posterior beliefs about reality, then I can reason from what those draws tell me. Scientists do this all the time when they simulate from a posterior distribution, and we hardly call this "fictional evidence," although, in a weak sense, it is. If that is all you are claiming, then I agree with you.
But when we create narratives to account for the evidence we see, we're almost never doing so by strictly drawing from a well-confirmed posterior. We're almost always doing whatever is simplest and whatever gratifies our immediate urges for availability and confirmation. In this sense, how can you really trust the narratives you generate in fiction? Sure, they might seem plausible to you, but how do you know? Have you really gone and made a probability calculation describing the whole chain of propositions necessary for your fictional narrative to be true? Almost surely not.
Therein lies great danger when you say something like: "... but if the story is possible, it might as well have happened, since everything can happen once." I suggest that your Starbucks/Main Street example is a bad one, since these are rather specific details over which a given person's daily experience is likely to produce an accurate posterior distribution. Most instances of narrative fallacy are not this simple, and it would be a little disingenuous to claim that examples like that somehow lend validity to the entire practice of generalizing from fictional evidence.
More to the point, you should consider the LW post The Logical Fallacy of Generalizing from Fictional Evidence.
And in particular, in re-reading that, I noticed that Eliezer had hit upon Andrew Gelman's point as well:
Yet in my estimation, the most damaging aspect of using other authors' imaginations is that it stops people from using their own. As Robert Pirsig said:
"She was blocked because she was trying to repeat, in her writing, things she had already heard, just as on the first day he had tried to repeat things he had already decided to say. She couldn't think of anything to write about Bozeman because she couldn't recall anything she had heard worth repeating. She was strangely unaware that she could look and see freshly for herself, as she wrote, without primary regard for what had been said before."
Remembered fictions rush in and do your thinking for you; they substitute for seeing—the deadliest convenience of all.
I suggest that your Starbucks/Main Street example is a bad one, since these are rather specific details over which a given person's daily experience is likely to produce an accurate posterior distribution.
There's some confusion about the example, due to my unclear writing: I meant to argue that the map of Nashville would not be useful for navigating Memphis. My thesis (however buried) was that a person can use anecdotes (fabricated or not) to evaluate how compelling an idea is. By analogy with the locations of Starbucks in different cities, I don't buy...
Andrew Gelman has a post up today discussing a particularly illustrative instance of narrative fallacy involving the recent plagiarism discussion surrounding Karl Weick. I think there are also some interesting lessons in there about generalizing from fictional evidence.
In particular, Gelman says, "Setting aside [any] issues of plagiarism and rulebreaking, I argue that by hiding the source of the story and changing its form, Weick and his management-science audience are losing their ability to get anything out of it beyond empty confirmation."
I am wondering if anyone has explicitly looked into connections between generalizing from fictional evidence and confirmation bias. It sounds intuitively plausible that if you are going to manipulate fictional evidence for your purposes, you'll almost always come out believing the evidence has confirmed your existing beliefs. I would be highly interested in documented accounts of the opposite, where fictional evidence actually served as a corrective.
For what it's worth, I personally enjoy a watered-down version of the moral that Weick attempts to wring from the story discussed in Gelman's post. My high school math teacher always used to say to us, "When you don't know what to do, do something." I think he said it because he was constantly pissed about questions left completely blank on his math exams, and wanted students to write down scribblings or ideas so he could at least give them some partial credit, but it has been more motivational than that for me.