I don't see why fictional evidence shouldn't be treated exactly the same as real evidence, as long as you don't mix up the referents. There is no fundamental reason to single out reality (though there might be some practical use in specializing human minds for reality rather than fiction). Generalizing from real evidence to fiction is as much a fallacy as generalizing from fictional evidence to reality.
A fiction text is a model of a fictional territory: it can be used to get some idea of what that territory is like (inferred under a prior over fictional worlds), and to formulate better models, or models of similar worlds (fanfiction). Statements made in a fiction text can be false about the fictional territory, in which case they are misleading and interfere with learning about it. Other statements are good evidence about it. One should be confused by false statements about a fictional territory, but shouldn't be confused by true statements about it. And so on and so forth.
I think not mixing up the referents is the hard part. One can properly learn from a fictional territory only when one can clearly see in which ways it is a good representation of reality, and in which ways it is not.
I may learn from an action movie the value of grit and what it feels like to have principles, but I wouldn't trust it on gun safety or CPR.
It's rare for fiction to be self-consistent and still preserve drama. Acceptable breaks from reality will happen. Sure, sometimes you get a hard SF universe where the alternate reality is very lawful and the plot arises from the logical consequences of those laws (as often happens in rationalfic), but more often than not, things happen "because it serves the plot".
My point is: yes, I agree, one should be confused only by a lack of self-consistency, fiction or not. Yet, given the vast amount of fiction set in something close to the real Earth, by the time you're skilled enough to tell apart what's transferable and what isn't, you've already done most of the learning.
That's not counting the meta-skill of detecting inconsistencies, which is indeed extremely useful, fiction or not, though I'm still unclear where exactly one learns it.
A mostly off-topic note on the conceptual picture I was painting. The fictional world was intended to hold entities of the same ontological kind as those of the real world. A fiction text serves as a model of it and evidence about it, not as a precise definition. Thus an error in the text is not directly an inconsistency in the text; the text is meant to be compared against the fictional world, not against itself. Of course, in practice the fictional world is only accessible through a text, probably the same one where we are seeing the error, but there is this intermediate step of going through a fictional world (using another model, the state of uncertainty about it). Similarly, the real world is only accessible through human senses, yet it would be unusual to say that errors in statements about the world are inconsistencies in sensory perception.
This is definitely the big "useful" thing I get out of fiction: being forced to experience things a different way. I have so many examples of reading or watching something that made me notice an aspect of my self-image, my thinking, or my personality that was getting in my way.
With an adequate amount of doubt about fictional evidence, fiction definitely makes me stronger.
(And it's also awesome)
Fiction can be inaccurate in ways you do not notice. Then what is your reaction to the fiction about?
For example, someone may incorrectly believe that "if we do X, Y will happen". Then they write a novel where the protagonist does X, Y happens as a result, and everyone is happy. You empathize with the protagonist's efforts, and you are very happy about their success at the end of the novel.
If your conclusion is "X is good", I believe that is a serious mistake. It leaves you vulnerable to propaganda: as long as someone correctly guesses your favorite Y, they can make you support any X by writing a plausible-sounding novel where X leads to Y.
If your conclusion is merely "Y is good" (without buying the supposedly incorrect premise that X would lead to Y), uhm, maybe. There is the same problem at a smaller level: the novel says that situation Y created an emotional reaction Z in people, but maybe in real life the emotional reaction would be different. People are bad at predicting what will make them happy. Things that seem awesome in far mode can be quite boring in near mode, and vice versa.
When you react to fictional evidence, I think it is very difficult not to absorb some of the "fictional causality", which may actually be wrong.
Alternate title: learning from fictional evidence. I've seen echoes of this idea elsewhere but couldn't find a description that suits me.
My main idea is: you can update from your observed reaction to fiction and/or counterfactuals.
The fallacy of generalizing from fictional evidence happens when you treat events that happened in fiction, which follow the rules of good writing rather than verisimilitude, as observations of reality. The facts may be wrong, but if you suspend your disbelief for a while and get immersed in the story, your emotional reaction will be real.
Compare this to counterfactual reasoning or forecasting scenarios. You explicitly build alternate versions of reality, of which you will actually experience only one. (Others may belong to other Everett branches if they're consistent with your past, but not to yours!)
Here as well, you will be considering your reaction to those alternate versions. Is this possible future worth avoiding? Worth fighting for? Ideally, this evaluation should be separate from how much probability mass you assign to each particular scenario.
In his original essay, Yudkowsky critiques the jump from "sci-fi displays a compelling narrative" to "sci-fi portrays realistic/probable scenarios". Indeed, there is a jump that shouldn't be made between a cherry-picked, manufactured scenario and careful forecasting.
Even so, the fact that one could believe it tells you something about the current state of affairs.
Not about what is actually happening (or has happened, or would happen), but about you. The fact that you could believe a particular scenario says something about your ability to be confused by fiction versus reality. If, when you break out of immersion and no longer suspend disbelief, the belief remains, that should give you pause.
More importantly, even if the fictional narrative gets dismissed, the emotional impact shouldn't be. I can't find the original quote, something like: "nonfiction is about conveying facts, fiction is about conveying experience". Your feelings are valid. One point of rationalist fiction is to immerse you in situations where characters solve problems, reason carefully, and work with their emotions instead of against them.
A personal example: consider this Doctor Who clip, where a struggling Vincent van Gogh is taken to 2010 to receive high praise from a museum curator. It never happened. Yet the acting and the tragedy inspire me and give me motivation. The same goes for HPMoR, which definitely never happened and still moved me.
This is "fictional evidence" not by the plot being real, but by the evidence of a reaction to fiction being real.
So what kind of updates can you make from it? For me, it helps build a better model of my own motivation.
That my judgment about some art or page of mine being worthless should be anchored by other people's feelings rather than my own (occasional) despair. That there is strength to be found in solving problems collectively. That it's okay to pause and think, and take time for yourself.
All of this and more, I learned from fiction before experiencing it myself. The same goes for all the existential risk scenarios that didn't happen, and hopefully never will: my reaction to them still informs my decisions today.
This is distinct from Wei Dai's fictional insight, where fiction exposes you to a particular idea/hypothesis/thought-pattern you hadn't considered before and expands your mindset. Here, I focused on how your relationship with not-our-universe can inform your relationship with reality, by virtue of being experienced by the same brain.
Thanks to Adam Shimi for his feedback on the draft of this post.