"Statistically significant results" mean that there's a 5% chance that results are wrong in addition to chance that the wrong thing was measures, chance that sample was biased, chance that measurement instruments were biased, chance that mistakes were made during analysis, chance that publication bias skewed results, chance that results were entirely made up and so on.
"Not statistically significant results" mean all those, except chance of randomly mistaken results even if everything was ran correct is not 5%, but something else, unknown, and dependent of strength of the effect measured (if the effect is weak, you can have study where chance of false negative is over 99%).
So whether or not results are statistically significant really isn't that useful to know.
For example, here's a survey of civic knowledge. Plus or minus 3% measurement error? Not this time: they just completely made up the results.
Take-home exercise: what do you estimate the Bayesian probability of published results being wrong to be?
Wrong. It means that the researcher defined a class of results such that the class had less than a 5% chance of occurring if the null hypothesis were true, and that the actual outcome fell into this class.
There are all sorts of things that can go wrong with that, but, even leaving all those aside, it doesn't mean there's a 5% chance the results are wrong. Suppose you're investigating psychic powers, and that the journals have (as is usually the case!) a heavy publication bias toward positive results. Then the journal will be full of statistically significant results and they will all be wrong.
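For concreteness, here's a back-of-the-envelope Bayes calculation with made-up numbers: a tiny prior that psychic powers exist, the standard 5% threshold, and a generous 80% power. All three values are hypothetical.

```python
def posterior_real_given_significant(prior, power=0.8, alpha=0.05):
    """P(effect is real | significant result), by Bayes' theorem.
    prior: P(effect is real) before the study.
    power: P(significant | effect is real).
    alpha: P(significant | effect is not real), the false positive rate."""
    true_pos = prior * power          # real effect, detected
    false_pos = (1 - prior) * alpha   # no effect, but p < 0.05 anyway
    return true_pos / (true_pos + false_pos)

# Prior of 1 in 100,000 that psychic powers exist: even a "significant"
# result leaves the posterior probability far below 1%.
print(posterior_real_given_significant(1e-5))
```

And since the journal only prints the significant results, its pages end up filled with findings that are each, individually, almost certainly false positives.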
I'm confused by your remark. "5% chance of false positive" obviously means P(positive result | null hypothesis true) = 5%; P(null hypothesis true | positive result) is subjective and has no particular meaningful value, so I couldn't have been talking about that.
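The first quantity is easy to check by simulation. A sketch, assuming a one-sample z-test with known variance (the sample size and trial count are arbitrary): under a true null, the rejection region |z| > 1.96 fires about 5% of the time, by construction.

```python
import math
import random

random.seed(1)

def null_rejection_rate(n=50, trials=20000):
    """P(positive result | null hypothesis true), estimated by running
    many studies where the null is true (true mean is exactly 0)."""
    hits = 0
    for _ in range(trials):
        sample = [random.gauss(0.0, 1.0) for _ in range(n)]
        z = (sum(sample) / n) / (1.0 / math.sqrt(n))
        if abs(z) > 1.96:  # the "class of results" with 5% null probability
            hits += 1
    return hits / trials

# Close to 0.05, as the significance threshold guarantees.
print(null_rejection_rate())
```

P(null hypothesis true | positive result), by contrast, can't be computed from the test alone; it depends on the prior, which is exactly the point of the exchange above.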