Failures to set up or follow proper experimental procedures (giving hints, presentation that is not fully random, etc.), or any other slight biasing influence, will produce a small spurious effect. With low n this won't reach statistical significance, but with high n it will look extremely statistically significant.
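To make this concrete, here's a small simulation (my own illustration, not from the paper): a "coin" with a tiny procedural bias of 51% instead of 50%. The bias and sample sizes are arbitrary choices; the point is just that the same puny effect is invisible at n = 100 but overwhelmingly "significant" at n = 100,000.

```python
import math
import random

def two_sided_p(successes, n, p0=0.5):
    # Two-sided p-value for a binomial test, via the normal approximation
    mean = n * p0
    sd = math.sqrt(n * p0 * (1 - p0))
    z = abs(successes - mean) / sd
    return math.erfc(z / math.sqrt(2))

random.seed(0)
bias = 0.51  # a tiny biasing effect, not a real phenomenon
for n in (100, 100_000):
    hits = sum(random.random() < bias for _ in range(n))
    print(f"n={n}: {hits} hits, p = {two_sided_p(hits, n):.2e}")
```

The effect size is identical in both runs; only the sample size changed, and with it the p-value swings from "nothing" to "astronomical".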
That's true; statistical significance by itself isn't a very informative statistic. My rule of thumb is to look at both the p-value and the effect size (Cohen's d).
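A sketch of that rule of thumb in action (my own toy example, assuming Cohen's d as the effect-size measure): two groups whose true means differ by a negligible 0.05 standard deviations. At a large n, the p-value comes out tiny while d stays near zero, which is exactly the pattern that should make you suspect bias rather than a real effect.

```python
import math
import random
import statistics

random.seed(1)
n = 50_000
# Two groups whose true means differ by a hair (0.05 sd): a negligible effect
a = [random.gauss(0.00, 1.0) for _ in range(n)]
b = [random.gauss(0.05, 1.0) for _ in range(n)]

# Cohen's d: mean difference divided by the pooled standard deviation
pooled_sd = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
d = (statistics.mean(b) - statistics.mean(a)) / pooled_sd

# Two-sample z test (normal approximation is fine at this n)
se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
z = (statistics.mean(b) - statistics.mean(a)) / se
p = math.erfc(abs(z) / math.sqrt(2))
print(f"d = {d:.3f}, p = {p:.2e}")
```

Reading p alone says "highly significant"; reading d alongside it says "an effect too small to care about".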
According to the New Scientist, Daryl Bem has a paper to appear in the Journal of Personality and Social Psychology which claims that participants in psychological experiments are able to predict the future. A preprint of this paper is available online. Here's a quote from the New Scientist article:
Question: even assuming the methodology is sound, given experimenter bias, publication bias, and your prior on the existence of psi, what sort of p-values would you need to see in that paper in order to believe, with say 50% probability, that the effect measured is real?
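One way to frame the question is a back-of-the-envelope Bayes calculation. The model below is my own crude sketch, not anything from the paper: assume a real effect would essentially always be detected, and lump experimenter bias and publication bias into an effective false-positive rate that can be much larger than the nominal p-value. All the numbers (the 1-in-a-million prior, the 1% effective false-positive rate) are illustrative assumptions.

```python
def posterior_real(prior, p_value, p_report_if_false=None):
    """Posterior probability the effect is real, given a 'significant' result.

    Crude model (illustrative assumptions, not the thread's claim):
    - if the effect is real, the experiment detects it (likelihood ~ 1);
    - if it is not real, a significant result still gets reported with
      probability p_report_if_false, folding experimenter bias and
      publication bias in on top of the nominal false-positive rate.
    """
    if p_report_if_false is None:
        p_report_if_false = p_value  # no bias: nominal p-value only
    prior_odds = prior / (1 - prior)
    likelihood_ratio = 1.0 / p_report_if_false
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A 1-in-a-million prior on psi, with biases inflating the effective
# false-positive rate to 1%: even p = 1e-8 on paper falls far short of 50%.
print(posterior_real(1e-6, 1e-8, p_report_if_false=0.01))
```

The takeaway is that once the effective false-positive rate is dominated by bias rather than by the reported p-value, no p-value printed in the paper can move you past 50%; only evidence about the biases themselves can.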