The standard form for the correlation coefficient is

cov(x, y) = N(⟨xy⟩ − ⟨x⟩⟨y⟩),

where N is a normalisation; it seems that you suppose that ⟨ff'⟩ = 0 and that ⟨f'⟩ is finite. ⟨ff'⟩ = 0 follows from boundedness, but for the derivative it is not clear. If ⟨f'⟩ on (a, b) grows more rapidly than (b − a), anything can happen.
"If ⟨f'⟩ on (a, b) grows more rapidly than (b − a)"
This cannot happen: f is assumed bounded, so the average of f' over the interval [a, b], which is just (f(b) − f(a))/(b − a), tends to zero as the bounds go to infinity.
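To spell the step out (a sketch only, writing M for an assumed bound |f(x)| ≤ M on the whole line):

```latex
% Why cov(f, f') -> 0 over [a, b] as b - a -> infinity,
% assuming f differentiable and |f(x)| <= M everywhere (M is our assumption).
\[
  \langle f' \rangle = \frac{1}{b-a}\int_a^b f'(x)\,dx
                     = \frac{f(b)-f(a)}{b-a},
  \qquad
  \bigl|\langle f' \rangle\bigr| \le \frac{2M}{b-a} \longrightarrow 0,
\]
\[
  \langle f f' \rangle = \frac{1}{b-a}\int_a^b f(x)\,f'(x)\,dx
                       = \frac{f(b)^2 - f(a)^2}{2(b-a)},
  \qquad
  \bigl|\langle f f' \rangle\bigr| \le \frac{M^2}{b-a} \longrightarrow 0,
\]
\[
  \operatorname{cov}(f, f') = \langle f f' \rangle
                            - \langle f \rangle\,\langle f' \rangle
                            \longrightarrow 0,
\]
% since |<f>| <= M stays bounded. The correlation coefficient divides this
% covariance by the standard deviations, which is where the subtlety
% mentioned below enters.
```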
The precise, complete mathematical statement and proof of the theorem does involve some subtlety of argument (consider what happens when f = sin(exp(x)), whose derivative is unbounded), but the theorem is correct.
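A quick numerical check of the claim (an illustrative sketch only, not part of the proof; it uses the tame bounded function f(x) = sin(x) rather than sin(exp(x)), which oscillates far too rapidly to sample reliably):

```python
# Sample correlation of a bounded f and its derivative f' over [0, T]
# shrinks as T grows, as the theorem predicts.
import numpy as np

def correlation_over_interval(T, n=1_000_000):
    x = np.linspace(0.0, T, n)
    f = np.sin(x)
    df = np.cos(x)  # exact derivative, avoiding finite-difference noise
    return np.corrcoef(f, df)[0, 1]

for T in (10, 100, 1000, 10000):
    print(f"T = {T:6d}  corr(f, f') = {correlation_over_interval(T):+.5f}")
```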
It is a commonplace that correlation does not imply causality, however eyebrow-wagglingly suggestive it may be of causal hypotheses. It is less commonly noted that causality does not imply correlation either. It is quite possible for two variables to have zero correlation, and yet for one of them to be completely determined by the other.
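A standard instance (my example, chosen for illustration): take x symmetric about zero and y = x², so y is completely determined by x and yet their linear correlation vanishes.

```python
# "Causality without correlation": y is a deterministic function of x,
# but corr(x, y) ~ 0, because x is symmetric about zero and y = x**2
# is an even function of x, so E[xy] = E[x] = 0.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x**2  # completely determined by x

print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:+.5f}")  # close to zero
```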