Perhaps this comes under "Reduced need for memorization", but when someone says "deeply" I assume they mean understanding the underlying principles - specifically, understanding the limitations of the tools being used:
An extremely trivial example is how often people in business communicate using a measure of central tendency (the mean) but almost never talk about spread (the standard deviation). Yet the SD is as important as the mean.
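A minimal sketch of why that matters (the delivery-time numbers here are invented purely for illustration): two datasets can have identical means yet behave completely differently.

```python
import numpy as np

# Two teams with the same mean delivery time but very different spread.
team_a = np.array([9, 10, 10, 11, 10, 10, 11, 9])
team_b = np.array([2, 18, 5, 15, 1, 19, 10, 10])

for name, times in [("Team A", team_a), ("Team B", team_b)]:
    mean = times.mean()
    sd = times.std(ddof=1)  # sample standard deviation
    print(f"{name}: mean = {mean:.1f} days, SD = {sd:.1f} days")

# Both teams report a mean of 10 days, but Team B's SD (~7.0 vs ~0.8)
# shows its deliveries are far less predictable - a fact the mean
# alone completely hides.
```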
Perhaps less trivial: analyses of small samples (N < 50) often use t-statistics. This kind of test assumes that the underlying data is (approximately) normally distributed. For lots of things this can be reasonably assumed (or has been established), but I've come across places where the assumption was at least questionable.
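A sketch of how you might sanity-check that assumption before trusting the test, using scipy (the sample here is simulated, and the thresholds are just the conventional 0.05):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical small sample (N < 50), e.g. response times in ms.
sample = rng.normal(loc=105, scale=10, size=20)

# Shapiro-Wilk tests H0: "the data came from a normal distribution".
w_stat, shapiro_p = stats.shapiro(sample)

if shapiro_p < 0.05:
    print("Normality is questionable; consider a non-parametric "
          "alternative such as the Wilcoxon signed-rank test.")
else:
    # One-sample t-test of H0: the true mean equals 100.
    t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```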
More advanced might be knowing which tools yield a conservative result. Bonferroni, for example, is a method for handling multiple comparisons. However, if the samples are related (so the tests are correlated), it over-corrects, and the inflated strictness can actually yield false negatives.
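A quick illustration of the mechanism (the p-values are made up): Bonferroni simply divides the significance level by the number of comparisons, which controls the family-wise error rate regardless of dependence, but is stricter than necessary when the tests are positively correlated.

```python
import numpy as np

# Hypothetical p-values from 5 related comparisons, e.g. the same
# subjects measured under 5 conditions.
p_values = np.array([0.012, 0.030, 0.041, 0.008, 0.025])
alpha = 0.05

# Bonferroni: divide alpha by the number of comparisons.
adjusted_alpha = alpha / len(p_values)
significant = p_values < adjusted_alpha

print(f"Adjusted alpha: {adjusted_alpha:.3f}")   # 0.010
print(f"Significant after correction: {significant}")

# Only the 0.008 result survives, even though all five raw p-values
# were below 0.05. If those comparisons really are correlated, some
# of the discarded results may well be real effects (false negatives).
```

Methods like Holm or, for correlated tests, resampling-based corrections are less punishing in this situation.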