I've been working in differential privacy for the past 5 years; given that it doesn't come up often unprompted, I was surprised and pleased by this question.
Short answer: no, generalizability does not imply differential privacy, although differential privacy does imply generalizability, to a large degree.
The simplest reason is that DP is a worst-case property: it must hold for all possible datasets. So if there is even one pathological dataset on which your algorithm overfits, it's not DP, yet you can still credibly say that it generalizes.
(I have to go, so I'm posting this as is; I'll add more later if you're interested.)
Thanks, and yes, please add more! For example, if DP implies generalization, then why isn't everyone trying to combine backprop with DP principles to get a learning strategy that is both more robust and better at privacy? Or is that what everyone tried, and it's trickier than it seems?
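(For context on what "backprop with DP" looks like in practice: the standard recipe is DP-SGD from Abadi et al. 2016, which clips each example's gradient and adds Gaussian noise before the update. A minimal numpy sketch of one step; the function name, hyperparameter names, and values are illustrative, not from any particular library:)

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD step: clip per-example gradients, average, add noise."""
    rng = rng or np.random.default_rng(0)
    # 1. Clip each example's gradient so no single person's data
    #    can move the update by more than clip_norm.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    # 2. Sum, add Gaussian noise calibrated to the clip norm, average.
    n = len(clipped)
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, noise_mult * clip_norm,
                               size=params.shape)) / n
    # 3. Ordinary gradient step on the noisy average.
    return params - lr * noisy_mean
```

(The clipping is what bounds any individual's influence; the noise scale relative to the clip norm is what determines the privacy budget. The cost is the noise and the per-example gradient computation, which is part of why this isn't just applied everywhere by default.)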
edit: see below for clarifications by domain expert rpglover64 and a good selection of references from the gears to ascenscion. My one (admittedly long) sentence takeaway: it's clear that training does not automatically lead to DP; it's unclear whether DP can always, or only seldom, help training; it's likely that easy algorithms are not available yet; and it's unlikely that finding one is low-hanging fruit.
From Wikipedia, "an algorithm is differentially private if an observer seeing its output cannot tell if a particular individual's information was used in the computation". In other words, if some training process asymptotically converges toward generalizable knowledge only, then it should tend to become differentially private.
…or so it seems to me, but actually I have no idea whether that's common knowledge among ML- or crypto-educated folks, or whether it's a pure personal guess with no reason to believe it. What do you see as the best argument for or against that idea? Any guess on how to prove or disprove it?
Extra Good Samaritan points: my English sucks, so any comment rewriting this post in good English, even for minor details, is a great help. Thank you!
This is version 0.1