gwern comments on Open Thread, September, 2010-- part 2 - Less Wrong
So, given that we've got a high concentration of technical people around here, maybe someone can answer this for me:
Could it ever be possible to do some kind of counter-data mining?
Everybody has some publicly-available info on the internet -- information that, in general, we actually want to be publicly available. I have an online presence, sometimes under my real name and sometimes under aliases, and I wouldn't want to change that.
But data mining is, of course, a potential privacy nightmare. There are algorithms that can tell whether you're gay from your Facebook page, and reassemble your address and Social Security number by aggregating apparently innocuous web content. There's even a tool (www.recordedfuture.com) that purportedly helps clients like the CIA predict subjects' future movements. But so far, I've never heard of attempts to make data mining harder for the snoops. I'm not talking about advice like "Don't put anything online you wouldn't want in the newspaper." I'm interested in technical solutions -- the equivalent of cryptography.
It may be a pipe dream, but it might not be impossible. Here's Wikipedia background, with good additional references, on nonlinear dimensionality reduction techniques, which are one of my academic interests. (http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction) These techniques take a cloud of points in a high-dimensional space and recover the low-dimensional manifold on which they lie -- in other words, they extract the salient structure from the data. And there are standard manifolds on which various techniques are known to fail: it's hard for algorithms to recognize the "Swiss roll," for instance.
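To see why the Swiss roll is a hard case, note that points on adjacent turns of the roll are close in the ambient space but far apart along the manifold, so any method that trusts straight-line distances gets the geometry wrong. A minimal sketch using only the standard library (the spiral parameterization and the particular points chosen are illustrative, not from any specific algorithm):

```python
import math

def swiss_roll_point(t):
    """Point on a 2-D 'Swiss roll' spiral embedded in the plane:
    t is the 1-D manifold parameter, (x, y) the ambient coordinates."""
    return (t * math.cos(t), t * math.sin(t))

def ambient_distance(t1, t2):
    """Straight-line (Euclidean) distance in the embedding space."""
    (x1, y1), (x2, y2) = swiss_roll_point(t1), swiss_roll_point(t2)
    return math.hypot(x2 - x1, y2 - y1)

def geodesic_distance(t1, t2, steps=10_000):
    """Arc length along the spiral, approximated by a fine polyline."""
    total, prev = 0.0, swiss_roll_point(t1)
    for i in range(1, steps + 1):
        cur = swiss_roll_point(t1 + (t2 - t1) * i / steps)
        total += math.hypot(cur[0] - prev[0], cur[1] - prev[1])
        prev = cur
    return total

# Two points exactly one full winding apart, on adjacent turns.
t1 = 3 * math.pi
t2 = t1 + 2 * math.pi
print(ambient_distance(t1, t2))   # small: the turns sit close in the plane
print(geodesic_distance(t1, t2))  # much larger: far apart along the manifold
```

Techniques like Isomap succeed on the Swiss roll precisely by estimating the second distance instead of the first; methods that can't are the "disappointments" mentioned above.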
These hard cases are disappointments for the data miner, but they ought to be opportunities for the counter-data miner, right? Could it be possible to exploit the hard cases to make it more difficult for the snoops? One practical example of something like this already exists: the distorted letters in a CAPTCHA are "hard cases" for automated image recognition software.
Does anybody have thoughts on this?
My general thought is that so little data is needed to identify you that the dataset can be enormously noisy and still identify you. And if your fake data is just randomly generated, isn't it all just noise?
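The "so little data is needed" point can be illustrated with synthetic data in the style of Sweeney's famous result that ZIP code, birth date, and sex identify most Americans. The population size and attribute ranges below are made up for illustration:

```python
import random
from collections import Counter

random.seed(0)

# Synthetic population: each person carries only three "innocuous" attributes.
N_PEOPLE = 10_000
people = [
    (
        random.randrange(100),       # ZIP code (100 possible values)
        random.randrange(365 * 80),  # birth date (~29,200 possible days)
        random.randrange(2),         # sex
    )
    for _ in range(N_PEOPLE)
]

# How many people are uniquely pinned down by just this triple?
counts = Counter(people)
unique = sum(1 for p in people if counts[p] == 1)
print(f"{unique / N_PEOPLE:.1%} of the population is uniquely identified")
```

Three attributes span millions of combinations, so almost everyone in a town-sized population is unique; adding random decoy records doesn't help much, because the attacker only needs the handful of real attributes that match external records.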
(I saw a paper about medical datasets, I think, that showed that you couldn't anonymize the data successfully and still have a useful dataset; I don't have it handy, but it's not hard to find people arguing the same about the Netflix dataset: http://33bits.org/2010/03/15/open-letter-to-netflix/ )
I've heard about the medical datasets.
Noise is a pretty interesting thing, and the possibility of "denoising" depends a lot on the kind of noise. White noise is the easiest to get rid of; malicious noise, which isn't random but targeted to be "worst-case," can thwart denoising methods that were designed for white noise.
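A toy illustration of that white-vs-malicious distinction, using only the standard library (the moving-average filter and the amplitude budget are arbitrary choices for the sketch): averaging shrinks independent zero-mean noise, but an adversary who knows the filter can spend the same amplitude budget on a constant bias that averaging cannot touch.

```python
import random
import statistics

random.seed(0)
n, amplitude = 1_000, 1.0  # both noise sources obey |e_i| <= amplitude

# The true signal is identically zero, so whatever survives filtering is noise.
white = [random.uniform(-amplitude, amplitude) for _ in range(n)]
adversarial = [amplitude] * n  # worst case: the whole budget on a constant offset

def moving_average(xs, window=101):
    """A simple denoiser designed for independent zero-mean noise."""
    half = window // 2
    return [
        statistics.fmean(xs[max(0, i - half): i + half + 1])
        for i in range(len(xs))
    ]

residual_white = statistics.fmean(abs(v) for v in moving_average(white))
residual_adv = statistics.fmean(abs(v) for v in moving_average(adversarial))
print(residual_white)  # close to 0: the filter does what it was designed for
print(residual_adv)    # still 1.0: the filter removes none of it
```

The asymmetry runs both ways: it's bad news when the adversary is the noise-maker, but for the counter-data-mining idea above, the defender gets to be the one injecting worst-case structure against a known class of mining algorithms.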