I was reading Yvain's Generalizing from One Example, which talks about the typical mind fallacy. Basically, it describes how humans assume that all other humans are like them. If a person doesn't cheat on tests, they are more likely to assume others won't cheat on tests either. If a person sees mental images, they'll be more likely to assume that everyone else sees mental images.
As I'm wont to do, I was thinking about how to make that theory pay rent. It occurred to me that this could definitely be exploitable. If the typical mind fallacy is correct, we should be able to have it go the other way; we can derive information about a person's proclivities based on what they think about other people.
E.g., most employers ask "have you ever stolen from a job before," and have to deal with misreporting because nobody in their right mind will say yes. However, imagine that the typical mind fallacy is correct. The employers could instead ask "what do you think the percentage of employees who have stolen from their job is?" and know that the applicants who responded higher than average were correspondingly more likely to steal, and the applicants who responded lower than average were less likely to steal. It could cut through all sorts of social desirability distortion effects. You couldn't get the exact likelihood, but it would give more useful information than you would get with a direct question.
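To make the idea concrete, here's a minimal sketch of that scoring scheme. The names and estimates are entirely made up for illustration; the point is just that you rank applicants by how far their base-rate estimate deviates from the group median, not that this yields an actual probability of theft.

```python
# Sketch of the indirect-question idea: rank applicants by how far their
# estimate of the base rate of employee theft deviates from the group
# median. All names and numbers below are hypothetical.
from statistics import median

# Each applicant's answer to "What percentage of employees have stolen
# from a job?"
estimates = {"Alice": 10, "Bob": 60, "Carol": 25, "Dan": 45}

med = median(estimates.values())  # the group's median estimate

# Positive deviance = this applicant estimated theft as more common than
# the median respondent did; under the typical-mind hypothesis that is a
# weak signal of higher personal propensity. It is only a relative
# ranking within this applicant pool, not an absolute likelihood.
deviance = {name: est - med for name, est in estimates.items()}

for name, dev in sorted(deviance.items(), key=lambda kv: -kv[1]):
    print(f"{name}: deviance {dev:+}")
```

Note the built-in caveat: the signal is only meaningful relative to the other respondents, so a pool of unusually honest (or unusually cynical) applicants would shift everyone's deviance without changing the ranking.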
In hindsight, which is always 20/20, it seems incredibly obvious. I'd be surprised if professional personality tests and sociologists aren't using these types of questions. My google-fu shows no hits, but it's possible I'm just not using the term sociologists use for it. I was wondering if anyone had heard of this questioning method before, and if there's any good research data out there showing just how much you can infer from someone's deviance from the median response.
I'm not quite sure what the essence of your disagreement is, or what relation "the honest people are already being harmed" has to the argument I made.
I'm not sure what you think my disagreement should have focused on. The technique outlined in the article can be effective and is used in practice, and there is no point to be made that it is bad for the persons employing it; I cannot make an argument that would convince an antisocial person not to use this technique. However, this technique depletes the common good that is ordinary human communication; it is an example of the tragedy of the commons. As long as most of us refrain from using this sort of approach, it can work for the few who do not understand the common good or do not care. Hence the Dutch dike example. This method is a social wrong, not necessarily a personal wrong; it may work well in general circumstances. Furthermore it is, in a certain sense that is normally quite obvious, unfair. (edit: It is nonetheless widely used, of course. Even with all the social mechanisms in the human psyche, the human brain is a predictive system that will use any correlations it encounters, and people do often signal their pretend non-understanding. I suspect this silly game significantly discriminates against people with Asperger's-spectrum disorders.)
edit: For a more conspicuous example of how predictive methods are not in general socially acceptable, consider that if I am to train a predictor of criminality on profile data complete with a photograph, the skin albedo estimate from the photograph will be a significant part of the predictor, assuming that the data processed originates in North America. As a matter of fact, I have some experience with predictive software of the kind that processes interview answers. Let me assure you, my best guess from briefly reading your profile is that you are not in the category of people who benefit from this software that simply uses all the correlations it finds: pretty much all people who have non-standard interests and do not answer questions in the standard ways are penalized, and I do not think it would be easy to fake the answers beneficially without access to the model being used.
Thinking about this, I'm curious about the last part.
Naively, it seems to me that if I'm being evaluated by a system and I know that the system penalizes respondents who have non-standard interests and do not answer questions in the standard ways, but I don't have access to the model being used, then if I want to improve my score...