I was reading Yvain's Generalizing from One Example, which talks about the typical mind fallacy. Basically, it describes how humans assume that all other humans are like them. If a person doesn't cheat on tests, they are more likely to assume others won't cheat on tests either. If a person sees mental images, they'll be more likely to assume that everyone else sees mental images.
As I'm wont to do, I was thinking about how to make that theory pay rent. It occurred to me that this could definitely be exploitable. If the typical mind fallacy is correct, we should be able to run it in reverse: we can derive information about a person's proclivities from what they believe about other people.
E.g., most employers ask "have you ever stolen from a job before?" and have to deal with misreporting, because nobody in their right mind will say yes. But if the typical mind fallacy holds, employers could instead ask "what percentage of employees do you think have stolen from a job?" and infer that applicants who respond higher than average are correspondingly more likely to steal, and that applicants who respond lower than average are less likely to steal. It could cut through all sorts of social-desirability distortion. You couldn't get an exact likelihood, but it would give more useful information than a direct question would.
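A minimal sketch of the screening idea, with hypothetical applicant data (the names and numbers are made up, and the calibration from "deviation above the median" to actual theft risk is exactly the part that would need real research):

```python
import statistics

def rank_by_projection(estimates):
    """Rank respondents by how far their estimate of others' theft
    rate sits above the group median. Under the typical mind fallacy,
    a higher positive deviation is treated as a weak signal of higher
    personal risk.

    estimates: dict mapping respondent -> estimated % of employees
    who have stolen from a job.
    """
    median = statistics.median(estimates.values())
    # Positive deviation = believes theft is more common than peers do.
    deviations = {name: est - median for name, est in estimates.items()}
    return sorted(deviations.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical responses (percentages):
responses = {"A": 60, "B": 25, "C": 10, "D": 30}
ranking = rank_by_projection(responses)
# Applicant A, whose estimate is far above the median, ranks first.
```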
In hindsight, which is always 20/20, it seems incredibly obvious. I'd be surprised if professional personality tests and sociologists aren't already using these types of questions. My google-fu shows no hits, but it's possible I'm just not using the term sociologists use. I was wondering if anyone had heard of this questioning method before, and whether there's any good research data showing just how much you can infer from someone's deviation from the median response.
Great insight! Unsurprisingly, you're not the first. To my knowledge though, this method doesn't have a standard name and isn't prevalent. Predictions about others might give more information, but are still manipulable and hard to interpret when comparing respondents to each other. Did this person say lots of others cheat because they cheat or because they are bad with probabilities?
Alternatively, if a question has a single underlying answer, predictions about others' opinions can be used to filter out bias. This is the idea behind Prelec's Bayesian truth serum: each respondent gives both their own answer and a prediction of how others will answer, respondents maximize their expected payment from the system by being honest, and the group with the highest average scores tends to be correct.
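For concreteness, here is a sketch of the Bayesian truth serum scoring rule for multiple-choice questions: each respondent picks an option and predicts the fraction of the population endorsing each option, and their score rewards answers that turn out to be "surprisingly common" (endorsed more often than collectively predicted) plus a prediction-accuracy term. The function name and the epsilon clipping are my own choices, not Prelec's notation:

```python
import math

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    """Bayesian truth serum scores.

    answers:     list of chosen option indices, one per respondent.
    predictions: predictions[r][k] = respondent r's predicted fraction
                 of the population endorsing option k.

    score_r = log(xbar[a_r] / ybar[a_r])                  # information score
              + alpha * sum_k xbar[k] * log(predictions[r][k] / xbar[k])
    where xbar[k] is the empirical endorsement frequency and ybar[k]
    is the geometric mean of the predicted frequencies for option k.
    """
    n = len(answers)
    n_opts = len(predictions[0])
    # Empirical endorsement frequencies, clipped away from zero.
    xbar = [max(sum(1 for a in answers if a == k) / n, eps)
            for k in range(n_opts)]
    # Geometric mean of predictions for each option.
    ybar = [math.exp(sum(math.log(max(predictions[r][k], eps))
                         for r in range(n)) / n)
            for k in range(n_opts)]
    scores = []
    for r in range(n):
        info = math.log(xbar[answers[r]] / ybar[answers[r]])
        pred = sum(xbar[k] * math.log(max(predictions[r][k], eps) / xbar[k])
                   for k in range(n_opts))
        scores.append(info + alpha * pred)
    return scores
```

An answer endorsed more often than the group predicted earns a positive information score, which is how an honest minority can outscore a socially desirable majority even when no one knows the true answer.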
Or because they'd spent time around cheaters who talked about it?
I wonder what sort of answer a competent forensic accountant would give.