I was reading Yvain's Generalizing from One Example, which talks about the typical mind fallacy. Basically, it describes how humans assume that all other humans are like them. If a person doesn't cheat on tests, they are more likely to assume others won't cheat on tests either. If a person sees mental images, they'll be more likely to assume that everyone else sees mental images.
As I'm wont to do, I was thinking about how to make that theory pay rent. It occurred to me that this could definitely be exploitable: if the typical mind fallacy is real, the inference should run the other way too; we can derive information about a person's proclivities from what they believe about other people.
E.g., most employers ask "have you ever stolen from a job before?" and have to deal with misreporting, because nobody in their right mind will say yes. But suppose the typical mind fallacy is real. Employers could instead ask "what percentage of employees do you think have stolen from their job?" and infer that applicants who answer higher than average are correspondingly more likely to steal, while applicants who answer lower than average are less likely to. It could cut through all sorts of social-desirability distortion effects. You couldn't get an exact likelihood, but it would give more useful information than a direct question would.
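If you wanted to operationalize that, the crudest version is just ranking applicants by how far their prevalence estimate sits above the group median. Here is a minimal sketch of that scoring; the applicant names and numbers are made up purely for illustration:

```python
import statistics

def rank_by_projection(estimates):
    """Rank respondents by how far their estimate of the base rate
    ("what percentage of employees have stolen?") sits above the group
    median. Under the typical-mind assumption, a larger positive
    deviation is treated as weak evidence of the trait in the respondent."""
    median = statistics.median(estimates.values())
    return sorted(
        ((name, estimate - median) for name, estimate in estimates.items()),
        key=lambda item: item[1],
        reverse=True,
    )

# Hypothetical applicants and their answers (in percent), for illustration only.
print(rank_by_projection({"applicant_a": 10, "applicant_b": 60, "applicant_c": 25}))
# -> [('applicant_b', 35), ('applicant_c', 0), ('applicant_a', -15)]
```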
In hindsight, which is always 20/20, it seems incredibly obvious. I'd be surprised if professional personality tests and sociologists aren't already using these types of questions. My google-fu shows no hits, but it's possible I'm just not using the term sociologists use. I was wondering if anyone has heard of this questioning method before, and whether there's any good research data out there showing just how much you can infer from someone's deviation from the median response.
Prelec's formal results hold for large populations, but the mechanism held up well experimentally with 30-50 participants.
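For anyone who hasn't seen it, this is roughly how Prelec's scoring works, as I read the 2004 paper: each respondent submits an answer plus a prediction of how everyone else will answer, and gets an information score for giving a "surprisingly common" answer (more common than the group predicted) plus a prediction score that penalizes bad predictions. A sketch, with my own variable names rather than Prelec's notation:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    """Bayesian Truth Serum scores, following my reading of Prelec (2004).

    answers:     (n, m) one-hot matrix, one chosen option per respondent
    predictions: (n, m) matrix, each row a predicted distribution of answers
    Returns an (n,) array of scores; higher is better under the mechanism.
    """
    answers = np.asarray(answers, dtype=float)
    predictions = np.clip(np.asarray(predictions, dtype=float), eps, 1.0)

    freq = np.clip(answers.mean(axis=0), eps, 1.0)   # actual answer frequencies
    log_geo_pred = np.log(predictions).mean(axis=0)  # log of geometric-mean predictions

    # Information score: reward answers that turn out more common than predicted.
    info = answers @ (np.log(freq) - log_geo_pred)

    # Prediction score: a KL-style penalty for predictions far from the actual frequencies.
    pred = (freq * (np.log(predictions) - np.log(freq))).sum(axis=1)

    return info + alpha * pred
```

The log terms in the information score are also where the unbounded payments mentioned below come from.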
Witkowski and Parkes develop a truth serum for binary questions that works with as few as three participants. Their mechanism also avoids the potentially unbounded payments required by Prelec's BTS. Unfortunately, the WP truth serum seems very sensitive to the common prior assumption.
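For reference, my rough reconstruction from memory of the Witkowski-Parkes construction: each agent reports a binary answer plus a predicted frequency of "yes" answers, and is paid via the bounded quadratic scoring rule against a reference agent's answer, once with their own prediction and once with a peer's prediction "shadowed" toward their own answer. A sketch under that reading:

```python
def quadratic_score(y, outcome):
    """Binary quadratic (Brier-style) scoring rule; payments stay in [0, 2]."""
    return 2 * y - y * y if outcome == 1 else 1 - y * y

def rbts_score(answer_i, prediction_i, prediction_peer, answer_ref):
    """One agent's payment, as I reconstruct Witkowski & Parkes (2012):
    answer_i        -- agent i's binary answer (0 or 1)
    prediction_i    -- agent i's predicted frequency of "yes" answers, in (0, 1)
    prediction_peer -- prediction reported by a distinct peer agent j
    answer_ref      -- binary answer of a third, distinct reference agent k
    """
    delta = min(prediction_peer, 1 - prediction_peer)
    # "Shadow" the peer's prediction toward agent i's own answer.
    shadow = prediction_peer + delta if answer_i == 1 else prediction_peer - delta
    # Information score plus prediction score, both scored against the reference answer.
    return quadratic_score(shadow, answer_ref) + quadratic_score(prediction_i, answer_ref)
```

The bounded quadratic rule is where the bounded payments come from, and the need for a distinct peer and reference agent is where the three-participant minimum comes from.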
Wait, wait, let me understand this. It's the robust knowledge aggregation part that held up experimentally, not the truth serum part, right? In this experiment the participants had very few incentives to game the system, and they didn't even have a full understanding of the system's internals. In contrast, prediction markets are supposed to work even if everybody tries to game them constantly.