DanielVarga comments on Exploiting the Typical Mind Fallacy for more accurate questioning? - Less Wrong

Post author: Xachariah 17 July 2012 12:46AM 31 points

Comment author: gwern 17 July 2012 01:54:15AM 10 points

Actually, that's Yvain's post, not mine...

> As I'm wont to do, I was thinking about how to make that theory pay rent. It occurred to me that this could definitely be exploitable. If the typical mind fallacy is correct, we should be able to have it go the other way; we can derive information about a person's proclivities based on what they think about other people.

Yep! This is actually a standard method: ask people to estimate what other people do. A version of this is the 'Bayesian truth serum' trick.
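For concreteness, here's a minimal sketch of Prelec's BTS scoring in Python. Each respondent answers a multiple-choice question and also predicts the distribution of everyone's answers; the information score rewards answers that are "surprisingly common" (more common than collectively predicted), and the prediction score rewards accurate predictions. The function name, the eps clipping, and the toy survey at the bottom are my own additions, not part of the paper:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """Prelec's Bayesian Truth Serum scores.

    answers     -- length-n int array; answers[i] is respondent i's own
                   choice among K options (0..K-1)
    predictions -- (n, K) array; predictions[i, k] is respondent i's
                   estimate of the fraction of respondents choosing k
    alpha       -- weight on the prediction-score term
    """
    n, K = predictions.shape
    eps = 1e-9  # guard against log(0); not part of the original formulation

    # Empirical answer frequencies xbar_k
    xbar = np.bincount(answers, minlength=K) / n
    # Log of the geometric mean of predicted frequencies, log(ybar_k)
    log_ybar = np.log(np.clip(predictions, eps, 1.0)).mean(axis=0)

    # Information score: log(xbar_k / ybar_k) for each respondent's own
    # answer -- high when your answer is "surprisingly common"
    info = np.log(np.clip(xbar, eps, 1.0))[answers] - log_ybar[answers]

    # Prediction score: alpha * sum_k xbar_k * log(predictions_ik / xbar_k),
    # a log-scoring penalty for misestimating the actual frequencies
    pred = alpha * np.sum(xbar * (np.log(np.clip(predictions, eps, 1.0))
                                  - np.log(np.clip(xbar, eps, 1.0))), axis=1)
    return info + pred

# Toy example: 5 respondents, binary question
answers = np.array([0, 0, 1, 0, 1])
predictions = np.array([[0.7, 0.3],
                        [0.6, 0.4],
                        [0.4, 0.6],
                        [0.8, 0.2],
                        [0.5, 0.5]])
print(bts_scores(answers, predictions))
```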

Comment author: DanielVarga 17 July 2012 02:23:32PM 1 point

The 'truth serum' property of the method is only proven for infinite populations. Intuitively, it seems quite clear to me that for small populations the method can be gamed easily. Do you know of any results on how robust the method is at small population sizes when there is an incentive to mislead?

Comment author: badger 17 July 2012 02:55:51PM 2 points

Prelec's formal results hold for large populations, but the mechanism held up well experimentally with 30-50 participants.

Witkowski and Parkes develop a truth serum for binary questions that works with as few as 3 participants. Their mechanism also avoids the potentially unbounded payments required by Prelec's BTS. Unfortunately, the WP truth serum seems very sensitive to the common-prior assumption.
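For the curious, here is a rough sketch of the Witkowski-Parkes mechanism as I understand it; treat it as an approximation and consult their paper for the exact construction. Each agent gives a binary answer and a prediction of how often other agents answer 1; the mechanism nudges a peer's prediction toward your answer and scores both reports against a third agent's answer with a proper scoring rule. The function names and the cyclic peer assignment here are illustrative:

```python
def quadratic_score(p, omega):
    """Binary quadratic (Brier-style) scoring rule: reporting your true
    belief p maximizes your expected score."""
    return 2 * p - p**2 if omega == 1 else 1 - p**2

def rbts_scores(x, y):
    """Witkowski-Parkes robust truth serum for binary questions.

    x -- information reports, x[i] in {0, 1}
    y -- prediction reports, y[i] = agent i's belief that a randomly
         chosen other agent reports 1
    """
    n = len(x)
    assert n >= 3, "the mechanism needs at least 3 agents"
    scores = []
    for i in range(n):
        j = (i + 1) % n  # peer agent (cyclic assignment for illustration)
        k = (i + 2) % n  # reference agent
        delta = min(y[j], 1 - y[j])
        # "Shadow" posterior: the peer's prediction nudged toward i's answer
        y_shadow = y[j] + delta if x[i] == 1 else y[j] - delta
        # Information score plus prediction score, both evaluated against
        # the reference agent's report
        scores.append(quadratic_score(y_shadow, x[k]) +
                      quadratic_score(y[i], x[k]))
    return scores

# Toy example with 3 agents
print(rbts_scores([1, 1, 0], [0.8, 0.7, 0.4]))
```

Note that the payments here are bounded by construction, since the quadratic scoring rule only takes values in [0, 1].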

Comment author: DanielVarga 17 July 2012 06:13:57PM 0 points

> Prelec's formal results hold for large populations, but the mechanism held up well experimentally with 30-50 participants.

Wait, wait, let me make sure I understand. It's the robust knowledge-aggregation part that held up experimentally, not the truth-serum part, right? In that experiment the participants had little incentive to game the system, and they didn't even have a full understanding of its internals. In contrast, prediction markets are supposed to work even when everybody tries to game them constantly.

Comment author: badger 17 July 2012 06:46:05PM 1 point

Manipulability is addressed experimentally in a different working paper. The participants weren't told the internals and the manipulations were mostly hypothetical, but honesty was the highest-scoring strategy among those they considered.

In some sense, it's easy to manipulate BTS into giving a particular answer. The only problem is that you might end up owing the operator incredibly large sums of money. If payments to and from the mechanism aren't actually made, BTS is worthless once people try to game it. I should have a post up shortly about a better mechanism.
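To make the unbounded-payment point concrete (using the notation from the BTS sketch upthread; the numbers are made up for illustration): the log-based prediction score can go arbitrarily negative when you predict near-zero probability for an answer that turns out to be common.

```python
import numpy as np

# BTS's prediction score, alpha * sum_k xbar_k * log(y_k / xbar_k), is
# unbounded below: predicting near-zero probability for an answer that
# turns out common costs arbitrarily much.
xbar = np.array([0.5, 0.5])             # actual answer frequencies
for y1 in [0.1, 0.01, 1e-6, 1e-12]:
    y = np.array([1.0 - y1, y1])        # one respondent's prediction
    print(f"y1={y1:g}  prediction score = {np.sum(xbar * np.log(y / xbar)):.2f}")
```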

Comment author: gwern 17 July 2012 02:45:55PM 0 points

No. In one of the posts or papers, I recall seeing discussion that it can be deceived (and so you wouldn't necessarily want to explain the procedure), but that the obvious way of gaming it doesn't work.