badger comments on What are you working on? June 2012 - Less Wrong

2 Post author: David_Gerard 03 June 2012 11:02AM

Comment author: badger 03 June 2012 01:47:43PM 17 points

I'm doing mechanism design for eliciting information without money. Most people here are aware of scoring rules and prediction markets, which reward participants according to the accuracy of their predictions. Drazen Prelec's Bayesian truth serum (BTS) is an alternate mechanism that rewards predictions relative to the answers of others instead of the actual event. Since verification is done internally, the mechanism works for questions that would be difficult or impossible to evaluate on a prediction market, e.g. "Will super-human AI be built in the next 100 years?" or "Which of these ten novels was the most innovative and ground-breaking?".
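As a rough illustration of how BTS scoring works (this is a simplified sketch of Prelec's published scoring rule, not the poster's own mechanism; the Laplace smoothing to avoid log-of-zero is my addition), each respondent reports an answer plus a prediction of the population's answer distribution, and is scored on how surprisingly common their answer is plus how well their prediction matches the empirical frequencies:

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Sketch of Bayesian truth serum scores.

    answers: chosen option index per respondent.
    predictions: predictions[r][k] is respondent r's predicted
        population frequency of option k (assumed strictly positive).
    Returns information score + alpha * prediction score per respondent.
    """
    n = len(answers)
    m = len(predictions[0])
    # Empirical answer frequencies, Laplace-smoothed to keep logs finite.
    xbar = [(sum(1 for a in answers if a == k) + 1) / (n + m)
            for k in range(m)]
    # Log of the geometric mean of predicted frequencies per option.
    log_pbar = [sum(math.log(predictions[r][k]) for r in range(n)) / n
                for k in range(m)]
    scores = []
    for r in range(n):
        # Information score: answers more common than predicted score well.
        info = math.log(xbar[answers[r]]) - log_pbar[answers[r]]
        # Prediction score: negative KL divergence of the empirical
        # frequencies from respondent r's prediction (maximized at 0
        # when the prediction matches the empirical distribution).
        pred = sum(xbar[k] * math.log(predictions[r][k] / xbar[k])
                   for k in range(m))
        scores.append(info + alpha * pred)
    return scores
```

Note that no external ground truth appears anywhere in the score, which is the point: verification is internal to the reported answers and predictions.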

All three types of mechanisms assume the participants want to maximize their score from the mechanism. In many circumstances, though, people care much more about influencing the outcome of the mechanism than about their score or payment. Consider a committee making a high-stakes decision, like whether to fire an executive officer. Paying committee members based on their predictions would be gauche, and scores could be ignored if it meant getting a favored outcome, so BTS without money is easily manipulated. The usual fallback of majority vote is non-manipulable, but can fail to uncover the correct answer if participants are biased. BTS outputs the right answer with enough participants, even with bias. To ensure truth telling in Nash equilibrium, BTS does depend on participants having a common prior, although the mechanism operator doesn't have to know what it is.
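A toy example (my own illustration, not from the thread) of how majority vote can fail under a shared bias: suppose the true state is 1, each voter sees a weakly informative private signal, but everyone shares a prior leaning toward 0. Then every individual posterior stays below 1/2, so the majority unanimously picks the wrong answer, even though the pooled signal frequencies clearly reveal the truth:

```python
def posterior(prior, signal, acc):
    # P(state = 1 | signal) by Bayes' rule, with symmetric signal
    # accuracy `acc` (signal matches the true state with prob. acc).
    like1 = acc if signal == 1 else 1 - acc
    like0 = 1 - acc if signal == 1 else acc
    return prior * like1 / (prior * like1 + (1 - prior) * like0)

prior, acc = 0.3, 0.6          # shared bias toward state 0; weak signals
signals = [1] * 60 + [0] * 40  # typical signals for 100 voters if state = 1

# Each voter votes for the state their posterior favors.
votes = [1 if posterior(prior, s, acc) > 0.5 else 0 for s in signals]
majority = int(sum(votes) > len(votes) / 2)

# An aggregator looking at signal frequencies: expected share of "1"
# signals is acc in state 1 and 1 - acc in state 0.
freq = sum(signals) / len(signals)
inferred = 1 if abs(freq - acc) < abs(freq - (1 - acc)) else 0
```

Here `majority` comes out 0 while `inferred` comes out 1: the bias washes out of the aggregate signal frequencies but dominates every individual vote, which is the sense in which a frequency-based mechanism can beat majority rule.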

So far, I have mechanisms that encourage honesty without money, don't depend on a common prior or specific belief formation processes, and capture ~80% of the potential gains over majority vote in simulations. The operation of the mechanism is fairly straightforward, although why it works is another question. I'm still trying to grasp what makes one mechanism estimate the state better than another, what the optimal mechanism is, or whether an optimal mechanism even exists given my constraints.

My primary focus is writing this up. At some point, I want to deploy a web app for polls on LW. I suspect this would be trivial for someone with actual development experience. I'm open to collaboration on the econ/stats or development side, so PM me if interested.

Comment author: cousin_it 03 June 2012 06:06:33PM 6 points

When you have a draft, can you post it to the discussion section of LW? I am very interested in these things.

Comment author: badger 04 June 2012 12:54:54PM 0 points

Will do.

Comment author: wedrifid 04 June 2012 04:01:53AM -1 points

The usual fallback of majority vote is non-manipulable

Why? Isn't it just more expensive to manipulate? More people to bribe and all?

Comment author: badger 04 June 2012 12:54:07PM 0 points

Individual participants don't want to manipulate their own vote between two candidates, absent external incentives. "Incentive compatible" is more accurate than "non-manipulable". Votes are manipulable through sybil attacks -- voting many times under false identities.

I'm treating majority vote as the status quo to improve upon, with incentive compatibility as the basic standard for new mechanisms. Eliminating vulnerability to sybil attacks would also be great, but it isn't a high priority.