This is the bimonthly 'What are you working on?' thread. Previous threads are here. So here's the question:
What are you working on?
Here are some guidelines:
- Focus on projects that you have recently made progress on, not projects that you're thinking about doing but haven't started.
- Why this project and not others? Mention reasons why you're doing the project and/or why others should contribute to your project (if applicable).
- Talk about your goals for the project.
- Any kind of project is fair game: personal improvement, research project, art project, whatever.
- Link to your work if it's linkable.
I'm doing mechanism design for eliciting information without money. Most people here are aware of scoring rules and prediction markets, which reward participants according to the accuracy of their predictions. Drazen Prelec's Bayesian truth serum (BTS) is an alternate mechanism that rewards predictions relative to the answers of others instead of the actual event. Since verification is done internally, the mechanism works for questions that would be difficult or impossible to evaluate on a prediction market, e.g. "Will super-human AI be built in the next 100 years?" or "Which of these ten novels was the most innovative and ground-breaking?".
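For concreteness, here is a rough sketch of the BTS score from Prelec's paper (my own illustration, not reference code; the function name and the `eps` smoothing to avoid log of zero are mine). Each respondent endorses one of K answers and also predicts what fraction of the population will endorse each answer; the score is an information score (endorsing answers that are more common than collectively predicted) plus a prediction score (a penalty for mispredicting the empirical frequencies):

```python
import numpy as np

def bts_scores(votes, predictions, alpha=1.0, eps=1e-9):
    """Bayesian truth serum scores (sketch).

    votes: (n,) ints in [0, K), each respondent's endorsed answer.
    predictions: (n, K) rows of predicted answer frequencies (each sums to 1).
    alpha: weight on the prediction score.
    """
    votes = np.asarray(votes)
    predictions = np.clip(np.asarray(predictions, dtype=float), eps, None)
    n, k = predictions.shape
    # x_bar[k]: empirical fraction endorsing answer k (clipped to avoid log 0)
    x_bar = np.clip(np.bincount(votes, minlength=k) / n, eps, None)
    # y_bar[k]: geometric mean of everyone's predicted frequency for answer k
    log_y_bar = np.log(predictions).mean(axis=0)
    # Information score: log(x_bar / y_bar) for the answer each respondent chose
    info = np.log(x_bar[votes]) - log_y_bar[votes]
    # Prediction score: sum_k x_bar[k] * log(y_ik / x_bar[k])  (a KL-type penalty)
    pred = (x_bar * (np.log(predictions) - np.log(x_bar))).sum(axis=1)
    return info + alpha * pred
```

A sanity check: if every respondent predicts exactly the empirical endorsement frequencies, the geometric mean equals the empirical frequencies, so both score components vanish and everyone scores zero.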
All three types of mechanisms assume the participants want to maximize their score from the mechanism. In many circumstances, though, people care much more about influencing the outcome of the mechanism than about their score or payment. Consider a committee making a high-stakes decision, like whether to fire an executive officer. Paying committee members based on their predictions would be gauche, and scores would happily be sacrificed for a favored outcome, so without money BTS is easily manipulated. The usual fallback of majority vote is non-manipulable, but can fail to uncover the correct answer if participants are biased. BTS outputs the right answer with enough participants, even with bias. To ensure truth-telling in Nash equilibrium, BTS does depend on participants having a common prior, although the mechanism operator doesn't have to know what it is.
So far, I have mechanisms that encourage honesty without money, don't depend on a common prior or specific belief formation processes, and capture ~80% of the potential gains over majority vote in simulations. The operation of the mechanism is fairly straightforward, although why it works is another question. I'm still trying to grasp what makes one mechanism estimate the state better than another, what the optimal mechanism is, or whether an optimal mechanism even exists given my constraints.
My primary focus is writing this up. At some point, I want to deploy a web app for polls on LW. I suspect this would be trivial for someone with actual development experience. I'm open to collaboration on the econ/stats or development side, so PM me if interested.
Why? Isn't it just more expensive to manipulate? More people to bribe and all?