I'm a LW reader, two-time CFAR alumnus, and rationalist entrepreneur.
Today I want to talk about something insidious: marketing studies.
Until recently I considered studies of this nature merely unfortunate, funny even. However, my recent experiences have caused me to realize the situation is much more serious than this. Product studies are the public's most frequent interaction with science. By tolerating (or worse, expecting) shitty science in commerce, we are undermining the public's perception of science as a whole.
The good news is this appears fixable. I think we can change how startups perform their studies immediately, and use that success to progressively expand.
Product studies have three features that break the assumptions of traditional science: (1) few if any follow-up studies will be performed, (2) the scientists are in a position of moral hazard, and (3) the corporation seeking the study is in a position of moral hazard (for example, the filing cabinet bias becomes more of a "filing cabinet exploit" if you have low morals and the budget to perform 20 studies).
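To make the "filing cabinet exploit" concrete, here's a back-of-envelope sketch (assuming independent studies and the conventional 5% per-study false-positive rate, numbers chosen for illustration):

```python
# Probability of at least one spurious "significant" result when a sponsor
# runs n independent studies of a product with no real effect, then
# publishes only the winners. Assumes a per-study false-positive rate
# of alpha = 0.05.
alpha = 0.05
for n in (1, 5, 20):
    p_at_least_one = 1 - (1 - alpha) ** n
    print(f"{n:2d} studies -> {p_at_least_one:.0%} chance of a false positive")
# prints 5%, 23%, and 64% respectively
```

At 20 studies the sponsor is more likely than not to have a publishable positive result for a product that does nothing.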
I believe we can address points 1 and 2 directly, and overcome point 3 by appealing to greed.
Here's what I'm proposing: we create a webapp that acts as a high quality (though less flexible) alternative to a Contract Research Organization. Since it's a webapp, the cost of doing these less flexible studies will approach the cost of the raw product to be tested. For most web companies, that's $0.
If we spend the time to design the standard protocols well, it's quite plausible any studies done using this webapp will be in the top 1% in terms of scientific rigor.
With the cost low and the quality high, such a system might become the startup equivalent of [citation needed]. Once we have a significant number of startups using the system, and as we add support for more experiment types, we will hopefully attract progressively larger corporations.
Is anyone interested in helping? I will personally write the webapp and pay for the security audit if we can reach quorum on the initial protocols.
Companies who have expressed interest in using such a system if we build it:
- Beeminder
- HabitRPG
- MealSquares
- Complice (disclosure: the CEO, Malcolm, is a friend of mine)
- General Biotics (disclosure: the CEO, David, is me)
(I sent out my inquiries at 10pm yesterday, and every one of these companies got back to me by 3am. I don't believe "startups love this idea" is an overstatement.)
So the question is: how do we do this right?
Here are some initial features we should consider:
- Data will be collected by a webapp controlled by a trusted third party, and will only be editable by study participants.
- The results will be computed by software decided on before the data is collected.
- Studies will be published regardless of positive or negative results.
- Studies will have mandatory general-purpose safety questions. (web-only products likely exempt)
- Follow-up studies will be mandatory for continued use of results in advertisements.
- All software/contracts/questions used will be open sourced (MIT) and creative commons licensed (CC BY), allowing for easier cross-product comparisons.
- Any placebos used in the studies must be available for purchase as long as the results are used in advertising, allowing for trivial study replication.
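One cheap way to enforce "results computed by software decided on before the data is collected" is a hash commitment: the webapp publishes a cryptographic digest of the analysis script at study registration, and anyone can later verify that the script actually run matches it. A minimal sketch (the function names are mine, not an existing API):

```python
import hashlib

def commit_to_analysis(script_source: str) -> str:
    """Digest to publish before any data is collected."""
    return hashlib.sha256(script_source.encode("utf-8")).hexdigest()

def verify_analysis(script_source: str, published_digest: str) -> bool:
    """Anyone can check that the script used matches the pre-registered one."""
    return commit_to_analysis(script_source) == published_digest

# Hypothetical pre-registered analysis:
script = "mean_diff = treatment.mean() - control.mean()"
digest = commit_to_analysis(script)

assert verify_analysis(script, digest)
# Any post-hoc edit to the analysis breaks verification:
assert not verify_analysis(script + "  # tweaked after seeing data", digest)
```

Since the digest is published before data collection, switching analyses after seeing the results becomes publicly detectable.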
Significant contributors will receive:
- Co-authorship on the published paper for the protocol.
- (Through the paper) an Erdős number of 2.
- The satisfaction of knowing you personally helped restore science's good name (hopefully).
I'm hoping that if a system like this catches on, we can get an "effective startups" movement going :)
So how do we do this right?
I was thinking something like the karma score here. People could comment on the data and the math that leads to the conclusions, and debunk the ones that are misleading. One problem: if you allow endorsers, rather than just debunkers, you could end up in a situation where a sponsor pays people to publicly accept the conclusions. Here are my thoughts on how to avoid this.
First, we have to simplify the issue down to a binary question: does the data fairly support the conclusion that the sponsor claims? Then:

1. The sponsor offers $x for each of the first Y reviewers with a reputation score of at least Z. They have to pay regardless of what the reviewer's answer to the question is.
2. If the reviewers are unanimous, they all get small bumps to their reputation.
3. If they are not unanimous, they see each other's reviews (anonymously and non-publicly at this point) and can change their positions one time.
4. After that, those who are in the final majority and did not change their position get a bump up in reputation, but only based on the number of reviewers who switched to be in the final majority. (I.e., we reward reviewers who persuade others to change their position.)
5. The reviews are then opened to a broader number of people with positive reputations, who can simply vote yes or no, which again affects the reputations of the reviewers. Again, voting is private until complete; people who vote with the majority get small reputation bumps.
6. At the conclusion of the process, everyone's work is made public.
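The core update rule for one review round can be sketched as a toy model (names and bump sizes are my own assumptions, and tie-breaking is left unspecified):

```python
def update_reputations(initial_votes, final_votes, rep, small_bump=1):
    """One round of the proposed review scheme.

    initial_votes / final_votes: dicts mapping reviewer -> bool
    (does the data fairly support the sponsor's claimed conclusion?).
    rep: dict mapping reviewer -> current reputation, updated in place.
    """
    if len(set(initial_votes.values())) == 1:
        # Unanimous on the first pass: everyone gets a small bump.
        for reviewer in initial_votes:
            rep[reviewer] += small_bump
        return rep
    # Otherwise reviewers saw each other's arguments and could switch once.
    votes = list(final_votes.values())
    majority = max(set(votes), key=votes.count)  # ties unhandled in this sketch
    switched_to_majority = sum(
        1 for r in final_votes
        if final_votes[r] == majority and initial_votes[r] != majority
    )
    # Reward steadfast majority members in proportion to how many
    # reviewers were persuaded to switch to their side.
    for r in final_votes:
        if final_votes[r] == majority and initial_votes[r] == majority:
            rep[r] += switched_to_majority
    return rep
```

For example, if reviewers a and b initially vote yes, c votes no, and c then switches, a and b each gain 1 while c gains nothing, which is the "reward persuaders" property described above.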
I'm sure that there are people who have thought about reputation systems more than I have. But I have mostly seen reputation systems as a mechanism for creating a community where certain standards are upheld in the absence of monetary incentives. A reputation system that is robust against gaming seems difficult.
Max L.
I'm very glad I asked for more clarification. I'm going to call this system The Reviewer's Dilemma; it's a very interesting solution for allowing non-software analysis to occur in a trusted manner. I am somewhat worried about a laziness bias (it's much easier to agree than to disprove), but I imagine that if there is a similar bounty for overturning previous results, this might be handled.
I'll do a little customer development with some friends, but the possibility of reviewers being added as co-authors might also act as a nice incentive (both to reduce laziness, and as additional compensation).