I'm a LW reader, two-time CFAR alumnus, and rationalist entrepreneur.
Today I want to talk about something insidious: marketing studies.
Until recently I considered studies of this nature merely unfortunate, funny even. However, my recent experiences have caused me to realize the situation is much more serious than this. Product studies are the public's most frequent interaction with science. By tolerating (or worse, expecting) shitty science in commerce, we are undermining the public's perception of science as a whole.
The good news is this appears fixable. I think we can change how startups perform their studies immediately, and use that success to progressively expand.
Product studies have three features that break the assumptions of traditional science: (1) few if any follow up studies will be performed, (2) the scientists are in a position of moral hazard, and (3) the corporation seeking the study is in a position of moral hazard (for example, the filing cabinet bias becomes more of a "filing cabinet exploit" if you have low morals and the budget to perform 20 studies).
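To make the "filing cabinet exploit" concrete, here's a back-of-the-envelope calculation (assuming independent studies and the conventional p < 0.05 significance threshold):

```python
# Sketch of the "filing cabinet exploit": if a product truly does nothing,
# each honest study still has a 5% chance of a false positive at p < 0.05.
# Run enough studies, publish only the winner, and a positive result is
# nearly guaranteed.
alpha = 0.05      # false-positive rate of a single study
n_studies = 20    # studies a low-morals company can afford to run

p_at_least_one_positive = 1 - (1 - alpha) ** n_studies
print(f"P(at least one 'significant' result) = {p_at_least_one_positive:.2f}")
```

With a budget of 20 studies, the odds of getting at least one publishable "significant" result for a worthless product are roughly 64% -- which is why mandatory publication of all results (point 1 below in the feature list) matters so much.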
I believe we can address points 1 and 2 directly, and overcome point 3 by appealing to greed.
Here's what I'm proposing: we create a webapp that acts as a high quality (though less flexible) alternative to a Contract Research Organization. Since it's a webapp, the cost of doing these less flexible studies will approach the cost of the raw product to be tested. For most web companies, that's $0.
If we spend the time to design the standard protocols well, it's quite plausible any studies done using this webapp will be in the top 1% in terms of scientific rigor.
With the cost low, and the quality high, such a system might become the startup equivalent of "citation needed". Once we have a significant number of startups using the system, and as we add support for more experiment types, we will hopefully attract progressively larger corporations.
Is anyone interested in helping? I will personally write the webapp and pay for the security audit if we can reach quorum on the initial protocols.
Companies that have expressed interest in using such a system if we build it:
- Beeminder
- HabitRPG
- MealSquares
- Complice (disclosure: the CEO, Malcolm, is a friend of mine)
- General Biotics (disclosure: the CEO, David, is me)
(I sent out my inquiries at 10pm yesterday, and every one of these companies got back to me by 3am. I don't believe "startups love this idea" is an overstatement.)
So the question is: how do we do this right?
Here are some initial features we should consider:
- Data will be collected by a webapp controlled by a trusted third party, and will only be editable by study participants.
- The results will be computed by software decided on before the data is collected.
- Studies will be published regardless of positive or negative results.
- Studies will have mandatory general-purpose safety questions. (web-only products likely exempt)
- Follow-up studies will be mandatory for continued use of results in advertisements.
- All software/contracts/questions used will be open-sourced (MIT) and Creative Commons licensed (CC BY), allowing for easier cross-product comparisons.
- Any placebos used in the studies must be available for purchase as long as the results are used in advertising, allowing for trivial study replication.
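As a sketch of how the "software decided on before the data is collected" rule could be enforced: publish a cryptographic hash of the analysis code in the public study record at registration time, then verify the code against that hash before results are computed. (Python and SHA-256 are my assumptions here, not settled choices; the `analyze` function and sample numbers are purely illustrative.)

```python
import hashlib
import statistics

# Hypothetical pre-registration mechanism: the exact analysis code is fixed
# and its hash published when the study is registered, before any data
# exists. After data collection, anyone can verify that the code producing
# the results is byte-for-byte the pre-registered code.
ANALYSIS_SOURCE = """
def analyze(control, treatment):
    # Pre-registered analysis: difference in mean weight change (lbs).
    return statistics.mean(treatment) - statistics.mean(control)
"""

# Published in the public "initiated" study record.
preregistered_hash = hashlib.sha256(ANALYSIS_SOURCE.encode()).hexdigest()

# ...weeks later, after data collection, verify before running:
assert hashlib.sha256(ANALYSIS_SOURCE.encode()).hexdigest() == preregistered_hash

namespace = {"statistics": statistics}
exec(ANALYSIS_SOURCE, namespace)

# Illustrative made-up data: weight change in lbs per participant.
effect = namespace["analyze"](control=[-1.0, 0.5, -0.2],
                              treatment=[-3.1, -2.0, -2.6])
print(f"Treatment group lost {-effect:.1f} lbs more than control")
```

This removes one degree of freedom from both the scientists and the corporation: the analysis can't be quietly swapped for one that makes the numbers come out right.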
Significant contributors will receive:
- Co-authorship on the published paper for the protocol.
- (Through the paper) an Erdős number of 2.
- The satisfaction of knowing you personally helped restore science's good name (hopefully).
I'm hoping that if a system like this catches on, we can get an "effective startups" movement going :)
Thanks for pointing this out.
Let's use Beeminder as an example. When I emailed Daniel he said this: "we've talked with the CFAR founders in the past about setting up RCTs for measuring the effectiveness of beeminder itself and would love to have that see the light of day".
Which is a little open ended, so I'm going to arbitrarily decide that we'll study Beeminder for weight loss effectiveness.
The story goes as follows:
Daniel goes to (our thing).com and registers a new study. He agrees to the terms, and tells us that this is a study which can impact health -- meaning that mandatory safety questions will be required. Once the trial is registered it is viewable publicly as "initiated".
He then takes whatever steps we decide on to locate participants. Those participants are randomly assigned to two groups: (1) act normal, and (2) use Beeminder to track exercise and food intake. Every day the participants are sent a text message with a URL where they can log that day's data. They do so.
After two weeks, the study completes and both Daniel and the world are greeted with the results. Daniel can now update Beeminder.com to say that Beeminder users lost XY pounds more than the control group... and when a rationalist sees such claims they can actually believe them.
Even if the group assignments are random, the prior step of participant sampling could lead to distorted effects. For example, the participants could be just the friends of the person who created the study who are willing to shill for it.
The studies would be more robust if your organization took on the responsibility of sampling itself. There is a non-trivial scientific literature on the benefits and problems of using, for example, Mechanical Turk and Facebook ads for this kind of work. There is also an extra value-add for the user/client here: the participant sampling itself becomes a form of advertising.