I take it the claim is roughly that people who make an effort to tackle sexual violence don't actually believe that the percentage of women on campus who've experienced sexual violence is around 20%, but privately think it's lower (the 20% being merely a belief in belief).
Once I was at a lecture about violence against women, where the lecturer told us that 20% of women are victims of domestic violence. I asked her if she knew in which country, and in approximately which decade, this research was done; suggesting that the results for e.g. Sweden could be different from those for e.g. Afghanistan, and also that the results now could differ from those of half a century ago.
She said that the research was replicated many times, and that no matter which country, or which year -- or even which definition of domestic violence was used! -- the results are always 20%. Somewhat ironically, after hearing about so much successful replication, my faith in the research actually decreased.
Maybe "20%" is some psychological attractor toward which all values smaller than one half naturally converge.
I also read somewhere an explanation (which I haven't verified) that the number of rapes on campus was obtained by surveying first-year students, adding together the results for "rape" and "attempted rape", and then multiplying the result by five (for five years of study). If that's true, then even ignoring the "attempted" part, I think the linear approximation is wrong.
First, it ignores the possibility that some factors could make being raped more likely in some parts of the population (such as binge drinking, or choosing violent boyfriends, or maybe just being on a really horrible campus), so being raped in year X may overlap strongly with being raped in year Y, and R(X ∪ Y) < R(X) + R(Y). Second, this approach multiplies the Lizardman's Constant by five, which coincidentally already produces the result of 20%.
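To make the overlap point concrete, here is a minimal sketch with made-up numbers (none of them taken from any actual survey): even if the five years were fully independent, "multiply by five" overshoots the probability of at least one incident, and if risk is concentrated in a high-risk subpopulation the five-year prevalence is lower still.

```python
# Hypothetical per-year rate (illustrative, not from any real study).
p = 0.04

linear = 5 * p                # the "multiply by five" extrapolation
independent = 1 - (1 - p)**5  # P(at least once in 5 years) if years were independent

print(linear)       # 0.2
print(independent)  # ~0.185, already below the linear estimate

# Now suppose risk is correlated across years because it is concentrated:
# 20% of people face a 0.16 per-year rate, 80% face 0.01, so the overall
# per-year rate is still 0.2*0.16 + 0.8*0.01 = 0.04.
p_high, p_low = 0.16, 0.01
correlated = 0.2 * (1 - (1 - p_high)**5) + 0.8 * (1 - (1 - p_low)**5)
print(correlated)   # ~0.156, lower still than the independent case
```

The more the same people are victimized year after year, the further the true five-year prevalence falls below the naive five-fold extrapolation.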
I don't want to downplay a serious issue, but I really wish that people doing research would start taking methodology more seriously.
It seems to me that the woman with whom you were speaking likely wasn't well versed in the research. Why do you think your conversation with her tells you much about whether the people doing the research take methodology seriously?
I've started a podcast called Future Strategist, which will focus on decision making and futurism. I have created seven shows so far: interviews with computer scientist Roman Yampolskiy, LW contributor Gleb Tsipursky, and artist/free speech activist Rachel Haywire, and monologues on game theory and Greek mythology, the Prisoner's Dilemma, the sunk cost fallacy, and the Map and Territory.
If you enjoy the show and use iTunes, I would be grateful if you left a positive review there. I would also be grateful for any feedback you might have, including suggestions for future shows. I'm not used to interviewing people, and I know that I need to work on being more articulate in my interviews.