Pandemic Prediction Checklist: H5N1
Pandemic Prediction Checklist: Monkeypox
I have lost my trust in this community’s epistemic integrity, no longer see my values as being in accord with it, and don’t see hope for change. I am therefore taking an indefinite long-term hiatus from reading or posting here.
Correlation does imply some sort of causal link.
For guessing its direction, simple models help you think.
Controlled experiments, if they are well beyond the brink
Of .05 significance will make your unknowns shrink.
Replications prove there's something new under the sun.
Did one cause the other? Did the other cause the one?
Are they both controlled by something already begun?
Or was it their coincidence that caused it to be done?
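The last line's possibility, a shared driver "already begun", is easy to see in a quick simulation: a hidden variable Z feeding both X and Y produces a solid correlation even though neither causes the other. This is a minimal illustrative sketch; the variable names and noise levels are arbitrary assumptions, not anything from the verse.

```python
import random

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation, computed by hand to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Z causes both X and Y; there is no arrow between X and Y themselves.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

print(round(correlation(x, y), 2))  # roughly 0.5: correlated, yet neither causes the other
```

The theoretical value here is exactly 1/2 (covariance 1 over variance 2 on each side), so the sample estimate lands near 0.5.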
Thanks for the nice comment. I tried using it several times IIRC, but I don’t think it helped. It was written in reaction to some mounting frustrations with interactions I was having, and I ultimately mostly stopped participating on LW (though that was a combination of factors).
Great, that's clarifying. I will start with Tamiflu/Xofluza efficacy as it's important, and I think it will be most tractable via a straightforward lit review.
I've been researching this topic in my spare time and would be happy to help. Do you have time to clarify a few points? Here are some thoughts and questions that came up as I reviewed your post:
Finally, I’d be interested to hear which of these questions or areas you find most compelling. Are there other questions or directions you’d like to explore? This will help me prioritize my efforts.
I had to write several new Python versions of the code to explore the problem before it clicked for me.
I understand the proof, but the closest I can get to a true intuition that B is bigger is:
I think the main thing I want to remember is that "given X" or "conditional on X" means you use the unconditional probability distribution and throw out results not conforming to X, not that you substitute a different generating process that always produces events conforming to X.
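The classic two-children puzzle (my choice of example, not one from the thread) makes the distinction concrete: conditioning on "at least one boy" by filtering unconditional samples gives P(both boys) = 1/3, while wrongly swapping in a generator that always produces a boy gives 1/2.

```python
import random

random.seed(0)
N = 100_000

# Right: generate unconditionally, then throw out samples violating X.
both = kept = 0
for _ in range(N):
    kids = [random.choice("BG") for _ in range(2)]
    if "B" in kids:  # condition: at least one boy
        kept += 1
        both += kids == ["B", "B"]
p_filtered = both / kept
print(round(p_filtered, 2))  # close to 1/3

# Wrong: substitute a generator that always conforms to X.
both = 0
for _ in range(N):
    kids = ["B", random.choice("BG")]  # forces a boy into existence
    both += kids == ["B", "B"]
p_forced = both / N
print(round(p_forced, 2))  # close to 1/2
```

The forced generator answers a different question ("given the *first* child is a boy"), which is exactly why the two procedures disagree.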
Well, ideas from outside the lab, much less outside academia, are unlikely to be well suited to that lab’s specific research agenda. So even if an idea is suited in theory to some lab, the cost of matching it to that particular lab may make it not worthwhile.
There are a lot of cranks and they generate a lot of bad ideas. So a < 5% probability seems not unreasonable.
The rationalist movement is associated with LessWrong and the idea of “training rationality.” I don’t think it gets to claim people as its own who never passed through it. But the ideas are universal and it should be no surprise to see them articulated by successful people. That’s who rationalists borrowed them from in the first place.
This model also seems to rely on an assumption that there are more than two viable candidates, or that voters will refuse to vote at all rather than vote for a candidate who supports only half of their policy preferences.
If there were only two candidates and all voters chose whoever was closest to their policy preference, both would occupy the 20% block, since the extremes of the party would vote for them anyway.
But if there were three rigid categories and either three candidates, one per category, or voters refused to vote for a candidate not in their preferred category, then the model predicts more extreme candidates win.
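The two-candidate case above can be sketched in a few lines. This is a toy model under assumptions of my own (a one-dimensional policy axis, a 40/20/40 left/center/right electorate, nearest-candidate voting), meant only to show why an extreme candidate loses to a centrist, so both converge on the center block.

```python
# Hypothetical electorate on a left-right axis:
# 40 voters at -1, 20 at 0 (the "20% block"), 40 at +1.
voters = [-1.0] * 40 + [0.0] * 20 + [1.0] * 40

def votes_for_a(a, b, voters):
    """Votes for candidate at position a against one at b.
    Each voter picks the nearest candidate; exact ties split evenly."""
    share = 0.0
    for v in voters:
        da, db = abs(v - a), abs(v - b)
        if da < db:
            share += 1.0
        elif da == db:
            share += 0.5
    return share

# A candidate parked at the left extreme loses to a centrist, 40 to 60:
print(votes_for_a(-1.0, 0.0, voters))
# With both candidates at the center the race is tied, so neither
# gains by stepping back toward an extreme:
print(votes_for_a(0.0, 0.0, voters))
```

Under nearest-candidate voting the extreme wings have nowhere else to go, which is the intuition in the comment: both candidates end up occupying the center block.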
I'm torn between the two for American elections, because:
Yes, I agree it's worse. If ONLY a better understanding of statistics by PhD students and research faculty were at the root of our cultural confusion around science.
It’s not necessary for each person to personally identify the best minds on all topics and exclusively defer to them. It’s more a heuristic of deferring to the people those you trust most defer to on specific topics, and calibrating your confidence according to your own level of ability to parse who to trust and who not to.
But really these are two separate issues: how to exercise judgment in deciding who to trust, and the causes of research being “memetic.” I still say research is memetic not because mediocre researchers are blithely kicking around nonsense ideas that take on an exaggerated life of their own, but mainly because of the political and business ramifications of the research.
The idea that wine is good for you is memetic both because of its way of poking at “established wisdom” and because the alcohol industry sponsors research in that direction.
Similar for implicit bias tests, which are a whole little industry of their own.
Clinical trials represent decades of investment in a therapeutic strategy. Even if an informed person would be skeptical that current Alzheimer’s approaches are the way to go, businesses that have invested in it are best served by gambling on another try and hoping to turn a profit. So they’re incentivized to keep plugging the idea that their strategy really is striking at the root of the disease.
Acquired immune systems (antibodies, T cells) are restricted to jawed vertebrates.