DirectedEvolution

Pandemic Prediction Checklist: H5N1

Pandemic Prediction Checklist: Monkeypox

I have lost my trust in this community’s epistemic integrity, no longer see my values as being in accord with it, and don’t see hope for change. I am therefore taking an indefinite long-term hiatus from reading or posting here.
 

Correlation does imply some sort of causal link.

For guessing its direction, simple models help you think.

Controlled experiments, if they are well beyond the brink

Of .05 significance will make your unknowns shrink.

Replications prove there's something new under the sun.

Did one cause the other? Did the other cause the one?

Are they both controlled by something already begun?

Or was it their coincidence that caused it to be done?


I had to write several new Python versions of the code to explore the problem before it clicked for me.

I understand the proof, but the closest I can get to a true intuition that B is bigger is:

  • Imagine you just rolled your first 6, haven't rolled any odds yet, and then you roll a 2 or a 4.
  • In the consecutive-6 condition, it's quite unlikely you'll end up keeping this sequence, because you now still have to get two 6s before rolling any odds.
  • In the two-6 condition, you are much more likely to end up keeping this sequence, which is guaranteed to include at least one 2 or 4, and likely to include more than one before you roll that 6.

I think the main thing I want to remember is that "given X" or "conditional on X" means that you use the unconditional probability distribution and throw out results not conforming to X, not that you substitute a different generating function that always generates events conforming to X.
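A minimal sketch of that distinction, using the simpler single-6 version of the puzzle (roll until the first 6, condition on every roll being even). The function names and the choice of simulation are mine, not from the thread; rejection sampling gives the correct conditional answer, while swapping in a generator that only emits even faces gives a different, wrong one.

```python
import random

def rolls_until_six(rng):
    """Roll a fair die until the first 6; return the whole sequence."""
    seq = []
    while True:
        r = rng.randint(1, 6)
        seq.append(r)
        if r == 6:
            return seq

def conditional_mean_rejection(trials, rng):
    """Correct reading of 'given all rolls are even': sample from the
    unconditional process and throw out sequences containing an odd roll."""
    total = kept = 0
    while kept < trials:
        seq = rolls_until_six(rng)
        if all(r % 2 == 0 for r in seq):
            total += len(seq)
            kept += 1
    return total / kept

def modified_generator_mean(trials, rng):
    """Wrong reading: a substituted generator that can only ever produce
    conforming events (faces 2, 4, 6)."""
    total = 0
    for _ in range(trials):
        n = 0
        while True:
            n += 1
            if rng.choice([2, 4, 6]) == 6:
                break
        total += n
    return total / trials
```

Rejection sampling converges to an expected length of about 1.5, while the modified generator converges to 3, because conditioning shortens the surviving sequences (long runs are more likely to contain an odd roll and get thrown out).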

Well, ideas from outside the lab, much less outside academia, are unlikely to be well suited to that lab’s specific research agenda. So even if an idea is suited in theory to some lab, the cost of matching it to that lab may make it not worthwhile.

There are a lot of cranks and they generate a lot of bad ideas. So a < 5% probability seems not unreasonable.

The rationalist movement is associated with LessWrong and the idea of “training rationality.” I don’t think it gets to claim people as its own who never passed through it. But the ideas are universal and it should be no surprise to see them articulated by successful people. That’s who rationalists borrowed them from in the first place.

This model also seems to rely on an assumption that there are more than two viable candidates, or that voters will refuse to vote at all rather than vote for a candidate who supports only half of their policy preferences.

If there were only two candidates and all voters chose whoever was closest to their policy preference, both would occupy the 20% block, since the extremes of the party would vote for them anyway.

But if there were three rigid categories and either three candidates, one per category, or voters refused to vote for a candidate not in their preferred category, then the model predicts more extreme candidates win.
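The two-candidate prediction above can be sketched with a toy nearest-candidate model. The uniform-voter setup and the function name are my own simplification, not the post's exact model (which involves a specific 20% block):

```python
def vote_share(a, b, n=100001):
    """Share of voters (ideal points evenly spaced on [0, 1]) who are
    closer to candidate a than to candidate b; ties split evenly."""
    votes_a = 0.0
    for i in range(n):
        v = i / (n - 1)  # this voter's ideal policy position
        if abs(v - a) < abs(v - b):
            votes_a += 1
        elif abs(v - a) == abs(v - b):
            votes_a += 0.5
    return votes_a / n
```

Under full turnout and nearest-candidate voting, a candidate at the median position (0.5) beats any rival elsewhere, e.g. `vote_share(0.5, 0.8)` is about 0.65, since everyone to the left of the midpoint 0.65 prefers the median candidate. The extremes really do "vote for them anyway," which is why both candidates converge toward the center in this version of the model.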

I'm torn between the two for American elections, because:

  • The "correlated preferences" model here feels more true to life, psychologically.
  • Yet American politics goes from extremely disengaged primaries to a two-candidate FPTP general election, where the median voter theorem and the "correlated preferences" model seem to predict the same thing.
  • Voter turnout seems like a critically important part of democratic outcomes, and a model that only takes the order of policy preferences into account, rather than the intensity of those preferences, seems too limited.
  • Politicians often seem startlingly incompetent at inspiring the electorate, and it seems like we should think perhaps in "efficient market hypothesis" terms, where getting a political edge is extremely difficult because if anybody knew how to do it reliably, everybody would do it and the edge would disappear. In that sense, while both models can explain facets of candidate behavior and election outcomes, neither of them really offers a sufficiently detailed picture of elections to explain specific examples of election outcomes in a satisfying way. 

Yes, I agree it's worse. If ONLY a better understanding of statistics by PhD students and research faculty were at the root of our cultural confusion around science.

It’s not necessary for each person to personally identify the best minds on all topics and exclusively defer to them. It’s more a heuristic of deferring to the people those you trust most defer to on specific topics, and calibrating your confidence according to your own level of ability to parse who to trust and who not to.

But really these are two separate issues: how to exercise judgment in deciding whom to trust, and the causes of research being “memetic.” I still say research is memetic not because mediocre researchers are blithely kicking around nonsense ideas that take on an exaggerated life of their own, but mainly because of the political and business ramifications of the research.

The idea that wine is good for you is memetic both because of its way of poking at “established wisdom” and because the alcohol industry sponsors research in that direction.

Similar for implicit bias tests, which are a whole little industry of their own.

Clinical trials represent decades of investment in a therapeutic strategy. Even if an informed person would be skeptical that current Alzheimer’s approaches are the way to go, businesses that have invested in it are best served by gambling on another try and hoping to turn a profit. So they’re incentivized to keep plugging the idea that their strategy really is striking at the root of the disease.

It's not evidence, it's just an opinion!

But I don't agree with your presumption. Let me put it another way. Science matters most when it delivers information that is accurate and precise enough to be decision-relevant. Typically, we're in one of a few states:

  • The technology is so early that no level of statistical sophistication will yield decision-relevant results. Example: most single-cell omics in 2024 that I'm aware of, with respect to devising new biomedical treatments (this is my field).
  • The technology is so mature that any statistics required to parse it are baked into the analysis software, so that they get used by default by researchers of any level of proficiency. Example: Short read sequencing, where the extremely complex analysis that goes into obtaining and aligning reads has been so thoroughly established that undergraduates can use it mindlessly.
  • The technology's in a sweet spot where a custom statistical analysis needs to be developed, but it's also so important that the best minds will do that analysis and a community norm exists that we defer to them. Example: clinical trial results.

I think what John calls "memetic" research is just areas where the topics or themes are so relevant to social life that people reach for early findings in immature research fields to justify their positions and win arguments. Or where a big part of the money in the field comes from corporate consulting gigs, where the story you tell determines the paycheck you get. But that's not the fault of the "median researcher," it's a mixture of conflicts of interest and the influence of politics on scientific research communication. 

In academic biomedicine, at least, which is where I work, it’s all about tech dev. Most of the development is based on obvious signals and conceptual clarity. Yes, we do study biological systems, but that comes after years, even decades, of building the right tools to get a crushingly obvious signal out of the system of interest. Until that point all the data is kind of a hint of what we will one day have clarity on rather than a truly useful stepping stone towards it. Have as much statistical rigor as you like, but if your methods aren’t good enough to deliver the data you need, it just doesn’t matter. Which is why people read titles, not figure footnotes: it’s the big ideas that really matter, and the labor going on in the labs themselves. Papers are in a way just evidence of work being done.

That’s why I sometimes worry about LessWrong. Participants who aren’t professionally doing research and who spend a lot of time critiquing papers over niche methodological issues may be misallocating their attention, or searching under the spotlight. The interesting thing is growth in our ability to measure and manipulate phenomena, not the exact analysis method in one paper or another. What’s true will eventually become crushingly obvious and you won’t need fancy statistics at that point, and before then the data will be crap so the fancy statistics won’t be much use. Obviously there’s a middle ground, but I think the vast majority of time is spent in the “too early to tell” or “everybody knows that” phase. If you can’t participate in that technology development in some way, I am not sure it’s right to say you are “outperforming” anything.

Sunglasses aren’t cool. They just tint the allure the wearer already has.

I doubt it’s regulation driving restaurant costs. Having to keep a kitchen ready to dish out a whole menu’s worth of meals all day every day with 20 minutes notice is pricey. Think what you’d have to keep in your kitchen to do that. It’s a different product from a home cooked meal.
