Correlation!=causation: returning to my old theme (latest example: is exercise/mortality entirely confounded by genetics?), what is the right way to model various comparisons?
By which I mean, consider a paper like "Evaluating non-randomised intervention studies", Deeks et al 2003 which does this:
...In the systematic reviews, 8 studies compared results of randomised and non-randomised studies across multiple interventions using metaepidemiological techniques. A total of 194 tools were identified that could be or had been used to assess non-randomised studies. 60 tools covered at least 5 of 6 pre-specified internal validity domains. 14 tools covered 3 of 4 core items of particular importance for non-randomised studies. 6 tools were thought suitable for use in systematic reviews. Of 511 systematic reviews that included non-randomised studies, only 169 (33%) assessed study quality. 69 reviews investigated the impact of quality on study results in a quantitative manner. The new empirical studies estimated the bias associated with non-random allocation and found that the bias could lead to consistent over- or underestimations of treatment effects, and the bias increased variation.
I just published an article in the conservative FrontPageMag on college safe spaces. It uses a bit of LW like reasoning.
Last week was a gathering of physicists in Oxford to discuss string theory and the philosophy of science.
From the article:
Nowadays, as several philosophers at the workshop said, Popperian falsificationism has been supplanted by Bayesian confirmation theory, or Bayesianism...
Gross concurred, saying that, upon learning about Bayesian confirmation theory from Dawid’s book, he felt “somewhat like the Molière character who said, ‘Oh my God, I’ve been talking prose all my life!’”
That the Bayesian view is news to so many physicists is itself news to me, and i...
The character from Molière learns a fancy name ("speaking in prose") for the way he already communicates. David Gross isn't saying that he is unfamiliar with the Bayesian view, he's saying that "Bayesian confirmation theory" is a fancy name for his existing epistemic practice.
The gap between the average Nobel laureate (in physics, say) and the average LWer is enormous. If your measure says it isn't, it's a crappy measure.
A major weakness
Where did you get this from? Maintaining beliefs over an entire space of possible solutions is a strength of the Bayesian approach. Please don't talk about Bayesian inference after reading a single thing about updating beliefs on whether a coin is fair or not. That's just a simple tutorial example.
How much do you trust economic data released by the Chinese government? I had assumed that economic indicators were manipulated, but recent discussion suggests the data is just entirely fabricated, at least as bad as anything the Soviet Union reported. For example, China has reported a ~4.1% unemployment rate for over a decade. Massive global recession? 4.1% unemployment. Huge economic boom? 4.1% unemployment.
One of the largest, most important economies in the world, and I don't know that we can reliably say much about it at all.
One interesting point, not expanded upon, is this:
One writer chalks this concern up to a bunch of “conspiracy theor(ies)”.
Balding dismisses this by citing Premier Li Keqiang, but I think this objection illustrates a deeper problem with the way the phrase "conspiracy theory" is used. It's frequently used to dismiss any suggestion that someone in authority is behaving badly regardless of whether an actual conspiracy would be required.
Let's look at what it would take for Chinese economic data to be bad. The central government gathers the data by delegating collection to the appropriate individual branches, by province, industry, etc. So what happens if someone at that level decides to fudge the data for whatever reason (say, to make his province and/or industry look better)? The aggregate data will be wrong. And that's just one person at one level. In reality, of course, there are many levels in the hierarchy and many corrupt people at all of them.
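As a toy illustration of the point, here is a minimal simulation of how even a small fraction of corrupt reporters biases the aggregate. All numbers here are made up for illustration (30 provinces, ~20% corrupt, ~10% inflation), not actual Chinese statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 30 provinces each have true output ~ N(100, 10),
# but a fraction of reporters inflate their figure by ~10%.
true_output = rng.normal(100, 10, size=30)
corrupt = rng.random(30) < 0.2                     # ~20% of reporters fudge
reported = np.where(corrupt, true_output * 1.10, true_output)

# The central aggregate inherits the bias even though most reports are honest.
bias = reported.mean() - true_output.mean()
print(f"aggregate bias from a few corrupt reporters: {bias:+.1f}")
```

And this is a single level; with several layers of aggregation, each adding its own fudge, the distortions compound.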
That was a bit... strange.
Huw Price, a professional philosopher who happens to be one of the founders and the Academic Director of the Centre for the Study of Existential Risk (the one in Cambridge, UK), wrote a piece which is quite optimistic about cold fusion in general and Andrea Rossi in particular.
I am confused about free will. I tried to read about it (notably in the Sequences) but am still not convinced.
I make choices, all the time, sure, but why do I choose one option in particular?
My answer would be: the sum of my knowledge and past experiences (nurture) and my genome (nature), with quantum randomness playing a role as well. But I can't see where free will intervenes.
It feels like there is something basic I don't understand, but I can't grasp it.
Thoughts this week:
Career strategy
Thiel isn't decisive on the topic. Is the definite-optimist view the dominant approach to candidacy in the grand marketplace of talent today?
Kumon
Kumon franchises are cheap. The branding and reputation are good. Tutoring is a very attractive market in general, and Kumon makes things easier for the teachers. But is it ethical, I wonder? To me it's ethical if it delivers value to the students. One caveat: as a kid, the mind-numbing maths drills my classmates who attended Kumon did seemed cruel to me.
Could somebody who has the English translation of The Spanish Ballad by Feuchtwanger post that piece about Lancelot being in disgrace over his hesitation to sit in the cart in the rationality quotes thread? Thank you.
The Fed recently announced a small interest rate hike, but rates remain astonishingly low in the US and in most other countries. In several countries the interest rate is negative - you have to pay the bank to hold your money - a bizarre situation which many economists previously dismissed as a theoretical impossibility.
How should individuals respond to this weird macroeconomic situation? My naive analysis is that demand for investment opportunities far outstrips supply, so we should be trying to find new ways to invest money. Perhaps we should all be doing part-time real estate investing? Are there other simple investment strategies that individuals are in a better position to pursue than big investment firms?
If reports are correct, this is sort of an example of a transplant version of the Trolley problem in the wild: http://timesofindia.indiatimes.com/world/middle-east/Islamic-State-sanctioned-organ-harvesting-in-document-taken-in-US-raid/articleshow/50326036.cms
Where can I find The Browser's Golden giraffes competition nominees? They have deleted the list and I don't have an offline copy.
Thoughts this week, part 2
Sweat equity marketplaces
Anyone know why online sweat equity marketplaces never took off? Their websites are basically non-functional. I can see the potential for a sweat-equity marketplace in a surprising number of fields - say, cash-strapped writers looking for an editor, for instance.
Nuremberg principles
"I was just following norms"
- Normies at the Normenberg trials for norm crimes
Love and subjective well-being
Love has too complex a relationship with happiness for me to want to try to make rational decisions in relation to (...
For every data point you know whether it comes from the RCT or the observational study. You don't need uncertainty about treatment assignment.
No, the uncertainty here isn't about which of the two studies a datapoint came from, but about whether (for a specific treatment/intervention) the correlational study datapoint was drawn from the same distribution as the randomized study datapoint or a different one, and (over all treatments/interventions) what the probability of being drawn from the same distribution is. Maybe it'll be a little clearer if I narrate how the model might go.
So say you start off with a prior probability of 50-50 about which group a result is drawn from, a switching probability that will be tweaked as you look at data. (If you are studying turtles which could be from a large or a small species, then if you find 2 larger turtles and 8 smaller, you're probably going to update from P=0.5 to a mixture probability more like P>0.20, since it's most likely - but not certain - that 1 or 2 of the larger turtles came from the large species and the 8 smaller ones came from the small species.)
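The turtle example can be run numerically. This is a minimal grid-posterior sketch, with made-up sizes and assumed (known) species size distributions; the point is just that 2 clearly-large turtles out of 10 pull the mixture weight from the 0.5 prior toward ~0.2:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical shell sizes (cm): 8 small-ish, 2 large-ish turtles
sizes = np.array([10, 11, 9, 10, 12, 11, 10, 9, 25, 27])
mu_small, mu_large, sd = 10.0, 25.0, 3.0   # assumed species distributions

# Grid posterior over the mixture weight p (uniform prior, centered at 0.5)
p_grid = np.linspace(0.001, 0.999, 999)
# Likelihood of each turtle under a mixture with weight p on the large species
lik = np.prod(
    p_grid[:, None] * norm.pdf(sizes, mu_large, sd)
    + (1 - p_grid[:, None]) * norm.pdf(sizes, mu_small, sd),
    axis=1,
)
posterior = lik / lik.sum()
p_hat = (p_grid * posterior).sum()
print(f"posterior mean mixture weight: {p_hat:.2f}")   # ~0.25 here
```

Because the two species are well-separated in size, the classification of each turtle is essentially certain and this reduces to a beta-binomial update on 2-of-10, posterior mean (2+1)/(10+2) = 0.25; with overlapping sizes, the posterior would spread out rather than collapse onto a count.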
For your first datapoint, you have a pair of results: xyzcillin reduces all-cause mortality to RR=0.5 from a correlational study (cohort, cross-sectional, case-control, whatever), and the randomized study of xyzcillin has RR=1.1. What does this mean? Now, of course you know that 0.5 is the correlational result and 1.1 is the randomized result, but we can imagine two relatively distinct scenarios here: 'xyzcillin actually works but the causal effect is really more like RR=0.7 and the randomized trial was underpowered', or, 'xyzcillin has no causal effect whatsoever on mortality and it's just a bunch of powerful confounds producing results like RR=0.6-0.8'. We observe that 1.1 supports the latter more, and we update towards 'xyzcillin has 0 effect' and now give 'non-causal scenarios are 55% likely', but not too much because the xyzcillin studies were small and underpowered and so they don't support the latter scenario that much.
Then for the next datapoint, 'abcmycin reduces lung cancer', we get a pair looking like 0.9 and 0.7, and we observe these large trials are very consistent with each other and so they highly support the former theory instead and we update towards 'abcmycin causally reduces lung cancer' and 'noncausal scenarios are 39% likely'.
Then for the third datapoint about defracic surgery for back pain, we again get consistency like d=0.7 and d=0.5 and we update the probability that 'defracic surgery reduces back pain' and also push even further 'noncausal scenarios are 36% likely', because their sample sizes were decent.
And so we update on each pair in turn, and after bouncing back and forth through all the pairs, we wind up with an estimate that Nature draws from the non-causal scenario 37% of the time (i.e. the switching probability of the mixture is p=0.37). And now we can use that as a prior in evaluating any new medicine or surgery.
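The sequential story above can be sketched as a toy model. Everything here is hypothetical: the effect sizes, standard errors, the Gaussian effect prior, and the zero-mean confound in the 'non-causal' scenario are assumptions for illustration, not anyone's published method. Each pair's evidence is a likelihood under 'shared causal effect' vs 'no effect plus confounding', and the grid posterior over the switching probability is updated across pairs:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical (log-RR, standard-error) pairs: (observational, randomized).
# Values loosely echo the narration: a small discordant pair, then two
# larger concordant ones.
pairs = [((np.log(0.5), 0.40), (np.log(1.1), 0.40)),   # xyzcillin: discordant
         ((np.log(0.9), 0.10), (np.log(0.7), 0.10)),   # abcmycin: concordant
         ((np.log(0.7), 0.15), (np.log(0.5), 0.15))]   # surgery:  concordant

def lik_causal(obs, rct, prior_sd=0.5):
    """Both studies estimate one shared true effect beta ~ N(0, prior_sd)."""
    (y1, s1), (y2, s2) = obs, rct
    cov = prior_sd ** 2
    # Marginalize beta out analytically: (y1, y2) is bivariate normal.
    m = np.array([[s1**2 + cov, cov], [cov, s2**2 + cov]])
    y = np.array([y1, y2])
    return np.exp(-0.5 * y @ np.linalg.solve(m, y)) / (
        2 * np.pi * np.sqrt(np.linalg.det(m)))

def lik_noncausal(obs, rct, confound_sd=0.5):
    """True effect is 0; the observational estimate picks up a confound."""
    (y1, s1), (y2, s2) = obs, rct
    return norm.pdf(y1, 0, np.hypot(s1, confound_sd)) * norm.pdf(y2, 0, s2)

theta = np.linspace(0.001, 0.999, 999)   # switching probability, uniform prior
post = np.ones_like(theta)
for obs, rct in pairs:
    post *= theta * lik_noncausal(obs, rct) + (1 - theta) * lik_causal(obs, rct)
post /= post.sum()
theta_mean = (theta * post).sum()
print(f"posterior mean switching probability: {theta_mean:.2f}")
```

The discordant pair pulls the switching probability up and the concordant pairs pull it down, with small noisy studies moving it less than large precise ones, which is exactly the 50% → 55% → 39% → 36% dance in the narration.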
If you have specific observational data you want to look at, email me if you want to chat more.
If you want to look at specific study-pairs, they're all listed & properly cited in the papers I've collated & provided fulltext links for. I suspect that the more advanced methods will require individual-level patient data, which sadly only a very few studies will release, but perhaps you can still find enough of those to make it worth your while to analyze, if Robins et al can get a publishable paper out of just 1 RCT.
If I understood you correctly, there are two separate issues here.
The first is what people call "transportability" (how to sensibly combine results of multiple studies if units in those studies aren't the same). People try all sorts of things (Gelman does random effects models I think?) Pearl's student Elias Barenboim (now at Purdue) thinks about that stuff using graphs.
I wish I could help, but I don't know as much about this subject as I want. Maybe I should think about it more.
The second issue is that, in addition to units in the two studies...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.