I feel like this creates more misconceptions than it clears up. It's very dismissive of something that is really in the early phases of being studied.
The primary effect that reading this had on me was the change in state from [owning a cloak hadn't occurred to me] to [owning a cloak sounds awesome; I am unhappy that I hadn't thought of it on my own].
The definition of 'heritable' being underspecified (since you have to specify what population of environments you're considering) is not the same as being incoherent.
I agree. Good point.
So most of what you have written makes sense, but there are some major issues with some parts.
heritability (a term that has an incoherent definition)
Can you expand on why you think the definition is incoherent? This is a pretty standard term.
the whole gene vs. environment discussion is obsolete, in light of the findings of the past decade. Everything is gene-environment interaction.
The fact that many genes interact in complicated ways with the environment is not newly discovered. It doesn't change the fact that in some contexts genes or environment can matter more or less. For example, if one has a gene that codes for some form of mental retardation, in most cases environment can't change that. (I say in most cases because there are a few exceptions, especially those involving trace nutrients or bad reactions to specific compounds.) Similarly, if someone has severe lead poisoning, they are going to have pretty bad problems regardless of what genes they have.
The first two points you made, while roughly valid, connect to a more general issue: yes, these studies have flaws, but just because a technique has flaws doesn't mean we can't use it to learn (especially when, in this context, the issues you bring up are well known to the researchers).
The answer to the question "what proportion of phenotypic variability is due to genetic variability?" always has the same answer: "it depends!" What population of environments are you doing this calculation over? A trait can go from close to 0% heritable to close to 100% heritable, depending on the range of environments in the sample. That's a definition problem. Further, what should we count as 'genetic'? Gene expression can depend on the environment of the parents, for example (DNA methylation, etc). That's an environmental inheritance. I just think there is an old way of talking about these things that needs to go away in light of current knowledge.
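The dependence on the range of environments can be made concrete with a toy simulation. This is only a sketch with made-up numbers, assuming the simplest additive model P = G + E with independent normal components:

```python
import random
from statistics import variance

random.seed(0)

def heritability(env_sd, n=100_000):
    """Var(G) / Var(P) in a toy additive model P = G + E,
    with G and E independent and normally distributed."""
    g = [random.gauss(0, 1.0) for _ in range(n)]     # genetic values, SD fixed at 1
    e = [random.gauss(0, env_sd) for _ in range(n)]  # environmental values
    p = [gi + ei for gi, ei in zip(g, e)]
    return variance(g) / variance(p)

# Same trait, same genetic variability -- only the spread of environments changes:
print(heritability(env_sd=0.1))  # near-uniform environments -> roughly 0.99
print(heritability(env_sd=3.0))  # highly variable environments -> roughly 0.10
```

Nothing about the trait or the genes changes between the two calls; the "heritability" swings from near 1 to near 0 purely because the population of environments in the sample changed.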
I agree with you that we can still learn a lot from these studies.
Adoption studies are biased toward the null of no parenting effect, because adoptive parents aren't randomly selected from the population of potential parents (they often are screened to be similar to biological parents).
Twin studies I think are particularly flawed when it comes to estimating heritability (a term that has an incoherent definition). Twins have a shared pre-natal environment. In some cases, they even share a placenta.
Plus, the whole gene vs. environment discussion is obsolete, in light of the findings of the past decade. Everything is gene-environment interaction.
wait, this isn't well done satire?
Informed consent bias in RCTs?
The problem of published research findings not being reliable has been discussed here before.
One problem with RCTs that has received little attention is that, due to informed consent laws and ethical considerations, subjects are aware that they might be receiving sham therapy. This differs from the environment outside of the research setting, where people are confident that whatever their doctor prescribes is what they will get from their pharmacist. I can imagine many ways in which subjects' uncertainty about treatment assignment could affect outcomes (adherence is one possible mechanism). I wrote a short paper about this, focusing on what we would ideally estimate if we could lie to subjects, versus what we actually can estimate in RCTs (link). Here is the abstract:
It is widely recognized that traditional randomized controlled trials (RCTs) have limited generalizability due to the numerous ways in which conditions of RCTs differ from those experienced each day by patients and physicians. As a result, there has been a recent push towards pragmatic trials that better mimic real-world conditions. One way in which RCTs differ from normal everyday experience is that all patients in the trial have uncertainty about what treatment they were assigned. Outside of the RCT setting, if a patient is prescribed a drug then there is no reason for them to wonder if it is a placebo. Uncertainty about treatment assignment could affect both treatment and placebo response. We use a potential outcomes approach to define relevant causal effects based on combinations of treatment assignment and belief about treatment assignment. We show that traditional RCTs are designed to estimate a quantity that is typically not of primary interest. We propose a new study design that has the potential to provide information about a wider range of interesting causal effects.
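The contrast in the abstract can be sketched in potential-outcomes notation. The symbols below are my own shorthand, not necessarily the paper's:

```latex
% Let Y(a, b) be the potential outcome under treatment assignment
% a \in \{0, 1\} and belief state b \in \{\text{certain}, \text{uncertain}\}.
% A blinded RCT estimates the effect among subjects who are uncertain:
\tau_{\mathrm{RCT}} = E[Y(1,\ \text{uncertain})] - E[Y(0,\ \text{uncertain})]
% whereas in everyday practice patients believe they received what was
% prescribed, so the quantity of clinical interest is closer to:
\tau_{\mathrm{practice}} = E[Y(1,\ \text{certain})] - E[Y(0,\ \text{certain})]
```

If these two contrasts differ (for instance, because certainty improves adherence), then the blinded-trial estimand is not the effect patients actually experience in practice.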
Any thoughts on this? Is this a trivial technical issue or something worth addressing?
The selfish gene theory was a good one, but wrong (see epigenetics).
I understand 'the selfish gene theory' to be the idea that we should expect to see genes whose 'effects' are such as to cause their own replication to be maximized, as opposed to promoting the survival/reproduction of the individual, group or species, whenever these goals differ.
This is almost a tautology, modulo the tricky business of defining the 'effects' of a particular gene.
I don't see how the existence of epigenetic inheritance has anything to do with it, especially as the selfish gene theory doesn't depend on genes being made of DNA, only that whatever they are, genes can preserve information indefinitely.
Genes just aren't as much of the story as we thought they were. Whether or not a gene increases fitness might depend on whether it is methylated, for example. Until recently, we didn't realize that there could be transgenerational transmission of DNA methylation patterns due to environmental factors.
Error detection bias in research
I have had the following situation happen several times during my research career: I write code to analyze data; there is some expectation about what the results will be; after running the program, the results are not what was expected; I go back and carefully check the code to make sure there are no errors; sometimes I find an error.
No matter how careful you are when it comes to writing computer code, I think you are more likely to find a mistake if you think there is one. Unexpected results lead one to suspect a coding error more than expected results do.
In general, researchers usually do have general expectations about what they will find (e.g., the drug will not increase risk of the disease; the toxin will not decrease risk of cancer).
Consider the following graphic:
Here, the green region is consistent with what our expectations are. For example, if we expect a relative risk (RR) of about 1.5, we might not be too surprised if the estimated RR is between (e.g.) 0.9 and 2.0. Anything above 2.0 or below 0.9 might make us highly suspicious of an error -- that's the red region. Estimates in the red region are likely to trigger serious coding error investigation. Obviously, if there is no coding error then the paper will get submitted with the surprising results.
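The asymmetry can be simulated directly. In this sketch (every parameter is made up), a coding bug is caught, and therefore fixed, only when it pushes the estimate into the red region; bugs that happen to produce expected-looking results are never checked and survive into the published record:

```python
import random

random.seed(1)

TRUE_RR   = 1.5          # true relative risk
GREEN     = (0.9, 2.0)   # "expected" region; results here are not double-checked
P_BUG     = 0.2          # chance a coding error slips into the analysis
BUG_SHIFT = 1.0          # a bug shifts the estimate by +/- this much

def in_green(x):
    return GREEN[0] <= x <= GREEN[1]

def one_study():
    """Return (reported_estimate, still_bugged) for a single analysis."""
    clean = random.gauss(TRUE_RR, 0.5)           # what correct code would report
    if random.random() >= P_BUG:
        return clean, False                      # no bug in the first place
    bugged = clean + random.choice([-BUG_SHIFT, BUG_SHIFT])
    if in_green(bugged):
        return bugged, True                      # unsurprising -> never checked
    return clean, False                          # surprising -> code review finds the bug

results = [one_study() for _ in range(100_000)]
green = [b for est, b in results if in_green(est)]
red   = [b for est, b in results if not in_green(est)]
print("bug rate among 'expected' results:  ", sum(green) / len(green))
print("bug rate among 'surprising' results:", sum(red) / len(red))
```

Under these assumptions the bug rate among surprising results is exactly zero, while a nonzero fraction of the unsurprising results still contain an error: every uncaught mistake in the literature happens to confirm expectations.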
If you are not going to do an actual data analysis, then I don't think there is much point in thinking about Bayes' rule. You could just reason as follows: "here are my prior beliefs. ooh, here is some new information. i will now adjust my beliefs, by trying to weigh the old and new data based on how reliable and generalizable i think the information is." If you want to call epistemology that involves attaching probabilities to beliefs, and updating those probabilities when new information is available, 'bayesian', that's fine. But unless you have actual data, you are just subjectively weighing evidence as best you can (and not really using Bayes' rule).
The thing that can be irritating is when people then act as if that kind of reasoning is what bayesian statisticians do, and not what frequentist statisticians do. In reality, both types of statisticians use Bayes' rule when it's appropriate. I don't think you will find any statisticians who do not consider themselves 'bayesian' who disagree with the law of total probability.
If you are actually going to analyze data and use bayesian methods, you would end up with a posterior distribution (not simply a single probability). If you simply report the probability of a belief (and not the entire posterior distribution), you're not really doing conventional bayesian analysis. So, in general, I find the conventional Less Wrong use of 'bayesian' a little odd.
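For contrast, here is what a minimal conjugate bayesian analysis with actual data looks like (the prior and the data are made up): the output is a full posterior distribution, of which any single probability is only a summary.

```python
import math

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x (0 < x < 1)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

# Prior: Beta(2, 2) on a success probability theta.
# Data: 7 successes in 10 trials.  Conjugacy gives the posterior directly:
a_post, b_post = 2 + 7, 2 + 3        # posterior is Beta(9, 5)

post_mean = a_post / (a_post + b_post)   # one point summary of the posterior
print("posterior mean:", post_mean)

# But the analysis's real output is the whole distribution, e.g. its density:
for theta in (0.3, 0.5, 0.7, 0.9):
    print(f"density at theta={theta}: {beta_pdf(theta, a_post, b_post):.3f}")
```

Reporting only `post_mean` throws away the shape of the posterior (its spread, its skew, its tail probabilities), which is exactly the information a conventional bayesian analysis is supposed to deliver.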