The question "what proportion of phenotypic variability is due to genetic variability?" always has the same answer: "it depends!" Over what population of environments are you doing the calculation? A trait can go from close to 0% heritable to close to 100% heritable, depending on the range of environments in the sample. That's a definition problem. Further, what should we count as 'genetic'? Gene expression can depend on the environment of the parents, for example (DNA methylation, etc.). That's an environmental...
Adoption studies are biased toward the null of no parenting effect, because adoptive parents aren't randomly selected from the population of potential parents (they often are screened to be similar to biological parents).
Twin studies, I think, are particularly flawed when it comes to estimating heritability (a term with an incoherent definition). Twins share a prenatal environment. In some cases, they even share a placenta.
Plus, the whole gene vs. environment discussion is obsolete, in light of the findings of the past decade. Everything is gene-environment interaction.
It implies that people who reject their claims are not being realistic. I want to be a realist, but I certainly have seen no evidence that any particular race is more likely to commit unscrupulous acts if you control for environment (if that were even possible). It's a propaganda term, like '[my cause] realist.'
Because if TDT endorsed the action, then other people would be able to deduce that TDT endorsed the action, and that (whether or not it had happened in any particular case) their lives would be in danger in any hospital run by a timeless decision theorist, and then we'd be in much the same boat. Therefore TDT calculates that the correct thing for TDT to output in order to maximize utility is "Don't kill the traveler," and thus the doctor doesn't kill the traveler.
TDT could deduce that people would deduce that TDT would not endorse the action, ...
Genes just aren't as much of the story as we thought they were. Whether a gene increases fitness might depend on whether it is methylated, for example. Until recently, we didn't realize that there could be transgenerational transmission of DNA methylation patterns due to environmental factors.
The first one is flawed, IMO, but not for the reason you gave (and I wouldn't call it a 'trick'). The study design is flawed. They should not ask everyone "which is more probable?" People might just assume that the first choice, "Linda is a bank teller" really means "Linda is a bank teller and not active in the feminist movement" (otherwise the second answer would be a subset of the first, which would be highly unusual for a multiple choice survey).
The Soviet Union study has a better design, where people are randomized and only see one option and are asked how probable it is.
If you look at Table 2 in the paper, it shows doses of each vitamin for every study that is considered low risk for bias. I count 9 studies that have vitamin A <10,000 IU and vitamin E <300 IU, which is what PhilGoetz said are good dosage levels.
The point estimates from those 9 studies (see figure 2) are: 2.88, 0.18, 3.3, 2.11, 1.05, 1.02, 0.78, 0.87, 1.99. (Values >1 favor control.)
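As a rough sanity check, here is the unweighted geometric mean of those nine point estimates. This ignores study sizes and variances, so it is not a substitute for the paper's pooled analysis, just a quick look at where the estimates center:

```python
import math

# Point estimates from the 9 low-risk-of-bias studies quoted above
# (ratios; 1 = no effect). Unweighted geometric mean only -- a rough
# check, not a real meta-analysis (no study weights or variances).
estimates = [2.88, 0.18, 3.3, 2.11, 1.05, 1.02, 0.78, 0.87, 1.99]
geo_mean = math.exp(sum(math.log(x) for x in estimates) / len(estimates))
print(round(geo_mean, 2))
```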
Based on this quick look at the studies, I don't see any reason to believe that a "hockey stick" model will show a benefit of supplements at lower dose levels.
It would be nice if the top scoring all-time posts really reflected their impact. Right now there is some bias towards newer posts. Plus, Eliezer's sequences appeared at OB first, which greatly reduced LW upvotes.
Possible solution: every time a post is linked to from a new post, it gets an automatic upvote (perhaps not counted when the link is from the same author). I don't know if it's technically feasible.
In cases where a scientist is using a software package that they are uncomfortable with, I think output basically serves as the only error checking. First, they copy some sample code and try to adapt it to their data (while not really understanding what the program does). Then, they run the software. If the results are about what they expected, they think "well, we must have done it right." If the results are different than they expected, they might try a few more times and eventually get someone involved who knows what they are doing.
Error finding: I strongly suspect that people are better at finding errors if they know there is an error.
For example, suppose we did an experiment where we randomized computer programmers into two groups. Both groups are given computer code and asked to try to find a mistake. The first group is told that there is definitely one coding error. The second group is told that there might be an error, but there also might not be one. My guess is that, even if you give both groups the same amount of time to look, group 1 would have a higher error-identification success rate.
Does anyone here know of a reference to a study that has looked at that issue? Is there a name for it?
Thanks
Very good examples of perceptions driving self-selection.
It might be useful to discuss direct and indirect effects.
Suppose we want to compare fatality rates if everyone drove a Volvo versus if no one did. If the fatality rate were lower in the former scenario than in the latter, that would indicate that Volvos (causally) decrease fatality rates.
It's possible that the effect is entirely indirect. For example, the decrease in the fatality rate might be due entirely to behavior changes (maybe when you get in a Volvo you think 'safety' and dri...
In my opinion, the post doesn't warrant -90 karma points. That's pretty harsh. I think you have plenty to contribute to this site -- I hope the negative karma doesn't discourage you from participating, but rather, encourages you to refine your arguments (perhaps get feedback in the open thread first?)
How about spreading rationality?
This site, I suspect, mostly attracts high IQ analytical types who would have significantly higher levels of rationality than most people, even if they had never stumbled upon LessWrong.
It would be great if the community could come up with a plan (and implement it) to reach a wider audience. When I've sent LW/OB links to people who don't seem to think much about these topics, they often react with one of several criticisms: the post was too hard to read (written at too high of a level); the author was too arrogant (wh...
Perhaps a better title would be "Bayes' Theorem Illustrated (My Ways)"
In the first example you use colored shapes of various sizes to illustrate the ideas visually. In the second example, you use plain rectangles of approximately the same size. If I were a visual learner, I don't know if your post would help me much.
I think you're on the right track in example one. You might want to use shapes whose relative areas are easier to estimate. It's hard to tell whether one triangle is twice as big as another (as measured by area), but it's easie...
It seems to me that the standard solutions don't account for the fact that there are a non-trivial number of families who are more likely to have a 3rd child, if the first two children are of the same sex. Some people have a sex-dependent stopping rule.
P(first two children different sexes | you have exactly two children) > P(first two children different sexes | you have more than two children)
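A toy simulation makes the inequality concrete. All the numbers here are assumed for illustration: 30% of families follow the stopping rule (a third child only if the first two are the same sex), and the rest decide about a third child independently of sex:

```python
import random

# Toy model of a sex-dependent stopping rule (all rates assumed):
# 30% of families have a third child only if the first two children
# are the same sex; other families decide independently of sex.
random.seed(0)
N = 200_000
diff = {"two": 0, "more": 0}    # first two children of different sexes
total = {"two": 0, "more": 0}   # family-size counts
for _ in range(N):
    first_two_same = random.random() < 0.5
    if random.random() < 0.3:            # follows the stopping rule
        has_third = first_two_same
    else:                                # sex-independent decision
        has_third = random.random() < 0.4
    key = "more" if has_third else "two"
    total[key] += 1
    diff[key] += (not first_two_same)

p_two = diff["two"] / total["two"]     # P(diff | exactly two children)
p_more = diff["more"] / total["more"]  # P(diff | more than two children)
print(round(p_two, 3), round(p_more, 3))
```

Under these assumed rates, the conditional probability of mixed-sex first two children is clearly higher among families who stopped at two.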
The other issue with this kind of problem is the ambiguity. What was the disclosure algorithm? How did you decide which child to give me information about? Without that knowledge, we are left to speculate.
We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective.
I think you said it better earlier when you talked about whether the reduction in incidence outweighs the pain caused by the tactic. For some conditions, if it wasn't for the stigma there would be little-to-nothing unpleasant about it (and we wouldn't need to talk about reducing incidence).
I agree with your general principle, ...
Sorry I was slow to respond ... busy with other things.
My answers:
Q1: I agree with you: 1/3, 1/3, 2/3
Q2. ISB is similar to SSB as follows: fair coin; woken up twice if tails, once if heads; epistemic state reset each day
Q3. ISB is different from SSB as follows: more than one coin toss; same number of interviews regardless of result of coin toss
Q4. It makes a big difference. She has different information to condition on. On a given coin flip, the probability of heads is 1/2. But, if it is tails we skip a day before flipping again. Once she has been wok...
The thing that I have been most surprised by is how much NTs like symbols and gestures.
Here are some examples:
Suppose you think your significant other should have a cake on his/her birthday. You are not good at baking. Aspie logic: "It's better to buy a cake from a bakery than to make it myself, since the better the cake tastes the happier they'll be." Of course, the correct answer is that the effort you put into it is what matters (to an NT).
Suppose you are walking through a doorway and you are aware that there is someone about 20 fee
In each of these 3 examples, the person with AS is actually being considerate.
I agreed with all of your comment but this: the person with AS is not "being considerate", when "being considerate" is defined to include modeling the likely preferences of the person you are supposedly "considering."
In each case, the "consideration" is considering themselves, in the other person's shoes, falling prey to availability bias.
Personally, I am very torn on the doorway example -- I usually make an effort to hold the door, but am...
Anthropic reasoning is what leads people to believe in miracles. Rare events have a high probability of occurring if the number of observations is large enough. But whoever that rare event happens to will feel like it couldn't have happened by chance, because the odds against it happening to them were so large.
If you wait until the event occurs, and then start treating it as a random event from a single trial, forming your hypothesis after seeing the data, you'll make inferential errors.
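The arithmetic behind this is simple: even a tiny per-person probability becomes near-certain across enough independent observations. A quick sketch (the per-event probability is an illustrative assumption):

```python
# Chance that a "miraculous" 1-in-a-million event happens to at least
# one person, as the number of independent observations grows.
p = 1e-6  # assumed per-person probability of the rare event
for n in (1_000, 1_000_000, 10_000_000):
    p_at_least_one = 1 - (1 - p) ** n
    print(f"{n:>10,}: {p_at_least_one:.4f}")
```

With ten million observations the event is all but guaranteed to happen to someone, even though it remains astronomically unlikely for any particular person.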
Imagine that there are balls in an urn, labeled with numbers 1, 2,....
This is interesting. We shouldn't get a discontinuous jump.
Consider 2 related situations:
If heads, she is woken up on Monday, and the experiment ends on Tuesday. If tails, she is woken up on Monday and Tuesday, and the experiment ends on Wednesday. In this case, there is no 'not awake' option.
If heads she is woken up on Monday and Tuesday. On Monday she is asked her credence for heads. On Tuesday she is told "it's Tuesday and heads" (but she is not asked about her credence; that is, she is not interviewed). If tails, it's the usual woken up b
At this point, it is just assertion that it's not a probability. I have reasons for believing it's not one, at least, not the probability that people think it is. I've explained some of that reasoning.
I think it's reasonable to look at a large sample ratio of counts (or ratio of expected counts). The best way to do that, in my opinion, is with independent replications of awakenings (that reflect all possibilities at an awakening). I probably haven't worded this well, but consider the following two approaches. For simplicity, let's say we wanted to do...
The probability represents how she should see things when she wakes up.
She knows she's awake. She knows heads had probability 0.5. She knows that, if it landed heads, it's Monday with probability 1. She knows that, if it landed tails, it's either Monday or Tuesday. Since there is no way for her to distinguish between the two, she views them as equally likely. Thus, if tails, it's Monday with probability 0.5 and Tuesday with probability 0.5.
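The credences described above can be written out as simple arithmetic (this just restates the reasoning; it is not an argument for any particular answer):

```python
# Joint credences under the reasoning above: heads/tails are 50/50,
# heads forces Monday, and tails splits evenly between Monday and Tuesday.
p = {
    ("heads", "Monday"): 0.5 * 1.0,
    ("tails", "Monday"): 0.5 * 0.5,
    ("tails", "Tuesday"): 0.5 * 0.5,
}
p_monday = p[("heads", "Monday")] + p[("tails", "Monday")]
print(p_monday)
```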
If you are not going to do an actual data analysis, then I don't think there is much point in thinking about Bayes' rule. You could just reason as follows: "here are my prior beliefs. ooh, here is some new information. i will now adjust my beliefs, by trying to weigh the old and new data based on how reliable and generalizable i think the information is." If you want to call epistemology that involves attaching probabilities to beliefs, and updating those probabilities when new information is available, 'bayesian', that's fine. But, unless you h...