All of ChickCounterfly's Comments + Replies

Did evolution need to understand information encoding in the brain before it achieved full GI?

I don't think a high replication rate necessarily implies the experiments were boring. Suppose you do 10 experiments, but they're all speculative and unlikely to be true: let's say only one of them is looking at a true effect, BUT your sample sizes are enormous and you have a low significance cutoff. So you detect the one effect and get 9 nulls on the others. When people try to replicate them, they have a 100% success rate on both the positive and the negative results.

The fraction of replication attempts that will fail due to random chance depends on statistical power, and replicators tend to aim for very high power, so typically you'd see about 5% false negatives in the replications.
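A quick simulation of that scenario illustrates the point. All parameter choices here are illustrative assumptions, not from any real dataset: one real effect of 0.1 SD among ten hypotheses, originals with n = 10,000 at a strict α = 0.001, and well-powered replications at α = 0.05. Under these assumptions, originals and replications agree on nearly every hypothesis, with the small residual disagreement driven by the replications' own error rates:

```python
import math
import random

random.seed(0)

def significant(true_mean, n, alpha):
    """Simulate one study: two-sided z-test of H0: mu = 0.

    The sample mean of n unit-variance draws is Normal(mu, 1/sqrt(n)),
    so we draw it directly instead of simulating n observations.
    """
    xbar = random.gauss(true_mean, 1 / math.sqrt(n))
    z = abs(xbar) * math.sqrt(n)
    p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return p < alpha

EFFECT = 0.1                          # the one real effect (in SD units), assumed
N_ORIG, ALPHA_ORIG = 10_000, 0.001    # enormous sample, low significance cutoff
N_REP, ALPHA_REP = 2_000, 0.05        # replication powered well above 95%

agree = 0
trials = 2_000
for _ in range(trials):
    # Nine null hypotheses plus one true effect, as in the comment's setup.
    for mu in [0.0] * 9 + [EFFECT]:
        orig = significant(mu, N_ORIG, ALPHA_ORIG)
        rep = significant(mu, N_REP, ALPHA_REP)
        agree += (orig == rep)

agree_rate = agree / (trials * 10)
print(f"replication agreement rate: {agree_rate:.1%}")
```

Under these assumptions the agreement rate comes out around 95%: the true effect almost always replicates, the nulls almost always stay null, and the shortfall from 100% is mostly the replications' 5% false-positive rate on the nulls.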

There's also a new contest on Hypermind called "The Long Fork Project," predicting the impact of Trump vs. Biden. $20k in prize money.

I don't think that paper allows any such estimate because it's based on published results, which are highly biased toward "significant" findings. It's why, for example, in psychology meta-analyses have effect sizes 3x larger than those of registered replications. For an estimate of the replicability of a field you need something like the Many Labs project (~54% replication, median effect size 1/4 of the original study).

2DirectedEvolution
Just glancing at that Many Labs paper, it looks specifically at psych studies that can be replicated through a web browser. Who knows to what extent that generalizes to psych studies more broadly, or to biomedical research? So it sounds like you're worried that a bunch of failed replication attempts got put in the file drawer, even after there was a published significant finding for the replication attempt to push back against?

There is no real question about whether most published research findings are false: we know that's the case from replication attempts. Ioannidis's paper isn't really _about_ plugging in specific numbers, or about showing a priori that it must be the case, so I think you're coming at it from a slightly wrong angle.

4DirectedEvolution
From another of Ioannidis's own papers: if 44% of those unchallenged studies in turn replicated, then total replication rates would be 54%. Of course, Ioannidis himself gives a possible reason why some of these haven't been replicated: "Sometimes the evidence from the original study may seem so overwhelming that further similar studies are deemed unethical to perform." So perhaps we should think that more than 44% of the unchallenged studies would replicate.

If we count the 16% that found relationships with weaker but still statistically significant effects as replications rather than failures to replicate, and add in 16% of the 24% of unchallenged studies, then we might expect that a total of 74% of biomedical papers in high-impact journals with over 1,000 citations have found a real effect. Is that legit? Well, it's his binary, not mine, and in WMPRFAF he's talking about the existence, not the strength, of relationships.

Although this paper looked at highly-cited papers, Ioannidis also notes that "The current analysis found that matched studies that were not so highly cited had a greater proportion of “negative” findings and similar or smaller proportions of contradicted results as the highly cited ones." I.e., less-highly-cited findings have fewer problems with lack of replication. So that 74% is, if anything, most likely a lower bound on replication rates in the biomedical literature more broadly. Ioannidis has refuted himself.
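The arithmetic behind those 54% and 74% figures can be checked directly. The fractions below are the ones quoted in the comment (44% replicated, 16% weaker-but-significant, 16% contradicted, 24% unchallenged); this is only a sketch of the comment's bookkeeping, not a claim about the underlying paper:

```python
# Fractions of highly cited biomedical studies, as quoted in the comment.
replicated = 0.44     # confirmed by later, larger studies
weaker = 0.16         # replicated with weaker but still significant effects
contradicted = 0.16   # contradicted outright
unchallenged = 0.24   # never subjected to a replication attempt

# If the unchallenged studies replicate at the same 44% rate:
total_strict = replicated + replicated * unchallenged

# Counting "weaker but significant" as successful replications too,
# both among the challenged studies and within the unchallenged 24%:
total_loose = (replicated + weaker) + (replicated + weaker) * unchallenged

print(round(total_strict, 2))
print(round(total_loose, 2))
```

This gives roughly 0.55 for the strict reading (close to the comment's 54%, which keeps the unrounded 44% + 10.6%) and 0.74 for the loose reading, matching the 74% figure.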

Great links, thanks.


The Augur launch has unfortunately been a complete catastrophe, as the high transaction costs of ETH right now make it so that simply making a trade costs about $30...I hope they manage to come up with some sort of solution.

Could you point out where he does that exactly? Here's the transcript: https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/

Thank you for the link to the transcript. Here are the parts that I read in that way (emphasis added):

[Sam:] So it seems that goal-directed behavior is implicit (or even explicit) in this definition of intelligence. And so whatever intelligence is, it is inseparable from the kinds of behavior in the world that result in the fulfillment of goals. So we’re talking about agents that can do things; and once you see that, then it becomes pretty clear that if we build systems that harbor primary goals—you know, there are cartoon examples here like making pap
... (read more)

The whole thing hangs on footnote #4, and you don't seem to understand what realists actually believe. Of course they would dispute it, and not just "some" but most philosophers.

4Gordon Seidoh Worley
Right, the whole thing seems like a rather strange confusion to me, since the is-ought gap is a problem certain kinds of anti-realists face but is not a problem for most realists, since for them moral facts are still facts. So it seems to me, not being familiar with Harris, that an alternative interpretation is that Harris is a moral realist and so believes there is no is-ought gap, and thus this business with dialectical vs. logical explanations is superfluous.
4Tyrrell_McAllister
Sam Harris grants the claim that you find objectionable (see his podcast conversation with Yudkowsky). So it’s not the crux of the disagreement that this post is about.

> If we were fully rational (and fully honest), then we would always eventually reach consensus on questions of fact.

The things you cite right before this sentence say the exact opposite. This is only possible given equal priors, and there's no reason to assume rational and honest people would have equal priors about...anything.
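The role of the common-prior assumption is easy to see in a toy Bayes update. The likelihood numbers below are made up purely for illustration: two fully honest, fully rational agents observe the same evidence with the same likelihoods, but because they start from different priors, they end with different posteriors:

```python
def posterior(prior, lik_h, lik_not_h):
    """Bayes update: P(H | E) from P(H), P(E | H), and P(E | not-H)."""
    return prior * lik_h / (prior * lik_h + (1 - prior) * lik_not_h)

# Both agents agree completely on how likely the evidence is under each
# hypothesis (assumed values for illustration)...
lik_h, lik_not_h = 0.8, 0.2

# ...but they start from different priors, so the same honest update
# leaves them disagreeing:
post_optimist = posterior(0.9, lik_h, lik_not_h)
post_skeptic = posterior(0.1, lik_h, lik_not_h)
print(round(post_optimist, 3), round(post_skeptic, 3))

# Only with a common prior does the same evidence force the same posterior:
assert posterior(0.5, lik_h, lik_not_h) == posterior(0.5, lik_h, lik_not_h)
```

The optimist ends up near 0.97 and the skeptic near 0.31, despite identical evidence and flawless updating; the guaranteed-consensus result (Aumann-style agreement) goes through only under the equal-priors assumption the comment is pointing at.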

>Lee Kuan Yew gained very strong individual power over a small country, and unlike the hundreds of times in the history of Earth when that went horribly wrong, Lee Kuan Yew happened to know some economics.

Actually, this isn't a one-off. Monarchies in general achieve superior economic results (https://twin.sci-hub.cc/6b4aea0cae94d2f4fd6c2e459dab6881/besley2017.pdf ):

>We assemble a unique dataset on leaders between 1874 and 2004 in which we classify them as hereditary leaders based on their family history. The core empirical finding is that econom... (read more)

4Charlie Steiner
I'm not super convinced. Their non-empirical part seems tautological, and in their empirical part they kind of tried to separate the subgroups of democracy and "junta leader of the month" by calling one "strong executive constraints" and the other "weak executive constraints," but it's not obvious that you'd expect this method of separation to work (though note that when they did split the data this way, the "democracy-esque" subgroup had the best results). Not only are natural experiments thin on the ground here, but it's very hard to control for confounders, and I see no reason to expect they succeeded. Also, they cited Thomas Paine as from 1976.