I intended to bring it up as plausible, without explicitly saying that I thought it was p>0.5 (it wasn't a firm belief, and I didn't want others to do any Bayesian update on it). I wanted to read arguments about its plausibility. (Some pretty convincing arguments are SBF's high level of luxury consumption, and that he took away potentially all of the Alameda shares from Alameda's EA co-founder, Tara Mac Aulay.)
If it is plausible, even if it isn't p>0.5, then it's possible SBF wasn't selfish, in which case EA has a reason to focus more on inculcating philosophy in its members (whether the answer is "naive utilitarianism is wrong, use rule utilitarianism/virtue ethics/deontology" or "naive utilitarianism almost never advocates fraud", et cetera). Some old and new preventive measures, like EA Forum posts, do already exist; maybe that's enough, maybe not.
Someone on sneerclub said that he is falling on his sword to protect EA's reputation; I don't have a good counterargument to that.
This conversation won't go over well in court, so if he is selfish, it probably reflects mental instability.
"keeping (instead of a list of ideas for projects) a list"
This may be implied, but it may be helpful to be explicit if you mean "literally keep a list, such as in an online document and/or a physical document".
I would take a look at “World Systems” theory as an idea behind the development of the modern balances of power and wealth.
Ironically, World Systems Theory is discredited in economics departments for reasons similar to this criticism of Diamond: both ignore the established practice of an academic field, and both explain things that never happened.
Is it more than 30% likely that, in the short term (say 5 years), Google isn't wrong? If you applied massive scale to the AI algorithms of 1997, you would get better performance, but would the result be economically useful? Is it possible we're in a similar situation today, where the real-world applications of AI are already good enough and additional performance is worth less than the money spent on extra compute? (Self-driving cars are perhaps the closest example: clearly they would be economically valuable, but what if the compute to train them cost 20 billion US dollars? Your competitors will catch up eventually; could you make enough profit in the interim to pay for that compute?)
How slow does it have to get before a quantitative slowing becomes a qualitative difference? AIImpacts https://aiimpacts.org/price-performance-moores-law-seems-slow/ estimates that price/performance used to improve by an order of magnitude (base 10) every 4 years, but that it now takes 12 years.
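A back-of-the-envelope sketch of what those two rates imply (my own arithmetic, assuming a constant exponential improvement rate in each regime; nothing here beyond the 4-year and 12-year figures comes from AIImpacts):

```python
import math

# Annual price/performance improvement factors implied by the two estimates,
# assuming constant exponential growth in each regime.
old_rate = 10 ** (1 / 4)   # ~1.78x per year (10x every 4 years)
new_rate = 10 ** (1 / 12)  # ~1.21x per year (10x every 12 years)

target_factor = 100  # how long until compute per dollar improves 100x?
years_old = math.log(target_factor) / math.log(old_rate)  # = 8 years
years_new = math.log(target_factor) / math.log(new_rate)  # = 24 years

print(f"100x cheaper compute: ~{years_old:.0f} years at the old rate, "
      f"~{years_new:.0f} years at the new rate")
```

So an improvement that used to arrive well within a decade now takes more than two, which seems at least suggestive of a qualitative difference.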
With regard to "How should you develop intellectually, in order to become the kind of person who would have accepted heliocentrism during the Copernican revolution?"
I think a better question might be: "How should you develop intellectually, in order to become the kind of person who would have considered both geocentrism and heliocentrism plausible with probability less than 0.5 and greater than 0.1 during the Copernican revolution?"
Edit: the above may have caused confusion; an alternative phrasing of the same idea:
who would have considered geocentrism plausible with probability less than 0.5 and greater than 0.1, and who would also have considered heliocentrism plausible with probability less than 0.5 and greater than 0.1
Any idea why?
Is it possibly a deliberate strategy to keep average people away from the intellectual movement (which would result in increased intellectual quality)? If so, then I, as an average person, should probably respect this desire and stay away.
Possibly there should be two communities for an intellectual movement: one with a thickly walled garden where quality intellectuals develop ideas, and a separate one with a thinly walled garden to convince a broader audience and drive adoption of those ideas?
RE "Should we then draw different conclusions from their experiments?"
I think, depending on the study's hypothesis and random situational factors, a study like the first can be in the garden of forking paths. A study that stops at n=100 when it reaches a predefined statistical threshold would not necessarily have met that threshold again had it kept running until n=900.
Suppose a community of researchers is split in half (this is intended to match the example in this article, but to imagine many studies rather than just one study and its replication). The first half of researchers (the non-replicators) conduct research as described first in the article: they predefine a statistical threshold and stop the study once that threshold is reached. Additionally, if the threshold has not been reached by n=1000, the study's negative result is published. The second half (the replicators) only do replications of the first half's work, with the same sample sizes.
After the studies are done, in some cases a non-replicator study will find a result and the replicator study will find the opposite. In such cases, who is more likely to be correct? I think this article implies that the answer is 50%, because it is supposedly the same study repeated twice. I do not think that is correct; I think the replicators are correct more than 50% of the time, because the non-replicator studies can be in the garden of forking paths.
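A minimal simulation sketch of that split (my own illustration with arbitrary choices, not anything from the article: a true effect of zero, a two-sided z-test with known variance, a p < 0.05 threshold, and peeking after every batch of 10 subjects up to n = 1000):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def p_value(xs):
    """Two-sided z-test that the mean of xs is 0, assuming known sd = 1."""
    z = abs(np.mean(xs)) * sqrt(len(xs))
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

ALPHA, MAX_N, BATCH, TRIALS = 0.05, 1000, 10, 2000
early_stop_hits = 0     # non-replicator declares an effect
replicator_agrees = 0   # fixed-n replication at the same sample size also does

for _ in range(TRIALS):
    data = rng.normal(0.0, 1.0, MAX_N)  # true effect is zero
    found, stop_n = False, MAX_N
    for n in range(BATCH, MAX_N + 1, BATCH):  # peek after every batch
        if p_value(data[:n]) < ALPHA:
            found, stop_n = True, n
            break
    if found:
        early_stop_hits += 1
        replication = rng.normal(0.0, 1.0, stop_n)  # same n, no peeking
        if p_value(replication) < ALPHA:
            replicator_agrees += 1

print(f"non-replicators find an 'effect': {early_stop_hits / TRIALS:.2f}")
if early_stop_hits:
    print(f"replicators confirm those findings: {replicator_agrees / early_stop_hits:.2f}")
```

Because the true effect here is zero, a fixed-n test comes out positive only about 5% of the time, while the peek-and-stop procedure crosses the threshold considerably more often; so in the disagreement cases the replicators are usually the ones who are right.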
The first section of this article explains the general idea of how early stopping can lead to bias: https://www.statisticsdonewrong.com/regression.html
My attempt to be more specific and somewhat mathematical:
For any study, if we calculate the statistical result at every n, then as n increases the result may or may not converge monotonically (for almost any study it will converge to something, since reality has regularities, but not necessarily monotonically). Randomness can make the result bounce up and down between different values of n. For the non-replicator researchers, at every n the study can either continue or end, and those two options are two different paths in the garden of forking paths. If a study's results do not converge monotonically as n increases, there may be outliers at certain specific n: for a specific n (or a minority of n), the result passes the statistical threshold, even though for all other n it would not (the second code sketch below tries to illustrate this case).
'Nonmonotonic convergence' describes which studies are affected by the early stopping bias.
Post-script: if a study converges monotonically, then there is no problem with early stopping. However, even if your study has been converging monotonically at every previous n, that is no absolute guarantee that it would have continued to converge monotonically as n increased; still, the larger the sample size, the more confident you can be.
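To make "a minority of n" concrete, here is a companion sketch (same hypothetical z-test and zero true effect as in the sketch above) that counts how often a study's running p-value dips below the threshold at some intermediate n even though its final p-value at n = 1000 does not:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def p_value(xs):
    """Two-sided z-test that the mean of xs is 0, assuming known sd = 1."""
    z = abs(np.mean(xs)) * sqrt(len(xs))
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

TRIALS, crossers = 500, 0
for _ in range(TRIALS):
    data = rng.normal(0.0, 1.0, 1000)  # zero true effect
    dips = any(p_value(data[:n]) < 0.05 for n in range(10, 1001, 10))
    if dips and p_value(data) >= 0.05:  # significant at some n, but not at the end
        crossers += 1

print(f"cross the threshold at some n but not at n=1000: {crossers / TRIALS:.2f}")
```

Those are exactly the sequences that an early-stopping rule would publish as positive findings even though a fixed-n analysis of the same data would not.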