New replication: I find that the results in Moretti (AER 2021) are driven by coding errors. The paper studies agglomeration effects on innovation (do bigger cities cause technological progress?), but the results supporting a causal interpretation don't hold up.
https://twitter.com/michael_wiebe/status/1749462957132759489
What was the effect of reservists joining the protests? This says: "Some 10,000 military reservists were so upset, they pledged to stop showing up for duty." Does that mean they were actively 'on strike' from their duties? It looks like they're now doing grassroots support (distributing aid).
Yeah, I do reanalysis of observational studies rather than rerunning experiments.
Do you have any specific papers in mind?
But isn't it problematic to start the analysis at "superhuman AGI exists"? Then we need to make assumptions about how that AGI came into being. What are those assumptions, and how robust are they?
Why start the analysis at superhuman AGI? Why not solve the problem of aligning AI for the entire trajectory from current AI to superhuman AGI?
Also came here to say that 'latter' and 'former' are mixed up.
In particular, we should be interested in how long it will take for AGIs to proceed from human-level intelligence to superintelligence, which we’ll call the takeoff period.
Why is this the right framing? Why not focus on the duration between 50% of human-level and superintelligence? (Or p% of human-level, for general p.)
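To make the question concrete (my notation, not the post's): let $t(p)$ be the first time AI reaches fraction $p$ of human-level capability, and let $t_{\mathrm{SI}}$ be the arrival time of superintelligence. Then define the takeoff duration as a function of the starting threshold:

$$D(p) = t_{\mathrm{SI}} - t(p)$$

The post's takeoff period is $D(1)$, but the choice $p = 1$ seems arbitrary; studying $D(0.5)$, or the whole curve $p \mapsto D(p)$, would show how sensitive 'takeoff' is to where you start the clock.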
Should you "trust literatures, not papers"?
I replicated the literature on meritocratic promotion in China, and found that the evidence is not robust.
https://twitter.com/michael_wiebe/status/1750572525439062384