Optimal Timing for Superintelligence: Mundane Considerations for Existing People
[Sorry about the lengthiness of this post. I recommend not fixating too much on all the specific numbers and the formal apparatus. Originally the plan was to also analyze optimal timing from an impersonal (x-risk-minimization) perspective; but to prevent the text from ballooning even more, that topic was set aside...]
In other papers (e.g. Existential Risks (2001), Astronomical Waste (2003), and Existential Risk Prevention as a Global Priority (2013)) I focus mostly on what follows from a mundane impersonal perspective. Since that perspective is even further out of step with how humanity makes governance decisions, is it your opinion that those papers should likewise be castigated? (Some people who hate longtermism have done so, quite vehemently.) But my view is that there can be value in working out what follows from various possible theoretical positions, especially ones that have a distinguished pedigree and are taken seriously in the intellectual tradition. Certainly this is a very standard thing to do in academic philosophy, and I think it's usually a healthy practice.