Darwinian Traps and Existential Risks
> This is part 1 of a 3-part sequence summarizing my book, The Darwinian Trap (see part 2 here and part 3 here). The book aims to popularize the concept of multipolar traps and establish them as a broader cause area. If you find this series intriguing, contact me at kristian@kristianronn.com with any input or ideas.

Global coordination is arguably the most critical challenge facing humanity today: it is a necessary ingredient in solving existential risks, and its absence is a significant barrier to mitigating them. From nuclear proliferation to artificial intelligence development and climate change, our inability to collaborate effectively on a global scale not only exacerbates these threats but, if left unaddressed, will keep generating new systemic vulnerabilities.

In this sequence, I will argue that the root of this coordination problem lies in the very mechanism that shaped our species: natural selection. This evolutionary process, operating as a trial-and-error optimization algorithm, prioritizes immediate survival and reproduction over long-term, global outcomes. As a result, our innate tendencies often favor short-term gains and localized benefits, even when they conflict with the greater good of our species and planet. Because natural selection cannot look ahead to future optimal states, it has left us ill-equipped to handle global-scale challenges.

In a world of finite resources, competition rather than cooperation has often been the more adaptive trait, leading to the emergence of self-interested behaviors that arguably dominate modern societies. This evolutionary legacy manifests in the form of nationalistic tendencies, economic rivalries, dangerous arms races, and a general reluctance to sacrifice immediate benefits for long-term collective gains. A toy model of this dynamic appears after the outline below.

This three-part series summarizes my book, The Darwinian Trap: The Hidden Evolutionary Forces That Explain Our World (and Threaten Our Future):

* Part 1 (the pa
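To make the "competition beats cooperation" claim concrete, here is a minimal sketch (mine, not from the book) of replicator dynamics in a one-shot Prisoner's Dilemma, the standard toy model of a multipolar trap. The payoff values, starting population share, and step sizes are illustrative assumptions; the qualitative result is not.

```python
# Replicator dynamics for a one-shot Prisoner's Dilemma.
# Illustrative sketch: payoffs T > R > P > S and all constants are assumptions.

T, R, P, S = 5.0, 3.0, 1.0, 0.0  # temptation, reward, punishment, sucker payoffs

def step(x, dt=0.01):
    """Advance the cooperator share x by one Euler step of the replicator equation."""
    f_coop   = x * R + (1 - x) * S        # expected payoff of a cooperator
    f_defect = x * T + (1 - x) * P        # expected payoff of a defector
    f_avg    = x * f_coop + (1 - x) * f_defect
    return x + dt * x * (f_coop - f_avg)  # shares grow with relative fitness

x = 0.99  # start with 99% cooperators
for t in range(20001):
    if t % 5000 == 0:
        avg = x * (x * R + (1 - x) * S) + (1 - x) * (x * T + (1 - x) * P)
        print(f"step={t:5d}  cooperator share={x:.4f}  average payoff={avg:.3f}")
    x = step(x)
```

Because defectors out-earn cooperators at every population mix, cooperation collapses and the average payoff falls from roughly 3 (everyone cooperating) to 1 (everyone defecting): each individually adaptive move leaves the whole population worse off, which is the trap in miniature.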
It doesn't disprove the doomsday argument. It does, however, offer an alternative explanation.
I don't think they necessarily care about a specific time period. I think they care about whether they can learn how the simulated beings interact with a new technology, in a way that prevents them from repeating our mistakes. And it could be the case that our particular time is the most efficient to learn from (i.e., the period right before you might go extinct).