Phil Torres
I was originally planning on writing a short reply to Avital Balwit’s recent critique of my critique of longtermism, but because her post gets almost everything wrong about my criticisms (and longtermism), I’m going to do what I did with Steven Pinker’s chapter on “Existential Threats” in Enlightenment...
It's been a year, but I finally wrote up my critique of "longtermism" (of the Bostrom / Toby Ord variety) in some detail. I explain why this ideology could be extremely dangerous -- a claim that, it seems, some others in the community have picked up on recently (which is...
PART 2: If humanity is screwed, why sacrifice anything to reduce potential risks? Why forgo the convenience of fossil fuels, or exhort governments to rethink their nuclear weapons policies? Eat, drink, and be merry, for tomorrow we die! A 2013 survey in four English-speaking countries showed that among the respondents...
This is the first of three posts; the critique has been split into three parts to enhance readability, given the document's length. For the official publication, go here. Key findings: → The first quarter or so of the chapter contains at least two quotes from other scholars that are taken...
Can anyone tell me what's wrong with the following "refutation" of the simulation argument? (I know this is a bit long -- my apologies! I also posted an earlier draft several months ago and got some excellent feedback. I don't see a flaw, but perhaps I'm missing something!) Consider the...
Here I argue that following the Maxipok rule could have truly catastrophic consequences. Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks." And finally, here I argue that a superintelligence singleton constitutes...