I just started a writing contest for detailed scenarios of how we get from our current situation to AI ending the world. I want to compile the results on a website so we have an easily shareable link with more scenarios than can be dismissed ad hoc: individual scenarios pulled from a huge list are easy to argue against, and knocking one down discredits the whole list, but a critical mass presented at once defeats this effect. If anyone has good examples, I'll add them to the website.
Yes, I should have been clearer that I was addressing people with a very high p(doom). The prisoner/bomb analogy is indeed something of a simplification, but I do think there's a valid connection: half-heartedly attempting to get the assistance of people more powerful than you, then prematurely giving up on it as hopeless.
Thank you for your kind words! I was expecting most reactions to be fairly anti-"we should", but I figured it was worth a try.
The most common anti-safety arguments I see in the wild, neither steel-manned nor straw-manned:
Honestly, I don't think fake stories are even necessary, and becoming associated with fake news could be very bad for us. I don't think we've seriously tried to convince people that the big bad AI is real. What, two podcasts and an opinion piece in Time? We've never done a real media push, but all indications are that people are ready to hear it. "AI researchers believe there's a 10% chance their work will end life" is all the headline you need.
I honestly can't say. I wish I could.