whereas if SIAI had never existed, and an early AI Chernobyl did occur, this would have prompted the governments to take effective measures to regulate AI.
What sort of rogue AI disaster are you envisioning that is big enough to get this attention, but then stops short of wiping out humanity? Keep in mind that this disaster would be driven by a deliberative intelligence.
I'm new here, but I think I've been lurking since the start of the (latest, anyway) cryonics debate.
I may have missed something, but I saw nobody claiming that signing up for cryonics was the obvious correct choice -- it was more people claiming that believing cryonics is obviously the incorrect choice is irrational. And even that is perhaps too strong a claim -- I think the debate centred more on the probability of cryonics working than on its utility.