What do you think of Kenzi's views?
It’s also possible that FAI might necessarily require the ability to form human-like moral relationships, not only with humans but also with nature. Such an FAI might not treat the universe as its cosmic endowment, and any von Neumann probes it sent out might remain inconspicuous.
Like the great filter arguments, this would reduce the probability of “rogue singletons” as an explanation of the Fermi paradox (and would also count against oracles, since human morality is unreliable).
How plausible do you find the key points in this chapter? (see list above)
Bostrom lists a number of serious potential risks from technologies other than AI on page 231, but he stops short of saying that science in general may soon reach a point where it is too dangerous to be allowed to develop without strict controls. He considers whether AGI could be the tool that prevents these other technologies from being used catastrophically, but the elephant in the room is the total surveillance state that would be required to prevent their misuse in the near term – at least as long as humans remain recognizably human and there is still something left to lose from UFAI. Is centralized surveillance of everything, everywhere the future with the least existential risk?
Is there anything particular you would like to do by the end of this reading group, other than read and discuss the last chapter?
It would be interesting to me to read others’ more free-ranging impressions of where Bostrom gets it right in Superintelligence – and of what he may have missed or not emphasized enough.