Comment author: KatjaGrace 24 March 2015 03:08:16AM 2 points [-]

Is there anything particular you would like to do by the end of this reading group, other than read and discuss the last chapter?

Comment author: RobbyRe 25 March 2015 02:42:32PM 1 point [-]

It would be interesting to me to read others’ more free-ranging impressions of where Bostrom gets it right in Superintelligence – and what he may have missed or not emphasized enough.

Comment author: KatjaGrace 17 March 2015 01:35:07AM 3 points [-]

What do you think of Kenzi's views?

Comment author: RobbyRe 19 March 2015 08:14:54PM 1 point [-]

It’s also possible that FAI might necessarily require the ability to form human-like moral relationships, not only with humans but also with nature. Such an FAI might not treat the universe as its cosmic endowment, and any von Neumann probes it sent out might remain inconspicuous.

Like great filter arguments, this would also reduce the probability of “rogue singletons” as an explanation under the Fermi paradox (and would count against oracles as well, since human morality is unreliable).

Comment author: KatjaGrace 10 March 2015 02:08:41AM 3 points [-]

How plausible do you find the key points in this chapter? (see list above)

Comment author: RobbyRe 13 March 2015 04:43:22PM 1 point [-]

Bostrom lists a number of serious potential risks from technologies other than AI on page 231, but he apparently stops short of saying that science in general may soon reach a point where it becomes too dangerous to be allowed to develop without strict controls. He considers whether AGI could be the tool that prevents these other technologies from being used catastrophically, but the unseen elephant in the room is the total surveillance state that would be required to prevent their misuse in the near future – at least for as long as humans remain recognizably human and there is still something to lose from UFAI. Is centralized surveillance of everything, everywhere, the future with the least existential risk?