Voltairina comments on The Magnitude of His Own Folly - Less Wrong

Post author: Eliezer_Yudkowsky, 30 September 2008 11:31AM


Comment author: Voltairina, 28 March 2012 07:07:50AM

If beating other researchers to AI is important, it may also be important that your FAI can win the intelligence-advancement race against a non-friendly AI that comes online at about the same time — on the assumption that the moment you finally have the technology and know-how together may be either somewhat after, or very close to, the moment someone else develops an AI as well. You'd want some way to give the 'newborn' enough computing power, and enough access to firepower, to beat the other AI, either by exterminating it or by outracing it. That's IF we can even know whether it IS friendly. And if it isn't friendly, we basically want it in a black box with no way of communicating with it. Developing a self-improving intelligence is daunting.