No. Openly available knowledge is not enough to obtain a decisive advantage. For this, close cooperation with humans and human-led organizations is absolutely necessary. Trust-building will take years even for AGIs. In the meantime, competing AGIs will appear.
Ben Goertzel does not want to waste time debating any more - he pushes open AGI development to prevent any hardware overhang. Other readers of Bostrom's book might start other projects against singleton AI development. We do not have a ceteris paribus condition - we can shape what the default outcome will be.
Nice blog post about AI and existential risks by my friend and occasional LW poster. He was inspired by the disappointingly bad debate on Edge.org. Feel free to share it if you like it. I think it is quite a good introduction to Bostrom's and MIRI's arguments.
"The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct."
http://nthlook.wordpress.com/2014/11/26/why-fear-ai/