I think Google is still quite a ways from AGI, but in all seriousness, if there were ever a compelling national-security interest to serve as a basis for nationalizing inventions, AGI would be it. At the very least, we need some serious regulation of how such efforts are handled.
Which raises another issue... is there a powerful disincentive to reveal the emergence of an artificial superintelligence? Either by the entity itself (because we might consider pulling the plug), or by its creators, who might lose some strategic advantage (say, a financial institution's market trading edge) if their creation were taken away?
Bostrom's wonderful book lays out many important issues and frames a lot of research questions which it is up to all of us to answer.
Thanks to Katja for her introduction and all of these good links.
One issue that I would like to highlight: The mixture of skills and abilities that a person has is not the same as the set of skills which could result in the dangers Bostrom will discuss later, or other dangers and benefits which he does not discuss.
For this reason, in the next phase of this work, we have to understand what specific future technologies could lead us to what specific outcomes.
Systems which are quite deficient in some ways, relative to people, may still be extremely dangerous.
Meanwhile, the intelligence of a single person, even a single genius, taken in isolation and only allowed to acquire limited resources, is actually not all that dangerous. People become dangerous when they form groups, access the existing corpus of human knowledge, coordinate with each other to deploy resources, and find ways to augment their abilities.
"Human-level intelligence" is only a first-order approximation to the set of skills and abilities which should concern us.
If we want to prevent disaster, we have to be able to distinguish dangerous systems. Unfortunately, checking whether a machine can do all of the things a person can is not the correct test.
Do you think, then, that it's a dangerous strategy for an entity such as Google, which may be using its enormous and growing accumulation of "the existing corpus of human knowledge" as a suitably large data set, to pursue development of AGI?
Are there any ongoing efforts to model the intelligent behaviour of other organisms besides the human model?
What did you find most interesting in this week's reading?
I found interesting the idea that great leaps forward towards the creation of AGI might not be a question of greater resources or technological complexity, but that we might be overlooking something relatively simple that could describe human intelligence, with the Ptolemaic vs. Copernican systems given as an example.
What do you think of I. J. Good's argument? (p4)
If an artificial superintelligence had access to all the prior steps that led to its current state, I think Good's argument is correct... the entity would make exponential progress in boosting its intelligence still further. I just finished James Barrat's AI book Our Final Invention and found it interesting that, towards the end of his life, Good came to see his prediction as more of a danger than a promise for continued human existence.
Definitely! See Wikipedia and e.g. this book.
Thanks... I will check it out further!