Comment author: lukeprog 16 September 2014 03:50:42AM 1 point [-]

Definitely! See Wikipedia and e.g. this book.

Comment author: VonBrownie 16 September 2014 04:28:11AM 1 point [-]

Thanks... I will check it out further!

Comment author: mvp9 16 September 2014 02:19:09AM 1 point [-]

I think Google is still quite a ways from AGI, but in all seriousness, if there were ever a compelling national-security interest to be used as a basis for nationalizing inventions, AGI would be it. At the very least, we need some serious regulation of how such efforts are handled.

Comment author: VonBrownie 16 September 2014 02:27:56AM *  4 points [-]

Which raises another issue... is there a powerful disincentive to reveal the emergence of an artificial superintelligence? Either by the entity itself (because we might consider pulling the plug) or by its creators who might see some strategic advantage lost (say, a financial institution that has gained a market trading advantage) by having their creation taken away?

Comment author: SteveG 16 September 2014 01:53:29AM 11 points [-]

Bostrom's wonderful book lays out many important issues and frames a lot of research questions which it is up to all of us to answer.

Thanks to Katja for her introduction and all of these good links.

One issue that I would like to highlight: The mixture of skills and abilities that a person has is not the same as the set of skills which could result in the dangers Bostrom will discuss later, or other dangers and benefits which he does not discuss.

For this reason, in the next phase of this work, we have to understand what specific future technologies could lead us to what specific outcomes.

Systems which are quite deficient in some ways, relative to people, may still be extremely dangerous.

Meanwhile, the intelligence of a single person, even a single genius, taken in isolation and allowed to acquire only limited resources, actually is not all that dangerous. People become dangerous when they form groups, access the existing corpus of human knowledge, coordinate with each other to deploy resources, and find ways to augment their abilities.

"Human-level intelligence" is only a first-order approximation to the set of skills and abilities which should concern us.

If we want to prevent disaster, we have to be able to distinguish dangerous systems. Unfortunately, checking whether a machine can do all of the things a person can is not the correct test.

Comment author: VonBrownie 16 September 2014 02:05:51AM 3 points [-]

Do you think, then, that it's a dangerous strategy for an entity such as Google to use its enormous and growing accumulation of "the existing corpus of human knowledge" as a suitably large data set for pursuing the development of AGI?

Comment author: VonBrownie 16 September 2014 01:42:47AM 1 point [-]

Are there any ongoing efforts to model the intelligent behaviour of other organisms besides the human model?

Comment author: KatjaGrace 16 September 2014 01:21:38AM *  1 point [-]

What did you find most interesting in this week's reading?

Comment author: VonBrownie 16 September 2014 01:35:50AM 5 points [-]

I found interesting the idea that great leaps forward towards the creation of AGI might not be a question of greater resources or technological complexity, but that we might be overlooking something relatively simple that could describe human intelligence... with the Ptolemaic vs. Copernican systems offered as an example.

Comment author: KatjaGrace 16 September 2014 01:07:30AM 1 point [-]

What do you think of I. J. Good's argument? (p4)

Comment author: VonBrownie 16 September 2014 01:23:04AM 2 points [-]

If an artificial superintelligence had access to all the prior steps that led to its current state, I think Good's argument is correct... the entity would make exponential progress in boosting its intelligence still further. I just finished James Barrat's AI book Our Final Invention and found it interesting to note that Good, towards the end of his life, came to see his prediction as more danger than promise for continued human existence.