Richard_Loosemore comments on AI safety in the age of neural networks and Stanislaw Lem 1959 prediction - Less Wrong

Post author: turchin 31 January 2016 07:08PM


Comment author: Richard_Loosemore 31 January 2016 07:32:47PM 1 point

The biggest problem here is that you start from the assumption that current neural net systems will eventually be made into AI systems while retaining all the failings and limitations they have now, and then you extrapolate massively from that assumption.

But there is absolutely no reason to believe that the evolutionary changes to NNs that are required in order to make them fully intelligent (AGI) will leave them with all the same characteristics they have now. There will be SO MANY changes that virtually nothing about the current systems will be true of those future systems.

Which renders your entire extrapolation moot.

Comment author: TheAncientGeek 01 February 2016 07:17:28PM 2 points

virtually nothing

OK, but does anything survive? How about the idea that

  • Some systems will be opaque to human programmers

  • ...they will also be opaque to themselves

  • ...which will stymie recursive self-improvement.

Comment author: Richard_Loosemore 03 February 2016 05:40:32PM 1 point

Well, here is my thinking.

Neural net systems have one major advantage: they use massive weak-constraint relaxation (aka the wisdom of crowds) to do the spectacular things they do.
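As a concrete toy illustration of what weak-constraint relaxation means, here is a minimal sketch. This is my own assumption of a standard instance of the idea (a Hopfield-style network), not the particular system Loosemore describes: many weak pairwise constraints each "vote" on each unit, and the network settles into a state that satisfies most of them, even after some units are corrupted.

```python
import numpy as np

# Store one pattern as many weak pairwise constraints (Hebbian weights).
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)  # no unit constrains itself

# Corrupt two units, then relax: each unit repeatedly takes the
# majority "vote" of all the weak constraints acting on it.
state = pattern.copy()
state[0], state[1] = -state[0], -state[1]
for _ in range(5):
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

assert np.array_equal(state, pattern)  # the crowd of weak constraints restores the pattern
```

No single constraint is decisive; the robustness comes from the aggregate, which is the "wisdom of crowds" aspect.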

But they have a cluster of disadvantages, all related to their inability to do symbolic, structured cognition. These have been known for a long time -- Donald Norman, for example, wrote down a list of issues in his chapter at the end of the two PDP volumes (McClelland and Rumelhart, 1987).

But here's the thing: most of the suggested ways to solve this problem (including the one I use) involve keeping the massive weak constraint relaxation, throwing away all irrelevant assumptions, and introducing new features to get the structured symbolic stuff. And that revision process generally leaves you with hybrid systems in which all the important stuff is NO LONGER particularly opaque. The weak constraint aspects can be done without forcing (too much) opaqueness into the system.

Are there ways to develop neural nets that do cause them to stay totally opaque, while solving all the issues that stand between the current state of the art and AGI? Probably. Certainly there is at least one: whole brain emulation gives you opaqueness by the bucketload. But I think those approaches are the exception rather than the rule.

So the short answer to your question is: the opaqueness, at least, will not survive.

Comment author: Baughn 04 February 2016 02:59:21AM 1 point

But here's the thing: most of the suggested ways to solve this problem (including the one I use) involve keeping the massive weak constraint relaxation, throwing away all irrelevant assumptions, and introducing new features to get the structured symbolic stuff. And that revision process generally leaves you with hybrid systems in which all the important stuff is NO LONGER particularly opaque. The weak constraint aspects can be done without forcing (too much) opaqueness into the system.

Where can I read about this?