
Richard_Loosemore comments on AI safety in the age of neural networks and Stanislaw Lem 1959 prediction - Less Wrong Discussion

8 points | Post author: turchin | 31 January 2016 07:08PM


Comments (9)


Comment author: Richard_Loosemore 31 January 2016 07:32:47PM 1 point

The biggest problem here is that you start from the assumption that current neural net systems will eventually be made into AI systems with all the failings and limitations they have now. You then extrapolate massively from that assumption.

But there is absolutely no reason to believe that the evolutionary changes to NNs that are required in order to make them fully intelligent (AGI) will leave them with all the same characteristics they have now. There will be SO MANY changes that virtually nothing about the current systems will be true of those future systems.

Which renders your entire extrapolation moot.

Comment author: TheAncientGeek 01 February 2016 07:17:28PM 2 points

virtually nothing

OK, but does anything survive? How about the idea that

  • Some systems will be opaque to human programmers

  • ...they will also be opaque to themselves

  • ...which will stymie recursive self-improvement.

Comment author: Richard_Loosemore 03 February 2016 05:40:32PM 1 point

Well, here is my thinking.

Neural net systems have one major advantage: they use massive weak-constraint relaxation (aka the wisdom of crowds) to do the spectacular things they do.
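[Editor's note: the following is an illustrative sketch of what "weak-constraint relaxation" means in general, using a toy one-pattern Hopfield-style network; it is not Loosemore's actual system. Each unit repeatedly settles toward the weighted consensus of all the other units (each connection is a weak constraint), and the network as a whole relaxes into a stored pattern even from a corrupted starting state.]

```python
import numpy as np

# Store one pattern; Hebbian outer-product weights, zero diagonal.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

# Corrupt two units, then relax: each unit aligns with the
# weighted "vote" of the others until no unit wants to flip.
state = pattern.copy()
state[0] = -state[0]
state[3] = -state[3]

for _ in range(10):  # a few synchronous sweeps suffice at this scale
    new_state = np.where(W @ state >= 0, 1, -1)
    if np.array_equal(new_state, state):
        break
    state = new_state

print((state == pattern).all())  # prints True: relaxation restores the pattern
```

No single unit enforces the answer; the stored pattern emerges because many weak pairwise constraints are satisfied simultaneously, which is the "wisdom of crowds" flavor being described.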

But they have a cluster of disadvantages, all related to their inability to do symbolic, structured cognition. These have been known for a long time -- Donald Norman, for example, wrote down a list of issues in his chapter at the end of the two PDP volumes (McClelland and Rumelhart, 1986).

But here's the thing: most of the suggested ways to solve this problem (including the one I use) involve keeping the massive weak constraint relaxation, throwing away all irrelevant assumptions, and introducing new features to get the structured symbolic stuff. And that revision process generally leaves you with hybrid systems in which all the important stuff is NO LONGER particularly opaque. The weak constraint aspects can be done without forcing (too much) opaqueness into the system.

Are there ways to develop neural nets that keep them totally opaque while solving all the issues that stand between the current state of the art and AGI? Probably. Well, certainly there is at least one: whole brain emulation gives you opaqueness by the bucketload. But I think those approaches are the exception rather than the rule.

So the short answer to your question is: the opaqueness, at least, will not survive.

Comment author: Baughn 04 February 2016 02:59:21AM 1 point

But here's the thing: most of the suggested ways to solve this problem (including the one I use) involve keeping the massive weak constraint relaxation, throwing away all irrelevant assumptions, and introducing new features to get the structured symbolic stuff. And that revision process generally leaves you with hybrid systems in which all the important stuff is NO LONGER particularly opaque. The weak constraint aspects can be done without forcing (too much) opaqueness into the system.

Where can I read about this?

Comment author: Richard_Loosemore 31 January 2016 08:21:29PM 1 point

Well, the critical point is whether NNs are currently on a track to AGI. If they are not, then one cannot extrapolate anything. Compare: steam engine technology is also never going to become AGI, so how would it look if someone catalogued the characteristics of steam engines and tried to predict the future of AGI from those characteristics?

My own research (which started with NNs, but tried to find ways to make them useful for AGI) is already well beyond the point where the statements you make about NNs have any relevance. Never mind what will be happening in 5, 10 or 20 years.

Comment author: turchin 31 January 2016 08:25:25PM 1 point

It sounds like you are on track to a hard takeoff, but from other domains I know that people tend to overestimate their achievements 10-100 times, so I have to be a little bit sceptical. NNs are much closer to AGI than steam engines, anyway.

Comment author: turchin 31 January 2016 07:53:10PM 1 point

I agree that NNs will eventually evolve into something else, which will end the NN age. In my opinion that age may last 10-20 years, but it may be as short as 5. After the NN age ends, most of these assumptions should be revisited; for now, though, it looks like we are living in such an age.