whpearson comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM




Comment author: whpearson 16 August 2010 11:00:24AM  0 points

I'm not a member of SIAI, but my reason for thinking that AGI is not just going to be lots of narrow bits of AI stuck together is that I can see interesting systems that haven't been fully explored (due to the difficulty of exploring them). These types of systems might solve some of the open problems not addressed by narrow AI.

These are problems such as:

  • How can a system become good at so many different things when it starts off essentially the same? Especially puzzling is how people build complex (unconscious) machinery for dealing with problems we are not adapted for, like chess.
  • How can a system look after and upgrade itself without getting completely pwned by malware? (We do get partially pwned by hostile memes, but that is not a complete takeover of the same type as getting rooted.)

Now, I also doubt that these systems will develop quickly once people get around to investigating them. They will have elements of traditional narrow AI in them as well, but as changeable, adaptable parts of the system rather than fixed sub-components. What I think needs exploring is primarily changes in software life-cycles, rather than a change in the nature of the software itself.
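To make the distinction concrete, here is a minimal sketch (my own illustration, not anything from the comment) of the difference between fixed sub-components and changeable parts: a system that holds its capabilities in a registry it can overwrite while running, so upgrading a capability is an ordinary operation in the software's life-cycle rather than a rebuild of a fixed module. All names (`AdaptiveSystem`, `install`, `perform`) are hypothetical.

```python
# Illustrative sketch only: a system whose parts are swappable at
# runtime rather than being fixed sub-components.

from typing import Callable, Dict


class AdaptiveSystem:
    """Holds named skills that can be replaced while the system runs."""

    def __init__(self) -> None:
        self._skills: Dict[str, Callable[[str], str]] = {}

    def install(self, name: str, skill: Callable[[str], str]) -> None:
        # Installing over an existing name *is* the upgrade path;
        # nothing in the system treats a component as permanent.
        self._skills[name] = skill

    def perform(self, name: str, task: str) -> str:
        return self._skills[name](task)


system = AdaptiveSystem()
system.install("greet", lambda task: f"hello, {task}")
print(system.perform("greet", "world"))   # hello, world

# Later the same component is replaced in place -- a change in the
# software's life-cycle, not a change inside a fixed sub-component.
system.install("greet", lambda task: f"hi, {task}!")
print(system.perform("greet", "world"))   # hi, world!
```

The point of the sketch is only the design choice: the "narrow AI" pieces live behind names the system itself can rebind, which is the changeable-parts property the comment gestures at.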