
Mark_Friedenbach comments on Tools want to become agents - Less Wrong Discussion

12 points · Post author: Stuart_Armstrong · 04 July 2014 10:12AM


Comments (81)

You are viewing a single comment's thread.

Comment author: [deleted] · 08 July 2014 06:04:21PM · 1 point

For a very different perspective from both narrow AI and, to a lesser extent, Goertzel*, you might want to contact Pat Langley. He takes a Good Old-Fashioned AI (GOFAI) approach to Artificial General Intelligence:

http://www.isle.org/~langley/

His competing AGI conference series:

http://www.cogsys.org/

  • Goertzel probably approves of all the work Langley does; certainly the reasoning engine of OpenCog is similarly structured. But unlike Langley, the OpenCog team thinks there isn't one true path to human-level intelligence, GOFAI or otherwise.

EDIT: Not that I think you shouldn't be talking to Goertzel! In fact, I think his CogPrime architecture is the only fully fleshed-out AGI design that, as specified, could reach and surpass human intelligence, and his GOLUM meta-AGI architecture is the only FAI design I know of. My only critique is that certain aspects of it cut corners, e.g. the rule-based PLN probabilistic reasoning engine versus an actual Bayes-net updating engine à la Pearl et al.
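To make the contrast concrete: by "Bayes-net updating à la Pearl" I mean exact probabilistic conditioning on evidence, rather than applying local inference rules. A minimal sketch (my own toy example, not part of CogPrime or PLN), using the classic Rain → WetGrass network:

```python
# Toy two-node Bayesian network: Rain -> WetGrass.
# All numbers here are illustrative assumptions.

P_rain = 0.2                 # prior P(Rain)
P_wet_given_rain = 0.9       # P(WetGrass | Rain)
P_wet_given_dry = 0.1        # P(WetGrass | not Rain)

# Observe WetGrass = true; update belief in Rain by exact Bayes' rule:
# P(Rain | Wet) = P(Wet | Rain) P(Rain) / P(Wet)
evidence = P_wet_given_rain * P_rain + P_wet_given_dry * (1 - P_rain)
posterior_rain = P_wet_given_rain * P_rain / evidence

print(round(posterior_rain, 3))  # prints 0.692
```

A rule-based engine like PLN instead propagates truth-value tuples through inference rules, which approximates this calculation but need not agree with it exactly; that approximation is the corner-cutting I have in mind.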

Comment author: Stuart_Armstrong · 09 July 2014 09:32:56AM · 0 points

Thanks!