Mark_Friedenbach comments on Will AGI surprise the world? - Less Wrong

Post author: lukeprog, 21 June 2014 10:27PM


Comment author: [deleted] 25 June 2014 02:26:41PM 0 points

One imagines that software could do what humans do -- hunt around in the space of optimizations until one looks plausible, try to find a proof, and then, if it takes too long, try another. This won't necessarily enumerate the set of provable optimizations (much less the set of all valid optimizations), but it will produce some.
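That hunt-and-test loop can be sketched as follows. Everything here is a hypothetical illustration (the toy reference function, the candidate rewrites, and the time budget are all made up), not code from any actual optimizer: each "optimization" is a candidate rewrite, and the "proof" is stood in for by a bounded counterexample search that gives up when its budget runs out.

```python
import time

def reference(x):
    """The function we want to optimize: a stand-in for real code."""
    return x * 2

# Candidate rewrites found by "hunting around"; one is subtly wrong.
CANDIDATES = [
    ("shift", lambda x: x << 1),             # equivalent for ints
    ("add",   lambda x: x + x),              # equivalent
    ("wrong", lambda x: x * 2 + (x == 7)),   # fails at x == 7
]

def try_prove_equivalent(candidate, budget_seconds=0.1):
    """Stand-in for a proof attempt: exhaustively search a small input
    domain for a counterexample. Returns True if none is found, False
    on a counterexample, and None if the time budget runs out first."""
    deadline = time.monotonic() + budget_seconds
    for x in range(10_000):
        if time.monotonic() > deadline:
            return None  # proof took too long; move on to the next one
        if candidate(x) != reference(x):
            return False
    return True

def hunt_for_optimizations():
    """Keep only the candidates whose 'proof' succeeded in time."""
    return [name for name, fn in CANDIDATES
            if try_prove_equivalent(fn)]
```

The point of the sketch is the control flow, not the verifier: a real system would plug in an actual theorem prover where the counterexample search sits, and the timeout is what keeps the loop from getting stuck on any one candidate.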

To do that, it's going to need a decent sense of probability and expected utility. The problem is that OpenCog (and SOAR, too, when I saw it) is still based on a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.

Comment author: [deleted] 25 June 2014 03:37:09PM 1 point

The problem is that OpenCog (and SOAR, too, when I saw it) is still based on a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.

Uh, what were you looking at? The basic foundation of OpenCog is a probabilistic logic called PLN (Probabilistic Logic Networks; the wrong one to be using, IMHO, but a probabilistic logic nonetheless). Everything in OpenCog is expressed and reasoned about in probabilities.

Comment author: [deleted] 25 June 2014 08:39:20PM 1 point

Aaaaand now I have to go look at OpenCog again.