
Epictetus comments on Open Thread, Jul. 6 - Jul. 12, 2015 - Less Wrong Discussion

5 Post author: MrMind 06 July 2015 07:31AM




Comment author: Epictetus 07 July 2015 06:09:27AM 4 points

Recently came across Valiant's A Theory of the Learnable. It covers a method of machine learning along the following lines: given a class of objects that either possess some property P or do not, a machine can learn to recognize P with arbitrarily small error simply from randomly drawn examples of objects that possess it. The learned recognizer may give false negatives, but will not give a false positive. Perhaps the following passage best illustrates the concept:

Consider a world containing robots and elephants. Suppose that one of the robots has discovered a recognition algorithm for elephants that can be meaningfully expressed in k-conjunctive normal form. Our Theorem A implies that this robot can communicate its algorithm to the rest of the robot population by simply exclaiming "elephant" whenever one appears.

The mathematics is done in terms of Boolean functions, and "k-conjunctive normal form" (a conjunction of clauses with at most k literals each) is the technical condition that makes the theorem go through.
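For concreteness, here is a minimal sketch (my own illustration in Python, not Valiant's code) of the elimination idea behind Theorem A: start with every clause of at most k literals and delete any clause falsified by a positive example. Since every clause of the true formula survives elimination, the hypothesis can only err by rejecting genuine positives. The target concept below is an assumption chosen for the demo.

```python
from itertools import combinations

def all_clauses(n, k):
    """Every disjunctive clause of 1..k literals over n Boolean variables.
    A literal is (variable_index, wanted_value); a clause is a frozenset."""
    literals = [(i, b) for i in range(n) for b in (True, False)]
    clauses = set()
    for size in range(1, k + 1):
        for combo in combinations(literals, size):
            # Drop tautologies that mention both x_i and not-x_i.
            if len({i for i, _ in combo}) == size:
                clauses.add(frozenset(combo))
    return clauses

def satisfies(clause, x):
    return any(x[i] == b for i, b in clause)

def learn_k_cnf(positive_examples, n, k):
    """Elimination: keep only clauses consistent with every positive
    example; the hypothesis is the conjunction of the survivors."""
    hypothesis = all_clauses(n, k)
    for x in positive_examples:
        hypothesis = {c for c in hypothesis if satisfies(c, x)}
    return hypothesis

def predict(hypothesis, x):
    return all(satisfies(c, x) for c in hypothesis)

# Example target concept (my choice): f(x) = x0 and (x1 or x2), a 2-CNF.
target = lambda x: bool(x[0] and (x[1] or x[2]))
grid = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
positives = [x for x in grid if target(x)]

h = learn_k_cnf(positives, n=3, k=2)

# One-sided error: everything the hypothesis accepts truly has the property.
for x in grid:
    assert not predict(h, x) or target(x)
```

In the paper the examples are drawn from an unknown distribution and the error bound is probabilistic; feeding in every positive example, as above, just makes the one-sided-error property easy to see.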

What struck me was that the learning can take place without the learner knowing the definition of the concept to be learned: a thing can be identified with probability arbitrarily close to 1 without the learner necessarily being able to formulate a definition. I was reminded of the judge who said that he could not define pornography, but he knew it when he saw it. There are plenty of other concepts I can think of where identification is easy (most of the time, at least) but which defy precise definition.

I'm usually wary of applying scientific results to philosophy, especially where I'm not an expert. Any expert input on whether this is a fair interpretation of the subject would be appreciated.

Comment author: Houshalter 08 July 2015 12:23:34AM 1 point

EY made a whole series of posts about this idea: A Human's Guide to Words. Specifically, see The Cluster Structure of Thingspace.

Comment author: MrMind 07 July 2015 07:14:28AM 0 points

I've yet to read the paper, so forgive me if this is covered there (I will read it), but the situation is similar to NP problems: hard to find a solution, easy to check one.
Maybe concepts are NP problems encoded in our brains.
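To make that find/check asymmetry concrete, here is a small sketch (my own illustration, not from the paper): verifying a candidate assignment for a CNF formula takes time linear in the formula's size, while the obvious way to find one searches through all 2**n assignments. The formula below is an arbitrary example.

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a list of
# (variable_index, wanted_value) literals. This one is
# (x0 or not x1) and (x1 or x2) and (not x0 or not x2).
formula = [[(0, True), (1, False)],
           [(1, True), (2, True)],
           [(0, False), (2, False)]]

def check(formula, assignment):
    """Verifying a candidate solution: linear in the formula size."""
    return all(any(assignment[i] == b for i, b in clause)
               for clause in formula)

def solve(formula, n):
    """Finding a solution: naive search over all 2**n assignments."""
    for bits in product([False, True], repeat=n):
        if check(formula, bits):
            return bits
    return None
```

The analogy to the learning setting: recognizing an instance of a concept is like `check`, while articulating a definition that carves out exactly the right set is more like `solve`.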