jacob_cannell comments on Concept Safety: Producing similar AI-human concept spaces - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The paper you linked to is long-winded. I jumped to the section titled "Do Modules Require Their Own Genes?". I skimmed a bit and concluded that the authors were missing huge tracts of key recent knowledge from computational and developmental neuroscience and machine learning, and as a result they are fumbling in the dark.
Learning will automatically develop any number of specialized capabilities as a natural, organic process of interacting with the environment. Machine learning provides us with concrete, specific knowledge of how this process actually works. The simplest explanation for autism inevitably involves disruptions to learning machinery, not disruptions to preconfigured "people interaction modules".
Again to reiterate - obviously there are preconfigured modules - it is just that they necessarily form a tiny portion of the total circuitry.
Perhaps, perhaps not. Certainly genetics specifies a prior over model space. You can think of evolution as wanting to specify as much as it can, but with only a tiny amount of code. So it specifies the brain in a sort of ultra-compressed hierarchical fashion. The rough number of main modules, neuron counts per module, and gross module connectivity are roughly pre-specified, and then within each module there are just a few types of macrocircuits, each of which is composed of a few types of repeating microcircuits, and so on.
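To make the compression idea concrete, here is a toy sketch (module names and numbers are entirely made up, not claims about real neuroanatomy): a tiny "genome" spec lists a few module types, counts, and microcircuit templates, and a short expansion routine unrolls it into a much larger architecture description.

```python
# Hypothetical illustration: a small hierarchical spec expands into a
# large repeated structure, the way a compressed genetic code could
# specify gross architecture without specifying every circuit.
genome = {
    # (module type, how many copies of it) - invented example values
    "modules": [("cortex_like", 4), ("cerebellum_like", 2)],
    # microcircuit template per module type: layer widths to repeat
    "microcircuit": {"cortex_like": [64, 64], "cerebellum_like": [32]},
}

def expand(genome):
    """Unroll the compressed spec into a flat list of layer widths."""
    layers = []
    for mtype, count in genome["modules"]:
        for _ in range(count):
            layers.extend(genome["microcircuit"][mtype])
    return layers

arch = expand(genome)  # 10 layers specified by a handful of numbers
```

The point is only the ratio: the spec is a few lines, while the expanded structure can be made arbitrarily large by bumping the repeat counts.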
Using machine learning as an analogy: to solve a specific problem we typically design a general architecture, a prior over model space, that we believe is well adapted to the problem. Then we use a standard optimization engine, like SGD, to handle the inference/learning given that model. The learning algorithms are very general purpose and cross-domain.
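That separation can be sketched in a few lines (a toy linear-regression example of my own, not from the comment): the architecture and its gradient encode the problem-specific prior, while the SGD routine is completely generic and knows nothing about the model it is optimizing.

```python
import numpy as np

def sgd(grad_fn, params, lr=0.1, steps=200):
    """Generic optimization engine: only sees gradients, not the model."""
    for _ in range(steps):
        params = params - lr * grad_fn(params)
    return params

# Problem-specific prior: we assume y is linear in x (true w=3, b=1).
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 1.0

def grad(params):
    """Gradient of mean squared error for the linear model."""
    w, b = params
    err = (w * x + b) - y
    return np.array([np.mean(err * x), np.mean(err)])

params = sgd(grad, np.zeros(2))  # converges near [3.0, 1.0]
```

The same `sgd` function would optimize any differentiable model; only `grad` (the prior) changes per problem.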
The distinction between the 'model prior' and the 'learning algorithm' is not always so clear-cut, and some interesting successes in the field of metalearning suggest that there indeed exist highly effective specialized learning algorithms for at least some domains.
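A minimal sketch of the metalearning idea (my own toy example, not any specific published method): an inner loop runs plain SGD on a task, and an outer loop searches over the learning algorithm itself, here just its learning rate, so the "learner" is itself learned.

```python
import numpy as np

def inner_loss(lr, steps=20):
    """Inner loop: run SGD on a simple quadratic loss w**2, starting
    from w=5, and return the final loss achieved with this lr."""
    w = 5.0
    for _ in range(steps):
        w -= lr * 2 * w  # gradient of w**2 is 2w
    return w ** 2

# Outer (meta) loop: crude search over learning rates. Real metalearning
# methods use gradients or learned update rules instead of grid search.
candidate_lrs = np.linspace(0.01, 0.45, 45)
best_lr = min(candidate_lrs, key=inner_loss)
```

Grid search stands in here for the outer optimizer; the structural point is the two nested levels, with the outer level shaping the learning algorithm the inner level uses.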