torekp comments on Concept Safety: Producing similar AI-human concept spaces - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I agree that there seems to be good evidence for the 'one learning algorithm' hypothesis... but there also seems to be reasonable evidence for modules specialized for particular tasks that were evolutionarily useful. The most obvious example would be the extent to which we seem to have specialized reasoning capacity for modeling and interacting with other people, capacity which is impaired to varying extents in people on the autistic spectrum.
Even if one does assume that the cortex uses the same learning algorithms for literally everything, one would still expect the parameters and properties of those algorithms to be at least partially genetically tuned towards the kinds of learning tasks that were most useful in the EEA (though of course the environment should be expected to carry out further tuning of said parameters). Nor would the brain learning everything with the same algorithms disprove the notion that there could exist alternative algorithms better optimized for learning e.g. abstract mathematics, which could also employ a representation better suited to abstract math, at the cost of being worse at the more general kind of learning that was most useful in the EEA.
Compare jacob_cannell's earlier point that
Do we know or can we reasonably infer what those optimization criteria were like, so that we can implement them into our AI? If not, how likely and by how much would we expect the optimal solution to change?