jacob_cannell comments on Steelmaning AI risk critiques - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Hardly. It can learn a wide variety of tasks - many at above human level - in a variety of environments - all with only a few million neurons. It was on the cover of Nature for a reason.
Remember a mouse brain has the same core architecture as a human brain. The main components are all there and basically the same - just smaller - and with different size allocations across modules.
From what I've read the topology is radically deformed, modules are lost, timing between the remaining modules is totally changed - it's massive brain damage. It's so weird that they can even still think that it has led some neuroscientists to seriously consider that cognition comes from something other than neurons and synapses.
Not at all - relearning language would take at least as much time and computational power as learning it in the first place. Language is perhaps the most computationally challenging thing that humans learn - it takes roughly a decade to reach a highly fluent adult level. Children learn faster - they have far more free cortical capacity. All of this is consistent with the ULH, and I bet it could even vaguely predict the time required for relearning language - although measuring the exact extent of damage to language centers is probably difficult.
Absolutely not - because you can look at the typical language modules under the microscope, and they are basically the same as the other cortical modules. Furthermore, there is no strong case for any mechanism that could encode significant genetically predetermined task-specific wiring complexity into the cortex. It is just like an ANN - the initial wiring is random, and the modules are all basically the same.