Yes, I've read your big universal learner post, and I'm not convinced.
Do you actually believe that evolved modularity is a better explanation of the brain than the ULM hypothesis? Do you have evidence for this belief, or is it simply that which you want to be true? Do you understand why the computational neuroscience and machine learning folks are moving away from evolved modularity towards universal learning? If you do have evidence, please provide it in a critique in the comments for that post, where I will respond.
First off, you're seriously misrepresenting the success of deep learning as support for your thesis. Deep learning algorithms are extremely powerful, and probably have a role to play in building AGI, but they aren't the be-all and end-all of AI research.
Make some specific predictions for the next 5 years about deep learning or ANNs. Let us see if we actually have significant differences of opinion. If so, I expect to dominate you in any prediction market or bets concerning the near-term future of AI.
Right off the bat: you absolutely can create an AGI that is a pure ANN. In fact, the most successful early precursor to AGI that we have - the DeepMind Atari agent - is a pure ANN. Your claim that ANNs/deep learning are not the be-all and end-all of AGI research is quickly becoming a minority position.
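To make "pure ANN" concrete, here is a minimal sketch of that agent's network. The layer sizes follow the published DQN architecture (Mnih et al. 2015), but the code itself is my own illustrative reconstruction in PyTorch, not DeepMind's: raw pixels go in, Q-values come out, and nothing game-specific is hard-coded anywhere.

```python
# Sketch of a DQN-style network (layer sizes per Mnih et al. 2015;
# this reconstruction is illustrative, not DeepMind's actual code).
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Pixels in, Q-values out - no hand-built game modules anywhere."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),  # 4 stacked 84x84 frames
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),  # one Q-value per joystick action
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames)

q_values = DQN(n_actions=18)(torch.zeros(1, 4, 84, 84))  # shape (1, 18)
```

The same weights-plus-gradient-descent recipe learns dozens of different games; the only thing that changes between games is the data.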
Humans can learn echolocation, but they can't learn echolocation the way bats and dolphins can learn echolocation.
No true Scotsman!
The real test here would be to take a brain and give it an entirely new sense.
Notably, the general learner hypothesis does not explain why non-surgically-modified brains are so standardized in structure and functional layout, something that you yourself bring up in your article.
I discussed this in the comments - it absolutely does explain neurotypical standardization. It is a result of topographic/geometric wiring optimization: there is an exactly optimal location for every piece of functionality, and the brain tends to find those same optimal locations in each human. But if you significantly perturb the input senses or the brain geometry, you can get radically different results.
Consider the case of extreme hydrocephaly, where fluid fills the center of the skull, replaces most of the brain, and squeezes the remainder out to a thin surface near the skull. And yet these patients can have above-average IQs. Optimal dynamic wiring can explain this: the brain is constantly doing global optimization across the wiring structure, adapting to even extreme deformations and damage (the toy sketch below illustrates both points). How does evolved modularity explain this?
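A toy sketch of that argument (my own illustration, not a brain model: the greedy swap search, the random connectivity matrix, and the "squeezed" geometry are all made-up assumptions). The point is just that a single cost-minimizing process finds consistently low-cost layouts for a given geometry, and keeps doing so when the geometry is deformed:

```python
# Toy wiring-length minimization: place n "modules" on n sites so that
# strongly connected modules end up close together. All numbers are
# illustrative assumptions, not neuroscience.
import numpy as np
from itertools import combinations

def wiring_cost(assign, sites, C):
    """Total wiring: connection strength * Euclidean distance, over all pairs."""
    pos = sites[assign]
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    return (C * d).sum() / 2.0

def optimal_layout(sites, C, seed):
    """Greedy pairwise-swap descent from a random initial placement."""
    rng = np.random.default_rng(seed)
    assign = rng.permutation(len(sites))
    best, improved = wiring_cost(assign, sites, C), True
    while improved:
        improved = False
        for i, j in combinations(range(len(assign)), 2):
            assign[[i, j]] = assign[[j, i]]        # try swapping two modules
            cost = wiring_cost(assign, sites, C)
            if cost < best - 1e-12:
                best, improved = cost, True        # keep the improvement
            else:
                assign[[i, j]] = assign[[j, i]]    # undo the swap
    return assign, best

rng = np.random.default_rng(0)
n = 10
C = rng.random((n, n)); C = (C + C.T) / 2.0; np.fill_diagonal(C, 0)

normal   = rng.random((n, 2))             # a "typical" geometry
deformed = normal * np.array([5.0, 0.2])  # the same sites, squeezed flat

for name, sites in [("normal", normal), ("deformed", deformed)]:
    costs = [round(optimal_layout(sites, C, s)[1], 3) for s in range(5)]
    print(name, costs)  # different random starts land on similar low costs
```

The optimizer does not care that the deformed geometry looks pathological; it simply finds the best layout available, which is the shape of the hydrocephaly observation.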
It also obviously has hard-coded specialized modules, to some degree, which is why (for example) all human cultures develop language and music; that isn't something you'd expect if we were all starting from zero.
This is nonsense - language processing develops in general-purpose cortical modules; there is no language-specific circuitry.
There is a small amount of innate circuit structure, mainly in the brainstem, which implements innate algorithms, especially for walking behavior.
The question is which aspect dominates brain performance.
This is rather obvious: it depends on the ratio of pure learning structures (cortex, hippocampus, cerebellum) to innate circuit structures (brainstem, some of the midbrain, etc.). In humans, 95% or more of the circuitry is general-purpose learning machinery (the back-of-envelope count below checks this).
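That ratio can be sanity-checked against published neuron counts (figures from Herculano-Houzel 2009; lumping cortex plus cerebellum together as the "learning" circuitry is my own simplification for illustration, not an exact anatomical division):

```python
# Back-of-envelope neuron count (figures from Herculano-Houzel 2009).
# Treating cortex + cerebellum as "learning" circuitry is a simplifying
# assumption for illustration, not an exact anatomical claim.
cortex     = 16.3e9  # neocortical neurons
cerebellum = 69.0e9  # cerebellar neurons
rest       = 0.7e9   # brainstem, midbrain, and other subcortical structures

learning_fraction = (cortex + cerebellum) / (cortex + cerebellum + rest)
print(f"{learning_fraction:.1%}")  # ~99.2% by neuron count
```

By neuron count the split is even more lopsided than the 95% figure above, though by circuit count or synapse count the exact number would differ.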
What about Watson?
Not an AGI.
Finally, I don't have the background to refute your argument on the efficiency of the brain (although I know clever people who do, and who disagree with you).
The correct thing to do here is to update. Instead, you are searching for ways to ignore the evidence.
But, taking it as a given that you're right, it sounds like you're assuming all future AIs will draw the same amount of power as a real brain and fit in the same spatial footprint.
Obviously not - in theory, given a power budget, you can split it up into N AGIs or spend it all on one big AGI. In practice, due to parallel scaling limitations, there is always some optimal N. Even on a single GPU today, you need an N of about 100 or more to get good performance.
You can't just invest all your energy into one big AGI and expect better performance - that is a mind-numbingly naive strategy (the toy model below illustrates the tradeoff).
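A toy model of that tradeoff (everything here - the budget, the serial fraction, the log-value function - is a made-up assumption for illustration, not drawn from real hardware): per-agent speedup saturates Amdahl-style as you throw more parallel hardware at a single agent, while value per agent has diminishing returns, so some finite N in between wins.

```python
# Toy model of splitting a fixed compute budget across N agents.
# All constants are illustrative assumptions, not measured numbers.
import math

BUDGET = 10_000          # total processing units (hypothetical)
SERIAL_FRACTION = 0.001  # per-agent work that cannot be parallelized

def agent_speedup(p: float) -> float:
    """Amdahl's law: speedup from p parallel units."""
    return 1.0 / (SERIAL_FRACTION + (1.0 - SERIAL_FRACTION) / p)

def total_value(n: int) -> float:
    """n agents, each with BUDGET/n units. Log-value models diminishing
    returns to single-agent speed (an assumption, not a known law)."""
    return n * math.log(agent_speedup(BUDGET / n))

best_n = max(range(1, BUDGET + 1), key=total_value)
print(f"optimal N = {best_n}")  # interior optimum: neither 1 giant agent nor 10,000 tiny ones
```

Change the constants and the optimal N moves, but the qualitative point survives: the best strategy is some intermediate split, not one monolithic agent.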
To sum up: yes, I've read your thing. No, it's not as convincing as you seem to believe.
Update, provide counter-evidence, or stop wasting my time.
In fact, the most successful early precursor to AGI that we have - the DeepMind Atari agent - is a pure ANN.
People have been using ANNs for reinforcement learning tasks since at least the TD-Gammon system, with varying success. The DeepMind Atari agent is bigger and the task is sexier, but calling it an early precursor to AGI seems far-fetched.