One-line summary: NNs can transmit signals directly from any part of the network to any other, while the brain has to work only locally.
More broadly, I get the sense that there's been a bit of a shift in at least some parts of theoretical neuroscience: from understanding how we might implement brain-like algorithms, to understanding how the local algorithms the brain uses might approximate backprop. This suggests that artificial networks might have an easier time than the brain, so it would make sense that we could build something that outcompetes the brain without a similar diversity of neural structures.
This is way outside my area, tbh; I'm working off just a couple of things, like this paper by Beren Millidge (https://arxiv.org/pdf/2006.04182.pdf) and some comments by Geoffrey Hinton that I can't source.
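For concreteness, here's a minimal toy sketch of the locality point (my own example, not from the linked paper): the backprop gradient for an early layer depends on downstream weights and the output error, which are not available at that layer's synapses, while a Hebbian-style rule uses only local pre- and post-synaptic activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer linear network: x -> h = W1 @ x -> y = W2 @ h
x = rng.normal(size=3)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
target = rng.normal(size=2)

h = W1 @ x
y = W2 @ h
err = y - target  # error signal lives at the output, not at layer 1

# Backprop update for W1 (squared-error loss): requires W2's weights
# and the output error, i.e. information that is NOT local to W1.
grad_W1_backprop = np.outer(W2.T @ err, x)

# A purely local (Hebbian-style) update for W1: only the pre-synaptic
# activity x and post-synaptic activity h at that layer are used.
lr = 0.01
delta_W1_hebb = lr * np.outer(h, x)
```

The interesting question in the literature above is how local rules of the second kind might nevertheless end up approximating updates of the first kind.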
So in your model, how much of the progress to AGI can be made just by adding more compute, more data, working memory, and algorithms that 'just' keep up with the scaling?
Specifically, do you think that self-reflective thought already emerges from adding those?
First, brains (and biological systems more generally) have many constraints that artificial networks do not. Brains exist in the context of a physically instantiated body, with heavy energy constraints. Further, they exist in specific niches, with particular evolutionary histories, which have enormous effects on structure and function.
Second, biological brains have different types of intelligence from AI systems, at least currently. A bird is able to land fluidly on a thin branch in windy conditions, while GPT-4 can help you code. In general, the intelligences that one thinks of in the context of AGI do not totally overlap with the varied, often physical and metabolic, intelligences of biology.
All that being said, who knows what future AI systems will look like.
Sure, it's not necessary that a sufficiently advanced AI has to work like the brain, but there has to be an intuition about why it is not needed to at least create a utility maximizer.
An octopus's brain is nothing like a mammal's, and yet octopuses are comparably intelligent.
Yeah, but I would need more specificity than just an example of a brain with a different design.
without the apparent complexity of the brain structures that enable general intelligence in humans
Can you specify which brain structures you mean by that? Doesn't neural network training produce whatever useful complexity is needed, as a result of selecting for better performance on the training objective? (Same as with human evolution.)
Can you quote any source that provides evidence for that conclusion?
The process of evolution optimised the structures of the brain themselves across generations; training is equivalent only to the development of the individual. The structures of the brain seem to be determined by more than development alone. That's one reason why I said "apparent complexity". From Yudkowsky:
- "Metacognitive" is the optimization that builds the brain - in the case of a human, natural selection; in the case of an AI, either human programmers or, after some point, the AI itself.
I don't have a source; it's just intuitive, given that evolution is an example of a training process and human brains are neural networks.
If the general capabilities necessary for effective self-improvement, or to get an AGI directly, can be reached without the apparent complexity of the brain structures that enable general intelligence in humans (just with memory, more data, compute, and some algorithmic breakthroughs, or even none), I wonder why those structures are not needed.
Sure, it's not necessary that a sufficiently advanced AI has to work like the brain, but there has to be an intuition about why those neural structures are not needed to at least create an autonomous utility maximizer if you are going to defend short timelines.