I believe that AI research has not given sufficient attention to learning directly from biology, particularly through the direct observation and manipulation of neurons in controlled environments. Furthermore, even after learning all that biology has to offer, neurons could still play a part in the post-TAI world economy, as they could be cheaper and faster to grow than chips are to manufacture.
Pre-TAI – study neurons to greatly increase learning capability
As I have said elsewhere on this site, I believe that the current transformer architecture will not scale to TAI, because it does not learn fast enough or generalize well enough from data compared to biology. For example, Tesla Autopilot has been trained on over 10,000 times more driving data than a human encounters in a lifetime, yet it still falls short of human-level performance. I don’t think this is because of anything Tesla is doing wrong in its training. Biology, or the “neural code”, is simply much better at generalizing quickly from high-bandwidth, correlated, unstructured data.
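To make the scale of that gap concrete, here is a back-of-envelope comparison in Python. Every figure is an assumption chosen for illustration (fleet dataset size, average speed, hours a person spends driving), not a measured value:

```python
# Order-of-magnitude comparison: driving data available to a fleet-trained
# system vs. what one human sees in a lifetime. All inputs are assumptions.

human_hours = 1.0 * 365 * 50          # ~1 hour of driving per day for 50 years
fleet_miles = 1e10                    # assumed fleet dataset, ~10 billion miles
avg_speed_mph = 30                    # assumed average speed
fleet_hours = fleet_miles / avg_speed_mph

print(f"human lifetime: {human_hours:,.0f} hours")
print(f"fleet dataset:  {fleet_hours:,.0f} hours")
print(f"ratio:          {fleet_hours / human_hours:,.0f}x")
# With these assumptions the ratio is ~18,000x, the same ballpark as the
# >10,000x figure above -- yet the human still generalizes better.
```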
If we could learn the details of how biology does it, we would get a massive increase in capability. One of the most prominent examples of directly controlling neurons is Cortical Labs’ DishBrain project; one article on the work puts it this way:
“Not only can you get meaningful learning, you get that meaningful learning incredibly rapidly, sort of in line with what you might expect for biological intelligence.”
As far as I am aware, they are not directly trying to crack the neural code, but are focusing on other applications, even providing an API through which you can control neurons. Given the massive budgets now being spent on getting to AGI, I believe there is a significant missed opportunity here. Characterizing how such neurons learn across a complete range of inputs, and comparing that to state-of-the-art AI, would clarify the differences.
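As a sketch of what that characterization could look like, the snippet below measures sample efficiency (trials to reach a success criterion) in a way that could be applied identically to a neural culture and an AI baseline. The `run_trial` callables and the culture interface are hypothetical placeholders, not Cortical Labs’ actual API:

```python
# Minimal harness for comparing sample efficiency between learners.
# Each learner is represented as a zero-argument callable returning
# True/False for success on one trial; a culture-backed version would
# wrap a (hypothetical) stimulation/spike-readout API.

def trials_to_criterion(run_trial, criterion=0.8, window=20, max_trials=2000):
    """Trials until the hit rate over the last `window` trials reaches `criterion`."""
    hits = []
    for t in range(1, max_trials + 1):
        hits.append(bool(run_trial()))
        recent = hits[-window:]
        if len(recent) == window and sum(recent) / window >= criterion:
            return t
    return None  # criterion never reached

# Usage (both callables hypothetical):
# n_bio = trials_to_criterion(culture.play_one_pong_rally)
# n_ai  = trials_to_criterion(rl_agent.play_one_pong_rally)
# print(n_bio, n_ai)
```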
Although it has long been known that the brain adapts its structure to its inputs, experiments along these lines provide further opportunities for valuable insights. The idea is that connectome scans are taken at various stages of brain rewiring, and both the amount of data required to reach each stage and the resulting brain structures are quantified. This could give insight into how more complicated brain structures form than can be studied in the smaller DishBrain setups.
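A minimal sketch of that quantification step, assuming each scan can be loaded as a graph (the loader and scan iterator are hypothetical placeholders; the statistics use the real networkx library):

```python
import networkx as nx

def wiring_stats(g: nx.Graph) -> dict:
    """Structural statistics worth tracking across rewiring stages."""
    n = g.number_of_nodes()
    return {
        "neurons": n,
        "synapses": g.number_of_edges(),
        "mean_degree": 2 * g.number_of_edges() / max(n, 1),
        "clustering": nx.average_clustering(g),
    }

# Hypothetical driver: pair each scan with the data the tissue had seen.
# for stage, data_seen in enumerate(scans):
#     g = load_connectome(stage)          # placeholder loader
#     print(stage, data_seen, wiring_stats(g))
```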
Post-TAI – could neurons be cheaper and faster to grow than chips?
Post-TAI, neurons could remain highly useful. I think it helps to contrast technology and manufacturing with biology and growth in terms of speed of construction and useful material in a 3D volume. While technology excels at providing cheap, strong materials like metals, biology is uniquely suited to creating complex three-dimensional structures through natural growth. If you create a structure by adding layers, its volume grows linearly with time; if it instead grows from within, as dividing cells do, the growth is closer to exponential, as the toy calculation below shows.
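Assume one unit of volume added per time step for layering, and one doubling per time step for growth from within. Both numbers are arbitrary; only the shapes of the curves matter:

```python
def layered_volume(steps, layer=1.0):
    return steps * layer          # linear: one layer added per step

def grown_volume(steps, seed=1.0):
    return seed * 2 ** steps      # exponential: doubles from within each step

for t in (1, 10, 20, 30):
    print(f"t={t:>2}  layered={layered_volume(t):>4.0f}  grown={grown_volume(t):.2e}")
# By t=30 the layered structure has 30 units of volume; the grown one ~1e9.
```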
Also in terms of raw specs, you can compare data storage per unit volume for DNA in a cell against transistors in a chip, or synapses per unit volume against GPUs and their supporting hardware. These comparisons come out far more favorably for biology than asking biology to produce the usual outputs of industrial civilization.
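A rough version of the numbers, all of them commonly cited order-of-magnitude estimates used here as assumptions rather than precise measurements:

```python
# Back-of-envelope densities. Every constant is an order-of-magnitude
# assumption, not a precise value.

dna_bits_per_gram = 1e21        # theoretical DNA storage density, ~10^21 bits/g
flash_bits_per_gram = 1e13      # ~1 TB-class NAND per gram of die, roughly

brain_synapses = 1e14           # ~10^14 synapses in a human brain
brain_volume_cm3 = 1200         # ~1.2 litres

print(f"DNA vs flash, bits per gram:  ~{dna_bits_per_gram / flash_bits_per_gram:.0e}x")
print(f"synapses per cm^3 of brain:   ~{brain_synapses / brain_volume_cm3:.0e}")
```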
Specifically, if there were large demand for the kind of computation that neurons can perform, that demand might be met faster by growing neurons than by building new fabs. A hybrid approach like DishBrain could be used, and I expect neurons would be especially useful for robotics. If AI were to develop superhuman capabilities in biology, such systems could be refined and scaled before new fabrication facilities could even be built. That would be a pretty ironic early-Singularity outcome!
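For a sense of the timescales, compare a cell-doubling schedule with fab construction. The numbers below are loose assumptions (mature neurons don’t divide, but neural progenitor cultures double on roughly a daily timescale under good conditions, while a leading-edge fab takes years to build):

```python
import math

start_cells = 1e6            # assumed starting culture size
target_cells = 1e15          # assumed scale worth deploying
doubling_time_days = 1.0     # assumed progenitor doubling time

doublings = math.log2(target_cells / start_cells)
print(f"~{doublings:.0f} doublings, ~{doublings * doubling_time_days:.0f} days")
# ~30 doublings, ~30 days under these assumptions -- versus the multi-year
# timescale of constructing a new fabrication plant.
```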