IABI says: "Transistors, a basic building block of all computers, can switch on and off billions of times per second; unusually fast neurons, by contrast, spike only a hundred times per second. Even if it took 1,000 transistor operations to do the work of a single neural spike, and even if artificial intelligence was limited to modern hardware, that implies human-quality thinking could be emulated 10,000 times faster on a machine— to say nothing of what an AI could do with improved algorithms and improved hardware.
@EigenGender says "aahhhhh this is not how any of this works" and calls it an "egregious error". Another poster says it's "utterly false."
(Relevant online resources text.)
(Potentially relevant LessWrong post.)
I am confused about what the issue is, and it would be awesome if someone could explain it to me.
Where I'm coming from, for context:
* We don't know exactly what the relevant logical operations in the human brain are. The model of the brain that says there are binary spiking neurons with direct synapse->dendrite connections, and that those connections are akin to floating-point numerical weights, is clearly a simplification, albeit a powerful one. (IIUC, "neural nets" in computers discard the binary spikes and use a different model in which the spike rate is treated as a numerical value, which is the basic story behind "neuron activation" in a modern system. This simplification also seems powerful, though it is surely an oversimplification in some ways.)
* My main issue with the source text is that it ignores what is possibly the bigger bottleneck on processing speed: the time it takes to move information from one area to another. (If my model is right, one of the big advantages of an MoE architecture is that it reduces how much the weights get thrashed across the bus to and from the GPU, which can be a major bottleneck.) However, on this front I think nerves are still clearly inferior to wires? Even myelinated neurons have a typical spe