This is to announce a $250 prize for spot-checking, or otherwise in-depth reviewing, Jacob Cannell's technical claims concerning thermodynamic and physical limits on computation, and his claim of the biological efficiency of the brain, in his post Brain Efficiency: Much More Than You Wanted To Know.
I've been quite impressed by Jake's analysis ever since it came out, and I have been puzzled by how little discussion there has been of it, since if true it seems quite important. That said, I have to admit I personally cannot assess whether the analysis is correct. This is why I am announcing this prize.

Whether Jake's claims concerning DOOM & FOOM really follow from his analysis is up for debate. Regardless, it seems to me to have large implications for how the future might go and for what future AI will look like.
- I will personally judge whether I think an entry warrants a prize.[1]
- If you are also interested in seeing this situation resolved, I encourage you to increase the prize pool!
EDIT: some clarifications
- You are welcome to discuss DOOM & FOOM and the relevance (or lack thereof) of Jake's analysis, but note that I will only consider (spot)checking of Jacob Cannell's technical claims.
- In case of multiple serious entries, I will do my best to split the prize money fairly.
- Note that I will not be judging who is right. Instead, I will judge whether an entry has seriously engaged with Jacob Cannell's technical claims in a way that moves the debate forward. That is, I will award points for pushing the depth of the debate tree beyond where it was before.
- By technical claims I mean all technical claims made in the brain efficiency post, broadly construed, as well as claims made by Jacob Cannell in other posts and comments. These include especially: limits to energy efficiency, interconnect losses, the Landauer limit, convection vs. blackbody radiation, claims concerning the effective working memory of the human brain versus that of computers, the end of Moore's law, CPU vs. GPU vs. neuromorphic chips, etc.
Here's Jacob Cannell's own summary of his claims:
1.) Computers are built out of components which are also just simpler computers, which bottoms out at the limits of miniaturization in minimal molecular sized (few nm) computational elements (cellular automata/tiles). Further shrinkage is believed impossible in practice due to various constraints (overcoming these constraints if even possible would require very exotic far future tech).
2.) At this scale the Landauer bound represents the ambient-temperature-dependent noise (which can also manifest as a noise voltage). Reliable computation at speed is only possible using non-trivial multiples of this base energy, for the simple reasons described by Landauer and elaborated on in the other refs in my article.

3.) Components can be classified as computing tiles or interconnect tiles, where the latter is simply a computer that computes the identity function but moves the input to an output in some spatial direction. Interconnect tiles can be irreversible or reversible, but the latter has enormous tradeoffs in size (i.e. optical) and/or speed or other variables, and is thus not used by brains or GPUs/CPUs.

4.) Fully reversible computers are possible in theory but have enormous negative tradeoffs in size/speed due to (1) the need to avoid erasing bits throughout intermediate computations, (2) the lack of immediate error correction (achieved automatically in dissipative interconnect by erasing at each cycle), leading to error buildup which must be corrected/erased (costing energy), and (3) high sensitivity to noise/disturbance due to (2).
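For concreteness, the room-temperature Landauer figure these claims are anchored to is the standard textbook value (my own worked number, not taken from the post):

```latex
E_{\text{Landauer}} = k_B T \ln 2
  \approx \left(1.38 \times 10^{-23}\,\mathrm{J/K}\right)
          \times \left(300\,\mathrm{K}\right) \times 0.693
  \approx 2.9 \times 10^{-21}\,\mathrm{J}
  \approx 0.018\,\mathrm{eV}
```

Claim 2 above is then that reliable, fast, irreversible switching requires a non-trivial multiple of this energy per bit erasure.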
And the brain vs computer claims:
5.) The brain is near the Pareto frontier for practical 10 W computers, and makes reasonably good tradeoffs between size, speed, heat, and energy as a computational platform for intelligence.

6.) Computers are approaching the same Pareto frontier (although currently in a different region of design space); shrinkage is nearing its end.
[1] As an example, DaemonicSigil's recent post is in the right direction.
However, after reading Jacob Cannell's response, I did not feel the post seriously engaged with the technical material; it retreated to the much weaker claim that maybe exotic reversible computation could break the limits Jacob posits, which I found unconvincing. The original post is quite clear that the limits are only for non-exotic computing architectures.
Ethernet cables are twisted pair and will probably never be able to go that fast. You can get above 10 GHz with rigid coax cables, although you still have significant attenuation.

Let's compute the heat loss in 100 m of LDF5-50A, which evidently has 10.9 dB/100 m attenuation at 5 GHz. This is very low in my experience, but it's what they claim.
Say we put 1 W of signal power at 5 GHz in one side. With 10.9 dB of attenuation, we receive about 81 mW out the other side, with roughly 919 mW lost to heat.

The Shannon–Hartley theorem says that we can compute the capacity of the wire as C = B·log₂(1 + S/N), where B is the bandwidth, S is the received signal power, and N is the noise power.

Let's assume Johnson noise. These cables are rated up to 100 °C, so I'll use that temperature, although it doesn't make a big difference.

If I plug in 5 GHz for B, 81 mW for S, and k_B·(370 K)·(5 GHz) ≈ 2.6×10⁻¹¹ W for N, then I get a channel capacity of about 160 Gbit/s.

The heat lost is then (919 mW)/(160 Gbit/s)/(100 m) ≈ 0.06 fJ/bit/mm. Quite low compared to Jacob's ~10 fJ/mm "theoretical lower bound."
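The calculation above is easy to reproduce; here is a quick numerical sketch using the stated cable figures (1 W in, 10.9 dB over the 100 m run, 5 GHz bandwidth, Johnson noise at 370 K):

```python
import math

# Assumed figures from the text (LDF5-50A coax, 100 m run at 5 GHz).
k_B = 1.380649e-23        # Boltzmann constant, J/K
P_in = 1.0                # input signal power, W
atten_dB = 10.9           # total attenuation over 100 m, dB
B = 5e9                   # bandwidth, Hz
T = 370.0                 # cable temperature, K (rated up to ~100 °C)
length_mm = 100 * 1000    # 100 m expressed in mm

S = P_in * 10 ** (-atten_dB / 10)   # received signal power, W
heat = P_in - S                     # power dissipated in the cable, W
N = k_B * T * B                     # Johnson noise power, W
C = B * math.log2(1 + S / N)        # Shannon capacity, bit/s

fJ_per_bit_mm = heat / C / length_mm * 1e15
print(f"received {S*1e3:.0f} mW, capacity {C/1e9:.0f} Gbit/s, "
      f"{fJ_per_bit_mm:.3f} fJ/bit/mm")
```

With these inputs the script lands at roughly 160 Gbit/s and ~0.06 fJ/bit/mm, i.e. more than two orders of magnitude below Jacob's ~10 fJ/mm figure.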
One free parameter is the signal power. The heat loss in the cable is linear in the signal power, while the channel capacity is only logarithmic in it, so lowering the signal power reduces the energy cost per bit. The cost only reaches 10 fJ/bit/mm at a couple hundred watts of input power, which is quite a lot!
Another is the noise power. I assumed Johnson noise, which may be a reasonable assumption for an isolated coax cable, but not for an interconnect on a CPU. Adding an order of magnitude or two to the noise power barely changes the final energy cost per bit (it rises only to about 0.07 fJ/bit/mm); however, I doubt even that covers the amount of noise in a CPU interconnect.
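Both sensitivity claims can be checked with the same model. The sweep below is a sketch under the same assumed cable figures; the power levels and noise multipliers are just illustrative choices:

```python
import math

k_B = 1.380649e-23                       # Boltzmann constant, J/K
B, T = 5e9, 370.0                        # bandwidth (Hz), temperature (K)
atten_dB, length_mm = 10.9, 100 * 1000   # 10.9 dB over 100 m (= 1e5 mm)

def fJ_per_bit_mm(P_in, noise_mult=1.0):
    """Energy dissipated per bit per mm for a given input power (W) and
    noise power expressed as a multiple of the Johnson-noise baseline."""
    S = P_in * 10 ** (-atten_dB / 10)        # received power, W
    heat = P_in - S                          # dissipated in the cable, W
    N = noise_mult * k_B * T * B             # noise power, W
    C = B * math.log2(1 + S / N)             # Shannon capacity, bit/s
    return heat / C / length_mm * 1e15

# Cost per bit grows with input power: heat is linear in P_in,
# capacity only logarithmic.
for P in (1, 10, 100, 300):
    print(f"{P} W in: {fJ_per_bit_mm(P):.2f} fJ/bit/mm")

# ...but grows only weakly with noise power, again because capacity
# shrinks only logarithmically as N rises.
for mult in (1, 10, 100):
    print(f"{mult}x Johnson noise: {fJ_per_bit_mm(1.0, mult):.3f} fJ/bit/mm")
```

The sweep shows the cost crossing 10 fJ/bit/mm somewhere between 200 and 300 W of input, while a 100x noise increase at 1 W only nudges the figure up by around a quarter.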
Similarly, raising the cable attenuation to 50 dB/100 m does not even double the heat loss per bit; Shannon's theorem still allows a significant capacity. It's just a question of whether the receiver can read such small signals.
Typical interconnects in CPUs and the like sit in the realm of 10-100 fJ/bit/mm because of a wide range of engineering constraints, not because there is a theoretical minimum. Feel free to check my numbers, of course; I did this pretty quickly.