This is to announce a $250 prize for spot-checking, or otherwise reviewing in depth, Jacob Cannell's technical claims concerning thermodynamic & physical limits on computation, and his claim of the biological efficiency of the brain, in his post Brain Efficiency: Much More Than You Wanted To Know.
I've been quite impressed by Jake's analysis ever since it came out, and I have been puzzled by how little discussion it has received, since, if true, it seems quite important. That said, I have to admit I personally cannot assess whether the analysis is correct. This is why I am announcing this prize.
Whether Jake's claims concerning DOOM & FOOM really follow from his analysis is up for debate. Regardless, his analysis seems to me to have large implications for how the future might go and for what future AI will look like.
- I will personally judge whether I think an entry warrants a prize.[1]
- If you are also interested in seeing this situation resolved, I encourage you to increase the prize pool!
EDIT: some clarifications
- You are welcome to discuss DOOM & FOOM and the relevance or lack thereof of Jake's analysis, but note that I will only consider (spot)checking of Jacob Cannell's technical claims.
- In case of multiple serious entries, I will do my best to split the prize money fairly.
- Note that I will not be judging who is right. Instead, I will judge whether the entry has seriously engaged with Jacob Cannell's technical claims in a way that moves the debate forward. That is, I will award points for 'pushing the depth of the debate tree' beyond what it was before.
- By technical claims I mean all technical claims made in the brain efficiency post, broadly construed, as well as claims made by Jacob Cannell in other posts/comments.
These claims especially include: limits to energy efficiency, interconnect losses, the Landauer limit, convection vs blackbody radiation, claims concerning the effective working memory of the human brain versus that of computers, the end of Moore's law, CPU vs GPU vs neuromorphic chips, etc.
Here's Jacob Cannell's own summary of his claims (a short worked example of the Landauer bound in claim 2 follows the list):
1.) Computers are built out of components which are themselves just simpler computers; this bottoms out, at the limits of miniaturization, in minimal molecular-sized (few nm) computational elements (cellular automata/tiles). Further shrinkage is believed impossible in practice due to various constraints (overcoming these constraints, if even possible, would require very exotic far-future tech).
2.) At this scale the Landauer bound represents the ambient-temperature-dependent noise (which can also manifest as a noise voltage). Reliable computation at speed is only possible using non-trivial multiples of this base energy, for the simple reasons described by Landauer and elaborated on in the other refs in my article.
3.) Components can be classified as computing tiles or interconnect tiles, where the latter is simply a computer which computes the identity but moves the input to an output in some spatial direction. Interconnect tiles can be irreversible or reversible, but the latter has enormous tradeoffs in size (i.e. optical) and/or speed or other variables and is thus not used by brains or GPUs/CPUs.
4.) Fully reversible computers are possible in theory but have enormous negative tradeoffs in size/speed due to: 1.) the need to avoid erasing bits throughout intermediate computations, 2.) the lack of immediate error correction (achieved automatically in dissipative interconnect by erasing at each cycle), leading to error build-up which must be corrected/erased (costing energy), and 3.) high sensitivity to noise/disturbance due to 2.
And the brain vs computer claims:
5.) The brain is near the Pareto frontier for practical 10W computers, and makes reasonably good tradeoffs between size, speed, heat and energy as a computational platform for intelligence.
6.) Computers are approaching the same Pareto frontier (although currently in a different region of design space) - shrinkage is nearing its end.
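To make the Landauer bound in claim 2 concrete, here is a quick back-of-the-envelope sketch in Python. The 300 K ambient temperature and the 100x reliability margin are illustrative assumptions on my part, not numbers taken from Jacob's post:

```python
import math

# Back-of-the-envelope: Landauer's bound k*T*ln(2) per bit erased.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # assumed ambient temperature, K (~room temperature)

E_bit = k_B * T * math.log(2)   # minimum erasure energy per bit, joules
print(f"Landauer bound at {T:.0f} K: {E_bit:.2e} J (~{E_bit / 1.602e-19:.3f} eV) per bit")

# Claim 2 says reliable computation at speed needs a non-trivial multiple of this
# base energy; a purely illustrative 100x margin would give:
print(f"With an assumed 100x reliability margin: {100 * E_bit:.2e} J per bit-op")
```

At room temperature this works out to roughly 3e-21 J (about 0.02 eV) per erased bit, which is the baseline the efficiency arguments multiply up from.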
[1] As an example, DaemonicSigil's recent post is in the right direction.
However, after reading Jacob Cannell's response, I did not feel the post seriously engaged with the technical material: it retreated to the much weaker claim that maybe exotic reversible computation could break the limits that Jacob posits, which I found unconvincing. The original post is quite clear that the limits are only for non-exotic computing architectures.
There are two types of energy associated with a current that we should distinguish. First, there's the power flowing through the circuit; second, there's the energy associated with having current flowing in the wire at all. So if we're looking at a piece of extension cord that's powering a lightbulb, the power flowing through the circuit is what's making the lightbulb shine. This is governed by the equation P=IV. But there's also some energy associated with simply having current in the wire: for example, you can work out the magnetic field around a wire carrying a given current and calculate the energy stored in that field. (This energy is associated with the inductance of the wire.) Similarly, the kinetic energy associated with the electron drift velocity is also there just because the wire has current flowing through it. (This is typically a very small amount of energy.)
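To get a feel for the sizes involved, here is a rough numeric sketch. The supply voltage, current, and wire geometry are assumed values picked for illustration, and the internal-inductance figure is the textbook result for a straight round wire:

```python
import math

# Circuit: an extension cord powering a light bulb (all values assumed for illustration).
V = 120.0        # supply voltage, V
I = 0.5          # current, A
P = I * V        # power delivered to the bulb, P = IV
print(f"Power flowing through the circuit: {P:.0f} W")

# Energy associated with simply having current in the wire, per metre of wire:
# 1) magnetic-field energy, using the textbook internal self-inductance of a
#    straight round wire, mu0 / (8*pi) henries per metre
mu_0 = 4e-7 * math.pi
L_per_m = mu_0 / (8 * math.pi)               # ~5e-8 H/m
E_mag = 0.5 * L_per_m * I**2                 # J per metre
print(f"Magnetic-field energy per metre: {E_mag:.1e} J")

# 2) kinetic energy of the electron drift, for a 1 mm^2 copper wire
n, q, m_e = 8.5e28, 1.602e-19, 9.11e-31      # carrier density (m^-3), charge (C), mass (kg)
A = 1e-6                                     # cross-section, m^2
v_drift = I / (n * q * A)                    # drift velocity, m/s
E_kin = 0.5 * m_e * (n * A) * v_drift**2     # J per metre of wire
print(f"Drift velocity: {v_drift:.1e} m/s; drift kinetic energy per metre: {E_kin:.1e} J")
```

Both "current is flowing" energies come out many orders of magnitude smaller than the ~60 J the bulb dissipates each second in this example.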
To see that these types have to be distinct, think about what happens when we double the voltage going into the extension cord and also double the resistance of the lightbulb it's powering. Current stays the same, but with twice the voltage we now have twice the power flowing to the light bulb. Because current hasn't changed, neither has the magnetic field around the wire, nor the drift velocity. So the energy associated with having a current flowing in this wire is unchanged, even though the power provided to the light bulb has doubled. The important thing about the drift velocity in the context of P=IV is that it moves charge. We can calculate the potential energy associated with a charge in a wire as E=qV, and then taking the time derivative gives the power equation. It's true that drift velocity is also a velocity, and thus the charge carriers have kinetic energy too, but this is not the energy that powers the light bulb.
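A quick numeric check of this doubling argument, with arbitrary illustrative values for the supply voltage and bulb resistance:

```python
# Doubling the supply voltage and the bulb's resistance (values are arbitrary assumptions):
V1, R1 = 120.0, 240.0
V2, R2 = 2 * V1, 2 * R1

I1, I2 = V1 / R1, V2 / R2      # Ohm's law: the current is unchanged
P1, P2 = I1 * V1, I2 * V2      # but the power delivered to the bulb doubles

print(f"I1 = {I1:.2f} A, I2 = {I2:.2f} A  (same current -> same B-field, same drift velocity)")
print(f"P1 = {P1:.0f} W, P2 = {P2:.0f} W  (power to the bulb doubles)")
```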
In terms of exponential attenuation, even DC through resistors gives exponential attenuation if you have a "transmission line" configuration of resistors: a ladder of repeated series resistors with shunt resistors to ground at each node (see the sketch below).
So exponential attenuation doesn't seem too unusual or surprising to me.
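For concreteness, here is a minimal sketch of that kind of resistor ladder, assuming repeated sections of a series resistor followed by a shunt resistor to ground (the particular resistor values are arbitrary). Each section forms the same voltage divider, so the node voltage falls by a constant factor per step, i.e. exponentially in the number of sections:

```python
import math

def ladder_voltages(v_in, r_series, r_shunt, n_sections):
    """Node voltages along a long ladder of series/shunt resistors.

    Uses the infinite-ladder approximation: the input resistance R of the ladder
    satisfies R = r_series + (r_shunt * R) / (r_shunt + R), so every section is an
    identical voltage divider and the node voltage falls by the same factor each step.
    """
    R = (r_series + math.sqrt(r_series**2 + 4 * r_series * r_shunt)) / 2
    r_parallel = r_shunt * R / (r_shunt + R)
    divider = r_parallel / (r_series + r_parallel)
    return [v_in * divider**k for k in range(n_sections + 1)]

for k, v in enumerate(ladder_voltages(1.0, r_series=1.0, r_shunt=10.0, n_sections=5)):
    print(f"node {k}: {v:.4f} V")   # constant ratio per step => exponential attenuation
```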