The existence of invasive species proves that, at any given time, there are probably loads of possible biological niches that no animal is exploiting.
I believe that plants are ≳ 1 OOM below the best human solution for turning solar energy into chemical energy, as measured in power conversion efficiency.[1] (Update: Note that Jacob is disputing this claim and I haven’t had a chance to get to the bottom of it. See thread below.) (Then I guess someone will say “evolution wasn't optimizing for just efficiency; it also has to get built using biologically-feasible materials and processes, and be compatible with other things happening in the cell, etc.” And then I'll reply, “Yeah, that's the point. Human engineers are trying to do a different thing with different constraints.”)
The best human solution would be a 39%-efficient quadruple-junction solar cell, wired to a dedicated electrolysis setup. The electrolysis efficiency seems to be as high as 95%? Multiply those together and we get ≈10× the “peak” plant efficiency number mentioned here, with most plants doing significantly worse than that. ↩︎
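As a rough sanity check of that footnote in code: the 39% and 95% figures are from the footnote, while the ~4% peak plant efficiency is an assumed placeholder, since the exact number being referenced isn't reproduced here.

```python
# Back-of-the-envelope check of the solar-to-chemical-energy comparison above.
solar_cell_eff = 0.39      # quadruple-junction solar cell (from footnote)
electrolysis_eff = 0.95    # dedicated electrolysis setup (from footnote)
peak_plant_eff = 0.04      # assumed peak photosynthetic efficiency (placeholder)

engineered_eff = solar_cell_eff * electrolysis_eff
print(f"engineered solar-to-fuel efficiency: {engineered_eff:.2%}")                        # ~37%
print(f"ratio over assumed peak plant efficiency: {engineered_eff / peak_plant_eff:.1f}x")  # ~9x
```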
Do you have a source in mind for photosynthesis efficiency?
According to this source, some algae have photosynthetic efficiency above 20%:
On the other hand, studies have shown the photosynthetic efficiency of microalgae could well be in the range of 10–20% or higher (Huntley and Redalje 2007). Simple structure of algae allows them to achieve substantially higher PE values compared to terrestrial plants. PE of different microalgal species has been given in (Table 2)
...Pirt et al. (1980) have suggested that even higher levels of PE can be attained by microalgae
I am confused about the Landauer limit for biological cells other than nerve cells, since it only applies to computation. But I want to ask: is this notion actually true?
Biological cells are robots that must perform myriad physical computations, all of which are tightly constrained by the thermodynamic Landauer Limit. This applies to all the critical operations of cells including DNA/cellular replication, methylation, translation, etc.
The lower Landauer bound is 0.02 eV, which translates into a minimal noise voltage of 20mV. Ion flows in neural signals operate on voltage swings around 100mV, close to the practical limits at low reliability levels.
The basic currency of chemical energy in biology is ATP, which is equivalent to about 1e-19J or roughly 1 eV, the practical limit for reliable computation. Proteins can perform various reliable computations from single or few ATP transactions, including transcription.
A cell has shared read only storage through the genome, and then a larger writable storage system via the epigenome, which naturally is also near thermodynamically optimal, typically using 1 ATP to read or write a bit or two reliably.
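As a quick check on those numbers, here is a minimal sketch, assuming T ≈ 300 K and the ~1e-19 J per ATP figure claimed above (the actual free energy of ATP hydrolysis depends on cellular conditions):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
q_e = 1.602177e-19   # elementary charge, C (also J per eV)
T = 300.0            # assumed temperature, K

landauer_J = k_B * T * math.log(2)            # minimum energy to erase one bit
landauer_eV = landauer_J / q_e
print(f"Landauer limit: {landauer_eV:.3f} eV per bit")      # ~0.018 eV, i.e. ~0.02 eV
# 0.02 eV per elementary charge corresponds to ~20 mV, vs ~100 mV neural voltage swings.
print(f"equivalent voltage: {landauer_eV * 1000:.0f} mV")

atp_J = 1e-19        # assumed energy per ATP transaction (order of magnitude, as claimed)
print(f"ATP / Landauer ratio: {atp_J / landauer_J:.0f}")    # a few tens of Landauer bits per ATP
```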
From "Information and the Single Cell":
Thus, the epigenome provides a very appreciable store of cellular information, on the order of 10 gigabytes per cell. It also operates over a vast range of time scales, with some processes changing on the order of minutes (e.g. receptor transcription) and others over the lifetime of the cell (irreversible cell fate decisions made during development). Finally, the processing costs are low: reading a 2-bit base-pair costs only 1 ATP.
Computation by wetware is vastly less expensive than cell signaling [11]; a 1-bit methylation event costs 1 ATP (though maintaining methylation also incurs some expense [63]).
According to estimates in "Science and Engineering Beyond Moore's Law", an E. coli cell has a power dissipation rate of 1.4e-13 W and takes 2400 s to replicate, which implies a thermodynamic limit of at most ~1e11 bits, close to their estimates of the cell's total information content:
This result is remarkably close to the experimental estimates of the informational content of bacterial cells based on microcalorimetric measurements which range from 1e11 to 1e13 bits per cell. In the following, it is assumed that 1 cell = 1e11 bit, i.e., the conservative estimate is used
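Reproducing that estimate is straightforward (the 1.4e-13 W and 2400 s figures come from the quoted paper; T ≈ 300 K is an assumption):

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # assumed temperature, K

power_W = 1.4e-13         # E. coli power dissipation rate (from the quoted estimate)
replication_s = 2400.0    # replication time (from the quoted estimate)

energy_J = power_W * replication_s        # ~3.4e-10 J dissipated per replication
landauer_J = k_B * T * math.log(2)        # ~2.9e-21 J per bit erased
max_bits = energy_J / landauer_J
print(f"thermodynamic upper bound: ~{max_bits:.1e} bits")   # ~1e11 bits
```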
A concrete setting in which to think about this would be the energy cost of an exonuclease severing a single base pair from a DNA molecule that was randomly synthesized and inserted into a test tube containing a mixture of nucleotide and nucleoside monomers. The energy cost of severing the base pair, dissociating its hydrogen bonds, and separating the pieces irretrievably into the surrounding mixture of identical monomers using thermal energy would be the energy cost of deleting 2 bits of information.
Unfortunately, I haven't been able to find the amount of ATP consumed ...
I may be assuming familiarity with the physics of computation and reversible computing.
Copying information necessarily overwrites and thus erases information (whatever was stored prior to the copy write). Consider a simple memory with 2 storage cells. Copying the value of cell 0 to cell 1 involves reading from cell 0 and then writing said value to cell 1, overwriting whatever cell 1 was previously storing.
The only way to write to a memory without erasing information is to swap, which naturally is fully reversible. So a reversible circuit could swap the contents of the storage cells, but swap is fundamentally different from copy. Reversible circuits basically replace all copies/erasures with swaps, which dramatically blows up the circuit (they always have the same number of outputs as inputs, so simple circuits like AND produce an extra garbage output which must propagate indefinitely).
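Here is a toy illustration of that copy-vs-swap distinction, purely at the level of a two-cell memory (nothing biological, just the logical point):

```python
# Toy two-cell memory showing why copy erases information but swap does not.

def copy(mem):
    """Irreversible: cell 1's old value is lost; many inputs map to one output."""
    return [mem[0], mem[0]]

def swap(mem):
    """Reversible: applying swap twice recovers the original state."""
    return [mem[1], mem[0]]

print(copy([0, 1]), copy([0, 0]))   # both give [0, 0] -> cannot be inverted
print(swap(swap([0, 1])))           # gives back [0, 1] -> invertible
```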
An assembler which takes some mix of atoms/parts from the environment and then assembles them into some specific structure is writing information and thus also erasing information. The assembly process removes/erases entropy from the original configuration of the environment (atoms/parts) memory, ...
No animals do nuclear fusion to extract energy from their food, meaning that they're about 11 orders of magnitude off from the optimal use of matter.
The inverted vs. everted retina thing is interesting, and it makes sense that there are space-and-mass-saving advantages to putting neurons inside the eye, especially if your retinas are a noticeable fraction of your weight (hence the focus on "small, highly-visual species"). But it seems like for humans in particular having an everted retina would likely be better: "The results from modelling nevertheless indicate clearly that the inverted retina offers a space-saving advantage that is large in small eyes and substantial even in relatively large eyes. The advantage also increases with increasingly complex retinal processing and thus increasing retinal thickness. [...] Only in large-eyed species, the scattering effect of the inverted retina may indeed pose a disadvantage and the everted retina of cephalopods may be superior, although it also has its problems." (Kröger and Biehlmaier 2009)
But anyhow, which way around my vs. octopuses' retinas are isn't that big a mistake either way - certainly not an order of magnitude.
To get that big of an obvious failure you might have to go to more extreme stuff like the laryngeal nerve of the giraffe. Or maybe scurvy in humans.
Overall, [shrug]. Evolution's really good at finding solutions but it's really path-dependent. I expect it to be better than human engineering in plenty of ways, but there are plenty of ways the actual global optimum is way too weird to be found by evolution.
No animals do nuclear fusion to extract energy from their food, meaning that they're about 11 orders of magnitude off from the optimal use of matter.
That isn't directly related to any of the claims I made, which specifically concerned the thermodynamic efficiency of cellular computations, the eye, and the brain.
Nuclear fusion may simply be impossible to realistically harness with a cell-sized machine self-assembled out of common elements.
The article I linked argues that the inverted retina is near optimal, if you continue reading...
The scattering effects are easily compensated for:
Looking out through a layer of neural tissue may seem to be a serious drawback for vertebrate vision. Yet, vertebrates include birds of prey with the most acute vision of any animal, and even in general, vertebrate visual acuity is typically limited by the physics of light, and not by retinal imperfections.
...So, in general, the apparent challenges with an inverted retina seem to have been practically abolished
I'm assuming you're using "global maximum" as a synonym for "pareto optimal," though I haven't heard it used in that sense before. There are plenty of papers arguing that one biological trait or another is pareto optimal. One such (very cool) paper, "Motile curved bacteria are Pareto-optimal," aggregates empirical data on bacterial shapes, simulates them, and uses the results of those simulations to show that the range of shapes represent tradeoffs for "efficient swimming, chemotaxis, and low cell construction cost."
It finds that most shapes are pretty efficient swimmers, but slightly elongated round shapes and curved rods are fastest, and long straight rods are notably slower. However, these long straight rod-shaped bacteria have the highest chemotactic signal/noise ratio, because they can better resist being jostled around by random liquid motion. Meanwhile, spherical shapes are probably easiest to construct, since you need special mechanical structures to hold rod and helical shapes. Finally, they show that all but two bacterial species they examined have body shapes that are on the pareto frontier.
If true, what would this "pareto optimality" principle mean generally?
Conservatively, it would indicate that we won't often find bad biological designs. If a design appears suboptimal, it suggests we need to look harder to identify the advantage it offers. Along this theme, we should be wary of side effects when we try to manipulate biological systems. These rules of thumb seem wise to me.
It's more of a stretch to go beyond caution about side effects and claim that we're likely to hit inescapable tradeoffs when we try to engineer living systems. Human goals diverge from maximizing reproductive fitness, we can set up artificial environments to encourage traits not adaptive in the wild, and we can apply interventions to biological systems that are extremely difficult, if not impossible, for evolution to construct.
Take the bacteria as an example. If this paper's conclusions are true, then elongated rods have the highest chemotactic SNR, but are difficult to construct. In the wild, that might matter a lot. But if we want to grow a f*ckload of elongated rod bacteria, we can build some huge bioreactors and do so. In general, we can deal with a pareto frontier by eliminating the bottleneck that locks us into a particular position on the frontier.
Likewise, the human body faces a tradeoff between being too vigilant for cancer (and provoking harmful autoimmune responses) and being too lax (and being prone to cancer). But we humans can engineer ever-more-sophisticated systems to detect and control cancer, using technologies that simply are not available to the body (perhaps in part for other pareto frontier reasons). We still face serious side effects when we administer chemo to a patient, but we can adjust not only the patient's position on the pareto frontier, but also the location of that frontier itself.
The most relevant pareto-optimality frontiers are computational: biological cells being computationally near optimal in both storage density and thermodynamic efficiency seriously constrains or outright dashes the hopes of nanotech improving much on biotech. This also indirectly relates to brain efficiency.
Basically, as far as I can tell, the answer is no, except with a bunch of qualifiers. Jacob Cannell has at least given some evidence that biology reliably finds pareto optimalish designs, but not global maximums.
In particular, his claims about biology never being improved by nanotech are subject to Extremal Goodhart.
For example, quantum computing/reversible computing or superconductors would entirely break his statement about optimal nanobots.
The ultimate limits from reversible computing/quantum computers are covered here:
https://arxiv.org/abs/quant-ph/9908043
From Gwern:
No, it's not. As I said, a skyscraper of assumptions each more dubious than the last. The entire line of reasoning from fundamental physics is useless because all you get is vacuous bounds like 'if a kg of mass can do 5.4e50 quantum operations per second and the earth is 6e24 kg then that bounds available operations at 3e65 operations per second' - which is completely useless because why would you constrain it to just the earth? (Not even going to bother trying to find a classical number to use as an example - they are all, to put it technically, 'very big'.) Why are the numbers spat out by appeal to fundamental limits of reversible computation, such as but far from limited to, 3e75 ops/s, not enough to do pretty much anything compared to the status quo of systems topping out at ~1.1 exaflops or 1.1e18, 57 orders of magnitude below that one random guess? Why shouldn't we say "there's plenty of room at the top"? Even if there wasn't and you could 'only' go another 20 orders of magnitude, so what? what, exactly, would it be unable to do that it would if you subtracted or added 10 orders of magnitude* and how do you know that? why would this not decisively change economics, technology, politics, recursive AI scaling research, and everything else? if you argue that this means it can't do something in seconds and would instead take hours, how is that not an 'intelligence explosion' in the Vingean sense of being an asymptote and happening far faster than prior human transitions taking millennia or centuries, and being a singularity past which humans cannot see nor plan? Is it not an intelligence explosion but an 'intelligence gust of warm wind' if it takes a week instead of a day? Should we talk about the intelligence sirocco instead? This is why I say the most reliable part of your 'proof' are also the least important, which is the opposite of what you need, and serves only to dazzle and 'Eulerize' the innumerate.
- btw I lied; that multiplies to 3e75, not 3e65. Did you notice?
Landauer's limit only 'proves' that when you stack it on a pile of assumptions a mile high about how everything works, all of which are more questionable than it. It is about as reliable a proof as saying 'random task X is NP-hard, therefore, no x-risk from AI'; to paraphrase Russell, arguments from complexity or Landauer have all the advantages of theft over honest toil...
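For reference, the arithmetic in that quote can be checked directly (all figures here are the ones used in the quote):

```python
import math

ops_per_kg = 5.4e50       # quantum operations per second per kg (figure used in the quote)
earth_mass_kg = 6e24      # mass of the Earth (figure used in the quote)
status_quo_ops = 1.1e18   # ~1.1 exaflops, today's top systems (figure used in the quote)

bound = ops_per_kg * earth_mass_kg
print(f"bound: {bound:.1e} ops/s")                                  # ~3.2e75, not 3e65
print(f"orders of magnitude above status quo: "
      f"{math.log10(bound / status_quo_ops):.0f}")                  # ~57
```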
Links to comments here:
https://www.lesswrong.com/posts/yenr6Zp83PHd6Beab/?commentId=PacDMbztz5spAk57d
https://www.lesswrong.com/posts/yenr6Zp83PHd6Beab/?commentId=HH4xETDtJ7ZwvShtg
One important implication is that in practice, it doesn't matter whether biology has found a pareto optimal solution, since we can usually remove at least one constraint that applies to biology and evolution, even if it's as simple as editing many, many genes at once to completely redesign the body.
This also regulates my Foom probabilities: I put a 1-3% chance on the first AI fooming by 2100. Contra Jacob Cannell, Foom is possible, if improbable. Inside his model, everything checks out; it's outside the model where he goes wrong.
For example, quantum computing/reversible computing or superconductors would entirely break his statement about optimal nanobots.
Reversible/Quantum computing is not as general as irreversible computing. Those paradigms only accelerate specific types of computations, and they don't help at all with bit erasing/copying. The core function of a biological cell is to replicate, which requires copying/erasing bits, which reversible/quantum computing simply don't help with at all, and in fact just add enormous extra complexity.
If biology found the global maximum, we would expect different species to converge on the same photosynthesis process, and we would not be able to improve one species' photosynthesis by swapping it out for that of another.
https://www.science.org/content/article/fight-climate-change-biotech-firm-has-genetically-engineered-very-peppy-poplar suggests that you can make trees grow faster by adding pumpkin and green algae genes.
Without reading the link, that sounds like the exact opposite of the conclusion you should reach. Are they implanting specific genes, or many genes?
Unless by "global" you mean "local", I don't see why this statement would hold? Land animals never invented the wheel even where wheeling is more efficient than walking (like steppes and non-sandy deserts). Same with catalytic coal burning for energy (or other easily accessible energy-dense fossil fuel consumption). Both would give extreme advantages in speed and endurance. There are probably tons of other examples, like direct brain-to-brain communication by linking nerve cells instead of miming and vocalizing.
Is this all there is to Jacob’s comment? Does he cite sources? It’s hard to interrogate without context.
His full comment is this:
The paper you linked seems quite old and out of date. The modern view is that the inverted retina, if anything, is a superior design vs the everted retina, but the tradeoffs are complex.
This is all unfortunately caught up in some silly historical "evolution vs creationism" debate, where the inverted retina was key evidence for imperfect design and thus inefficiency of evolution. But we now know that evolution reliably finds pareto optimal designs:
- Biological cells operate close to the critical Landauer Limit, and thus are pareto-optimal practical nanobots.
- Eyes operate at optical and quantum limits, down to single photon detection.
- The brain operates near various physical limits, and is probably also near pareto-optimal in its design space.
He cites one source on the inverted eye's superiority over the everted eye, link here:
https://scholar.google.com/scholar?cluster=3322114030491949344&hl=en&as_sdt=2005&sciodt=0,5
His full comment is linked here:
https://www.lesswrong.com/posts/GCRm9ysNYuGF9mhKd/?commentId=aGq36saoWgwposRHy
That's the context.
Jacob Cannell has claimed that biological systems get within 1 OOM of not just a local maximum, but the global maximum, in their abilities.
His comments about biology nearing various limits are reproduced here:
Link to comment here:
https://www.lesswrong.com/posts/GCRm9ysNYuGF9mhKd/?commentId=aGq36saoWgwposRHy
I am confused about the Landauer limit for biological cells other than nerve cells, since it only applies to computation. But I want to ask: is this notion actually true?
And if this view were true, what would the implications be for technology?