Another consideration is generation length. Even restricting ourselves to hardware replacement, a recursively improving AI should be able to build a new generation on the order of weeks or months. Humans take a minimum of twelve years, and in practice quite a bit more than that most of the time. Even if we end up on the curve first, the difference in constant factors may dominate.
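A toy back-of-envelope sketch of why that constant factor matters (the generation times below are my own illustrative assumptions, not figures from this discussion):

```python
# Toy model (invented numbers): both lineages double in capability each
# generation; only the generation time differs.
def iterations(horizon_years: float, generation_years: float) -> float:
    """Number of self-improvement cycles completed within the horizon."""
    return horizon_years / generation_years

horizon = 30  # years

ai_cycles = iterations(horizon, generation_years=0.25)   # ~3-month hardware cycle
bio_cycles = iterations(horizon, generation_years=15.0)  # maturation plus slack

print(f"AI self-improvement cycles in {horizon} years:  {ai_cycles:.0f}")   # 120
print(f"Bio self-improvement cycles in {horizon} years: {bio_cycles:.0f}")  # 2
# Even with a head start of a generation or two, 2 doublings vs. 120 is no contest.
```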
Seems to me that this point is by itself a conclusive reason to think that there won't be a fast takeoff in biological intelligence. There may be a single great leap, if we figure out how to make dramatically smarter humans all at once without any trial and error, but there won't be a "fast takeoff" of recursive self-improvement, since that would take a minimum of 12 years for each iteration, and that's not fast.
That said, I agree that we should consider promoting gene editing. Seems like the alignment problem for genetically engineered humans is, well, basically not a problem at all (such humans won't be any less ethical than normal children). So the sooner we get started, the better. But I doubt it will be fast enough.
I agree human maturation time is enough on its own to rule out a human reproductive biotech 'fast takeoff,' but also:
All of those factors would smooth out any such application, spreading its expected impact over a number of decades, on top of the minimum imposed by maturation times.
I'm not following the logic here. I presume that "fast takeoff" is supposed to mean that someone with increased intelligence from the first improvement is then able to think of a second improvement that would have been beyond what earlier people could have thought of, and so forth for additional improvements. The relevant time interval here is from birth to thinking better than the previous generation, which need have nothing to do with the interval from birth to reproductive maturity (an interval which is not immutable anyway). The person who thinks of the new improvement doesn't have to be one of those who gestate the next generation.
I wasn't thinking of reproductive maturity, I was thinking of it in the same way as you. We make some gengineered people, who grow up and become smart, and then they figure out how to make the next generation, etc. Well, how long does it take to grow up and become smart? 12 years seems like an optimistic estimate to me.
Or are you thinking that we could use CRISPR to edit the genes of adult humans in ways that make them smarter within months? Whoa, that blows my mind. Seems very unlikely to me, for several reasons; is it a real thing? Do people think that's possible?
No, I wasn't thinking of modification of adult somatic genes. I was thinking of reproductive maturity taking 12 years, which you're right is also about how long it takes to reach adult levels of cognition (though not knowledge, obviously). The coincidence here leads to the ambiguity in what you said. Actually, I doubt this is a coincidence - it makes biological sense for these two to go together. Neither would be immutable if you're making profound changes to the genome, although if anything, it might be necessary to prolong the period of immaturity in order to get higher intelligence.
> Seems like the alignment problem for genetically engineered humans is, well, basically not a problem at all (such humans won't be any less ethical than normal children).
Why? Seems unlikely to me that there exists a genetic intelligence-dial that just happens to leave all other parameters alone.
I shouldn't have said "basically not a problem at all." I should have said "not much more of a problem than the problem we already face with our own children." I agree that selecting for intelligence might have side-effects on other parameters. But it seems to me those side-effects will likely be small and perhaps even net-positive (it's not as though Einstein, von Neumann, etc. were psychopaths; they seemed pretty normal as far as values were concerned). Certainly we should be much more optimistic about the alignment-by-default of engineered humans than about the alignment of some massive artificial neural net.
Einstein and von Neumann were also nowhere near superintelligent; they are far better representatives of regular humans than of superintelligences. I think the problem goes deeper. As you apply more and more optimization pressure, statistical guarantees begin to fall apart. You don't get sub-agent alignment for free, whether the agent is made of carbon or silicon. Case in point: human values have drifted over time relative to the original goal of inclusive genetic fitness.
OK, yeah, fair enough. Still though, the danger seems less than it is in the machine intelligence case.
The major difference between biological and artificial takeoff is that with biological takeoff, the superintelligent agents would be humans, with human-ish values. They may not be entirely prosocial, but they don't need to be in order to keep from turning the world into paperclips. If they're anything like modern high-profile supergeniuses, they will just use their talents to make major scientific or industrial advances.
A takeoff in biological intelligence could plausibly take us well beyond modern high-profile supergeniuses, and we have no idea what other parts of our behaviour that would affect. To quote Maxim’s comment above: “Seems unlikely to me that there exists a genetic intelligence-dial that just happens to leave all other parameters alone.”
I don't see why you couldn't just as well say that about any sufficiently advanced "aligned" artificial intelligence.
Another objection is that improvements in biological intelligence will tend to feed into improvements in artificial intelligence. For example, maybe after a couple of generations of biological improvement, the modified humans will be able to design an AI that quickly FOOMs and overtakes the slow generation by generation biological progress.
(It seems likely that once you've picked the low hanging fruit like stuffing people's genomes as full of intelligence-linked genes as possible without giving them genetic diseases, it will be much easier to implement any new intelligence improvements you can think of in code, rather than in proteins. The human brain is a much more sophisticated starting point than any current AI programs, but is probably much harder to modify significantly.)
>(It seems likely that once you've picked the low hanging fruit like stuffing people's genomes as full of intelligence-linked genes as possible without giving them genetic diseases, it will be much easier to implement any new intelligence improvements you can think of in code, rather than in proteins. The human brain is a much more sophisticated starting point than any current AI programs, but is probably much harder to modify significantly.)
A lot of people make claims like this, and I think they're underestimating the flexibility of living things. Dogs and livestock have been artificially selected to emphasize unnatural traits to the point that they might not appear in a trillion wolves or boars, yet people consistently predict that the limit for human intelligence without debilitating disorders is going to be just above whoever is the Terence Tao of the era.
> Dogs and livestock have been artificially selected to emphasize unnatural traits to the point that they might not appear in a trillion wolves or boars
I think you're overestimating biology. Living things are not flexible enough to accommodate GHz clock speeds or lightspeed signal transmission, despite evolution having tinkered with them for billions of years. One in a trillion is just 40 bits, which is not all that impressive; and dogs and livestock took millennia of selective breeding, which is not fast in our modern context.
Upon reflection, I think it's pretty obvious that superhumans are going to find it easier to solve the problem of intelligence and then implement it in code, rather than to keep increasing the intelligence of their children through genetic modification. But note that I'm not talking about "selective breeding" here; I'm thinking more of something like iterated embryo selection, where many, many generations are bred in a short timespan, and then after that possibly direct genetic engineering. Our descendants could apply many more than 40 bits of selection in a few generations that way, far more than we can with straightforward embryo selection.
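For concreteness, here is a rough sketch of the "bits of selection" arithmetic (the 1-in-100 embryo count is my own illustrative assumption; only the one-in-a-trillion figure comes from the comment above):

```python
import math

# "One in a trillion" expressed as bits of selection.
bits_one_in_a_trillion = math.log2(1e12)                  # ~39.9 bits

# Toy iterated-embryo-selection model: picking the single best of N embryos
# contributes at most log2(N) bits of selection per generation.
embryos_per_generation = 100                               # assumed, not from the thread
bits_per_generation = math.log2(embryos_per_generation)    # ~6.6 bits

generations_needed = bits_one_in_a_trillion / bits_per_generation
print(f"{bits_one_in_a_trillion:.1f} bits is roughly {generations_needed:.1f} "
      f"generations of 1-in-{embryos_per_generation} selection")  # ~6 generations
```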
I was assuming they had fast and accurate DNA printers. You have a more limited ability to brute-force test things than evolution does. (How many babies with mental disorders can you create before the project gets cancelled?)
Consider starting with any large modern software project, like OpenOffice. Suppose I want a piece of software like OpenOffice, except with a few changes of wording in the menus: I find the spot and change it. Suppose I want a game of chess: I am writing an entirely new program. In the first case I will use the same programming language; in the second I might not.
The reasons for this dynamic are that
1) The amount of effort is proportional to the amount of code changed (in a fixed language).
2) Some languages are easier than others, given your skillset.
3) Interaction penalties are substantial.
Now think about genetics as another programming language, one in which we already have access to a variety of different programs.
1) and 3) hold. If genetics is a programming language, it's not a nice one. Think about how hard it would be to do arithmetic in a biological system, compared to just about any programming language. How hard would it be to genetically modify a fruit fly brain so that its neurons took in two numbers and added them together? Given current tech, I think this would take at least a major research project.
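To make the contrast concrete, here is the same "take in two numbers and add them" task in an ordinary programming language (a trivial sketch, obviously, but that's the point):

```python
# The fruit-fly addition task from the paragraph above, done in software:
# one line of logic, no research project required.
def add(a: float, b: float) -> float:
    return a + b

print(add(2, 3))  # 5
```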
If you want a small tweak on a human, that isn't too hard to do in genes. If you want to radically change things, it would be easier to use computer code; not because of the difficulty of getting the gene sequence you want, but because of the difficulty of knowing which sequences work.
What is your estimate of the probability of ubiquitous genetic engineering for cognitive enhancement prior to 2045-50 (one estimate of when the Singularity might happen)?
Low, but for regulatory/legal reasons, not technological ones. And this is so far outside my wheelhouse that this prediction is itself very low confidence.
[Speculative and not my area of expertise; probably wrong. Cross-posted from Grand, Unified, Crazy.]
One of the possible risks of artificial intelligence is the idea of “fast” (exponential) takeoff – that once an AI becomes even just a tiny bit smarter than humans, it will be able to recursively self-improve along an exponential curve and we’ll never be able to catch up with it, making it effectively a god in comparison to us poor humans. While human intelligence is improving over time (via natural selection and perhaps whatever causes the Flynn effect), it does so much, much more slowly and in a way that doesn’t seem to be accelerating exponentially.
But maybe gene editing changes that.
Gene editing seems about as close as a biological organism can get to recursively editing its own source code, and with recent advances (CRISPR, etc) we are plausibly much closer to functional genetic manipulation than we are to human-level AI. If this is true, humans could reach fast takeoff in our own biological intelligence well before we build an AI capable of the same thing. In this world we’re probably safe from existential AI risk; if we’re both on the same curve, it only matters who gets started first.
There are a bunch of obvious objections and weaknesses in this analogy which are worth talking through at a high level:
This seems like a reasonable objection, though I do have two counterpoints. The first is that, in humans at least, intelligence seems pretty closely linked to hardware. Software also seems important, but hardware puts strong upper bounds on what is possible. The second counterpoint is that our inability to effectively edit our software source code is, in some sense, a hardware problem; if we could genetically build a better human, capable of more direct meta-cognitive editing… I don’t even know what that would look like.
Given all these objections, I think it’s fairly unlikely that we reach a useful biological intelligence takeoff anytime soon. However, if we actually are close, then the most effective spending on AI safety may not be on AI research at all – it could be on genetics and neuroscience.