SHRDLU was very impressive by any standard. It was released in the very early 1970s, when computers had only a few kilobytes of memory. Fortran was only about 15 years old. People had only just started to program, and back then they were still doing it on paper tape.
SHRDLU took a number of preexisting ideas about language processing and planning and combined them beautifully. And SHRDLU really did understand its tiny world of logical blocks.
Given how much had been achieved in the decade prior to SHRDLU, it was entirely reasonable to assume that real intelligence would be achieved in the relatively near future. Which is, of course, the point of the article.
(Winograd did cheat a bit by using Lisp. Today such a program would need to be written in C++ or possibly Java, which takes much longer. Progress is not unidirectional.)
It stopped being all about genes when genes grew brains.
Yes and no. In the sense that memes float about as well as genes, certainly. But we have strong instincts to raise and protect children, and we have brains. There is no particular reason why we should sacrifice ourselves for our children other than those instincts, which are in our genes.
Makes sense.
It is absolutely true that genetic drift is more common than mutation. Indeed, a major reason for sexual reproduction is to provide alternate genes that can mask other genes broken by mutations.
An AGI would be made up of components in some sense, and those components could be swapped in and out to some extent. If a new theorem prover is created, an AGI may or may not decide to use it. That is similar to gene swapping, but done consciously.
One thing that I would like to see is + and - separated out. If the article received -12 and +0 then it is a loser. But if it received -30 and +18 then it is merely controversial.
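To spell out what I mean, here is a rough sketch in Python; the cutoff for "controversial" is arbitrary, purely for illustration:

```python
def classify(upvotes: int, downvotes: int) -> str:
    """Toy illustration: same net score, very different meaning."""
    net = upvotes - downvotes
    if net >= 0:
        return "liked"
    # A negative net score could mean nobody liked it, or that it split
    # the audience. The cutoff below (half the downvotes) is arbitrary.
    return "controversial" if upvotes >= downvotes // 2 else "loser"

print(classify(0, 12))   # net -12, no support   -> loser
print(classify(18, 30))  # net -12, real support -> controversial
```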
Indeed, and that is perhaps the most important point. Is it really possible to have just one monolithic AGI? Or would we, by its nature, end up with multiple, slightly different AGIs? The latter would be necessary for natural selection.
As to whether spawned AGIs are "children", that is a good question.
Natural selection does not cause variation. It just selects which varieties will survive. Things like sexual selection are just special cases of natural selection.
The trouble with the concept of natural selection is not that it is too narrow, but rather that it is too broad. It can explain just about anything, real or imagined. Modern research has greatly refined the idea and determined how natural selection works in practice, but never refuted it.
I've never understood how one can have "moral facts" that cannot be observed scientifically. But it does not matter; I am not being normative, but merely descriptive. If moral values did not ultimately arise from natural selection, where did they arise from?
Passive in the sense of not being able to actively produce offspring that are like the parents. The "being like" is the genes. Volcanoes do not produce volcanoes in the sense that worms produce baby worms.
For an AI, that means its ability to run on hardware, and to pass its intelligence down to future versions of itself. A little vaguer, but still the same idea.
This is just the idea of evolution through natural selection, a rather widely held idea.
I hate the term "Neural Network", as do many serious people working in the field.
There are perceptrons, which were inspired by neurons but are quite different. There are other related techniques that optimize in various ways. There are real neurons, which are very complex and rather arbitrary. And then there is the greatly simplified integrate-and-fire (IF) abstraction of a neuron, often with Hebbian learning added.
Perceptrons solve practical problems, but are not the answer to everything as some would have you believe. There are new and powerful kernel methods that can automatically condition data, which extend perceptrons. There are many other algorithms, such as learning hidden Markov models. IF neurons are used to try to understand brain functionality, but are not useful for solving real problems (far too computationally expensive for what they do).
Which one of these quite different technologies is being referred to as "Neural Network"?
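To make the point concrete, here is a minimal sketch (Python, parameters made up) of two of the things that get lumped under the name: one perceptron weight update next to one time step of a leaky integrate-and-fire neuron. About the only thing they share is the word "neuron".

```python
import numpy as np

# Perceptron: a learning algorithm with nothing biological left in it.
def perceptron_update(w, x, target, lr=0.1):
    """Classic perceptron rule: nudge the weights only when the
    thresholded prediction disagrees with the target label."""
    prediction = 1 if np.dot(w, x) > 0 else 0
    return w + lr * (target - prediction) * x

# Leaky integrate-and-fire: a (still crude) model of a spiking neuron.
def lif_step(v, current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """One time step: the membrane potential leaks toward rest,
    integrates its input, and spikes when it crosses threshold."""
    v = v + dt * (-v / tau + current)
    if v >= v_thresh:
        return v_reset, True   # spike, then reset
    return v, False

# One learns a decision boundary from labelled data; the other just
# simulates a voltage over time. Calling both a "Neural Network"
# hides how little they have in common.
w = perceptron_update(np.zeros(2), np.array([1.0, -1.0]), target=1)
v, spiked = lif_step(v=0.0, current=0.08)
```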
The idea of wiring perceptrons back onto themselves with state is old. Perceptrons have been shown to be able to emulate just about any function, so yes, they would be Turing complete. Being able to learn meaningful weights for such "recurrent" networks is relatively recent (1990s?).
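For what it is worth, a hand-wavy sketch of what "wiring perceptrons back onto themselves with state" means; the weights here are random rather than learned, since learning them (e.g. by backpropagation through time) is exactly the part that only became practical later:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4

# Random, untrained weights, just to show the wiring.
W_in = rng.normal(size=(n_hidden, n_in))       # input  -> hidden
W_rec = rng.normal(size=(n_hidden, n_hidden))  # hidden -> hidden (the feedback loop)

def step(h, x):
    """One recurrent step: the new state depends on the current input
    AND the previous state, which is what gives the network memory."""
    return np.tanh(W_in @ x + W_rec @ h)

h = np.zeros(n_hidden)
for x in [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]:
    h = step(h, x)   # state carried forward between inputs
```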