Thanks for the reply. It is very helpful.
I am aware of functional programming, but only from having explored it on my own (I am still at City College of San Francisco, and will not transfer to a UC, hopefully Berkeley or UCSD, until this fall). Unfortunately, most community and junior colleges don't teach functional programming, because they are mostly concerned with cranking out code monkeys rather than real computer scientists or cognitive scientists. (My degree is Cog Sci/Computationalism and Computational Engineering, or, by its shorter name, Artificial Intelligence; at least that is what most of the people in the degree program are studying, especially at Berkeley and UCSD, the two places I wish to go.)
So, is the learning-type system you are referring to sub-human-equivalent because it has no random or stochastic processes? Or, to be a little clearer: is it sub-human-equivalent because it is highly deterministic and (as you put it) predictable?
I get what you mean about human body-type adaptation. We still carry the DNA for tails of various types (from reptilian to prehensile), along with DNA for other deprecated body plans. Thus, a human-equivalent AI would need to be flexible enough to adapt to a change in its body plan and tools (at least, that is what I am getting from this).
In another post (which I cannot find, as I need to learn to search my old posts better), I proposed that computers are another form of intelligence, one evolving alongside humans with humans acting as the agent of selection and mutation. Thus, they have followed a vastly different evolutionary pathway than biological intelligence has. I came up with this after hearing Eliezer Yudkowsky speak at one of the Singularity Summits (and maybe at Convergence 08; I cannot recall whether he was there). He talks about Mind Space, and how humans occupy only a single point in Mind Space, while the potential Mind Space is huge (maybe even unbounded; I hope he will correct me if I have misunderstood this).
An uplifting message as we enter the new year, quoted from Edge.org:
A few thoughts: when considering the heavy skepticism that the singularity hypothesis receives, it is important to remember that there is a much weaker hypothesis, highlighted here by Tegmark, that still has extremely counter-intuitive implications for our place in spacetime. One might call it the bottleneck hypothesis: the hypothesis that 21st-century humanity occupies a pivotal place in the evolution of the universe, simply because we may well be part of the small window of space and time during which it is decided whether Earth-originating life will colonize the universe or not.
The bottleneck hypothesis is weaker than the singularity hypothesis: we can be at the bottleneck even if smarter-than-human AI is impossible or extremely impractical, but if smarter-than-human AI is possible and reasonably practical, then we are surely at the bottleneck of the universe. The bottleneck hypothesis rests on less controversial science than the singularity hypothesis, and it is robust to different assumptions about what is feasible in an engineering sense (AI or no AI, ems or no ems, nuclear rockets, generation ships, cryonics advances, etc.), so it might be accepted by a larger number of people.
Related is Hanson's "Dream Time" idea.