Are these types of AI far too primitive to even be considered sub-human, in your opinion?
Not exactly too primitive, but of the wrong structure. Are you familiar with functional programming type notation? An offline learning system can be considered a curried function of type
classify :: Corpus -> (a -> b)
where a and b are the input and output types, and Corpus is the training data. Consider a chess-playing program that learns from previous chess games (for simplicity):
Corpus -> (ChessGameState -> ChessMove)

or a data-mining tool set up for finding terrorists:

Corpus -> ((Passport, FlightItinerary) -> Float)

where the Float is the probability that the person travelling is a terrorist, based on the passport presented and the itinerary.
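To make the shape concrete, here is a minimal Haskell sketch. The fallback argument, the Eq constraint, and the trivial table-lookup "learner" are additions of mine to make it runnable; they are not part of the notation above.

    -- Training data as labelled examples.
    type Corpus a b = [(a, b)]

    -- A deliberately trivial "learner": memorise the corpus and look the
    -- input up, falling back to a default. Real learners generalise; the
    -- point is only that the learned artefact is an ordinary function of
    -- fixed type, so its possible behaviour is bounded by that type.
    classify :: Eq a => b -> Corpus a b -> (a -> b)
    classify fallback corpus x = maybe fallback id (lookup x corpus)

    -- The chess example: once trained, the type tells you everything
    -- the system can ever do, namely map board states to moves.
    data ChessGameState = ChessGameState deriving (Eq)
    data ChessMove      = Resign | SomeMove

    chessPlayer :: Corpus ChessGameState ChessMove -> (ChessGameState -> ChessMove)
    chessPlayer = classify Resign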
They can be very good at their jobs, but they are predictable: you know their types. What I was worried about is learning systems that don't have a well-defined input and output type over their lifetimes.
Consider the humble PC: it doesn't know how many monitors it is connected to or what will be connected to its USB sockets. If you wanted to create a system that could learn to control it, that system would need to learn functions from any type to any type, depending on what was connected.* I think humans and animals are designed to be this kind of system, as our brains have been selected to cope with many different kinds of body with minimal evolutionary change. It is what allows us to add prosthetics and to cope with bodily changes over a lifetime (growth, and the loss of limbs or senses). These systems are a lot more flexible: they can learn things quickly by restricting their search spaces while still keeping a wide range of possible actions.
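For contrast, here is one crude way to sketch the "no fixed type over a lifetime" idea in Haskell, using Data.Dynamic from base. It only illustrates the shape of the problem; it is not a design for such a learner.

    import Data.Dynamic (Dynamic, fromDynamic, toDyn)

    -- A controller whose peripherals change cannot commit to one (a -> b);
    -- its effective interface at any moment depends on what is plugged in.
    type OpenController = Dynamic -> Maybe Dynamic

    -- Today the system has learned an Int -> Int behaviour ...
    today :: OpenController
    today d = toDyn . (+ (1 :: Int)) <$> (fromDynamic d :: Maybe Int)

    -- ... and tomorrow, after new hardware appears, a String -> Int one.
    tomorrow :: OpenController
    tomorrow d = toDyn . length <$> (fromDynamic d :: Maybe String)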
There are further considerations for an intelligence, about the type of the function that determines how the corpus/memory produces the current input/output mapping. But that is another long reply.
*You can represent any function from one type to another as a large integer in a finite system. But with the type notation I am trying to indicate what the system is capable of learning at any one point; we don't search the whole space, for computational-resource reasons.
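As a toy illustration of that footnote (the names here are just for the example): a function from a type with m values to a type with n values is a lookup table, i.e. an m-digit base-n numeral, so a single integer names it, and there are n^m of them.

    -- Decode an integer code as a function {0..m-1} -> {0..n-1}: digit x
    -- of the base-n expansion of the code is the output for input x.
    decodeFn :: Int -> Int -> Integer -> (Int -> Int)
    decodeFn _m n code x =
      fromIntegral ((code `div` (toInteger n ^ x)) `mod` toInteger n)

    -- The size of the search space: n^m candidate functions. Even modest
    -- m and n make this astronomically large, which is why a learner must
    -- restrict its search space rather than enumerate codes.
    searchSpace :: Integer -> Integer -> Integer
    searchSpace m n = n ^ m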
Thanks for the reply. It is very helpful.
I am aware of functional programming, but only because I have explored it myself (I am still at City College of San Francisco, and will not be transferring to a UC - hopefully Berkeley or UCSD - until this fall). Unfortunately, most community and junior colleges don't teach functional programming, because they are mostly concerned with cranking out code monkeys rather than real computer scientists or cognitive scientists (my degree is Cog Sci/Computationalism and Computational Engineering - or, the shorter name: Artifi...
An uplifting message as we enter the new year, quoted from Edge.org:
A few thoughts: when considering the heavy skepticism that the singularity hypothesis receives, it is important to remember that there is a much weaker hypothesis, highlighted here by Tegmark, that still has extremely counter-intuitive implications about our place in spacetime. One might call it the bottleneck hypothesis: the hypothesis that 21st-century humanity occupies a pivotal place in the evolution of the universe, simply because we may well be part of the small window of space and time during which it is decided whether Earth-originating life will colonize the universe or not.
The bottleneck hypothesis is weaker than the singularity hypothesis: we can be at the bottleneck even if smarter-than-human AI is impossible or extremely impractical, but if smarter-than-human AI is possible and reasonably practical, then we are surely at the bottleneck of the universe. The bottleneck hypothesis rests on less controversial science than the singularity hypothesis and is robust to different assumptions about what is feasible in an engineering sense (AI/no AI, ems/no ems, nuclear rockets/generation ships/advances in cryonics, etc.), so it might be accepted by a larger number of people.
Related is Hanson's "Dream Time" idea.