Less Wrong is a community blog devoted to refining the art of human rationality.

whpearson comments on Max Tegmark on our place in history: "We're Not Insignificant After All" - Less Wrong

18 [deleted] 04 January 2010 12:02AM




Comment author: MatthewB 04 January 2010 08:54:46PM 0 points

Could you define sub-human AI, please?

It seems to me that we already have all manner of sub-human AI: the systems that handle telephone traffic, data mining, and air-traffic control; those used by government and intelligence services, the military, and universities with AI programs; and zoos with breeding programs (which sequence the genomes of endangered animals to find the best mate for each animal), etc.

Are these types of AI far too primitive to even be considered sub-human, in your opinion?

Comment author: whpearson 04 January 2010 11:11:30PM 0 points

Are these types of AI far too primitive to even be considered sub-human, in your opinion?

Not exactly too primitive, but of the wrong structure. Are you familiar with functional-programming type notation? An offline learning system can be considered a curried function of type

classify :: Corpus -> (a -> b)

where a and b are the input and output types, and Corpus is the training data. Consider (for simplicity) a chess-playing program that learns from previous games:

Corpus -> (ChessGameState -> ChessMove)

or a data-mining tool set up for finding terrorists:

Corpus -> ((Passport, FlightItinerary) -> Float)

where the Float is the probability that the person travelling is a terrorist, based on the passport presented and the itinerary.

They can be very good at their jobs, but they are predictable: you know their types. What worries me is learning systems that don't have well-defined input and output types over their lifetimes.
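The curried view above can be sketched as runnable Haskell. This is a toy, not anything from the comment: the 1-nearest-neighbour classify and its Int/String instantiation are invented for illustration. The point is that once training has consumed the Corpus, the resulting function's input and output types are fixed for the rest of its life.

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- A toy corpus: labelled training pairs.
type Corpus a b = [(a, b)]

-- Hypothetical offline learner: 1-nearest-neighbour classification.
-- Training consumes the corpus once and returns a plain (a -> b)
-- function whose types never change afterwards.
classify :: Corpus Int String -> (Int -> String)
classify corpus = \x ->
  snd (minimumBy (comparing (\(a, _) -> abs (a - x))) corpus)

main :: IO ()
main = do
  let f = classify [(1, "low"), (10, "high")]
  putStrLn (f 2)   -- "low"  (nearest training input is 1)
  putStrLn (f 9)   -- "high" (nearest training input is 10)
```

However good f gets at its job, its type tells you in advance everything it can ever accept or emit.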

Consider the humble PC: it doesn't know how many monitors it is connected to or what will be plugged into its USB sockets. If you wanted to create a system that could learn to control it, that system would need to map from any type to any type, depending on what was connected.* I think humans and animals are designed to be this kind of system, as our brains have been selected to cope with many different kinds of body with minimal evolutionary change. It is what allows us to adopt prosthetics and cope with bodily changes over a lifetime (growth, and loss of limbs or senses). These systems are a lot more flexible: they can learn things quickly by restricting their search spaces, while still retaining a wide range of possible actions.
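One way to sketch such an open-ended system, assuming Haskell's Data.Dynamic as a stand-in for "types discovered at runtime" (the device names and handler are invented for illustration):

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- Hypothetical handler for a system whose input type is not fixed at
-- compile time: "devices" plugged in later deliver values whose types
-- are only discovered when they arrive.
handle :: Dynamic -> String
handle d
  | Just n <- (fromDynamic d :: Maybe Int)    = "keypad: " ++ show n
  | Just s <- (fromDynamic d :: Maybe String) = "keyboard: " ++ s
  | otherwise                                 = "unrecognised device"

main :: IO ()
main = do
  putStrLn (handle (toDyn (3 :: Int)))      -- "keypad: 3"
  putStrLn (handle (toDyn "hello"))         -- "keyboard: hello"
  putStrLn (handle (toDyn (2.5 :: Double))) -- "unrecognised device"
```

Unlike the curried classifier, handle's useful behaviour cannot be read off a static type signature; what it copes with depends on what happens to be connected.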

There are further considerations for an intelligence about the type of function that determines how the corpus/memory shapes the current input/output mapping, but that is another long reply.

*You can represent a mapping from any type to any other type as a large integer in a finite system. But with the type notation I am trying to indicate what the system is capable of learning at any one point; we don't search the whole space, for computational-resource reasons.

Comment author: MatthewB 05 January 2010 01:43:22AM 2 points

Thanks for the reply. It is very helpful.

I am aware of functional programming, but only from having explored it myself (I am still at City College of San Francisco, and will not be transferring to a UC (hopefully Berkeley or UCSD) until this fall). Unfortunately, most community and junior colleges don't teach functional programming, because they are mostly concerned with cranking out code monkeys rather than real computer scientists or cognitive scientists. (My degree is Cog Sci/Computationalism and Computational Engineering, or by its shorter name, Artificial Intelligence; at least, that is what most of the people in the program are studying, especially at Berkeley and UCSD, the two places I wish to go.)

So, is the learning-type system you are referring to not sub-human-equivalent because it has no random or stochastic processes?

Or, to be a little clearer: they are not sub-human-equivalent because they are highly deterministic and (as you put it) predictable.

I get what you mean about human body-type adaptation. We still carry the DNA for tails of all types (from reptilian to prehensile), and for other deprecated body plans. Thus, a human-equivalent AI would need to be flexible enough to adapt to changes in its body plan and tools (at least, that is what I am getting from this).

In another post (which I cannot find, as I need to learn how to search my old posts better), I propose that computers are another form of intelligence, evolving with humans as the agents of selection and mutation; thus they have had a vastly different evolutionary pathway than biological intelligence. I came up with this after hearing Eliezer Yudkowsky speak at one of the Singularity Summits (and maybe at Convergence 08; I cannot recall if he was there or not). He talks about mind space, and how humans occupy only a point in it, while the potential mind space is huge (maybe even unbounded; I hope he will correct me if I have misunderstood this).