Thinking aloud:

Humans are examples of general intelligence - the only examples we're sure of. Some humans have various degrees of autism (low-level versions are quite common in the circles I've moved in), impairing their social skills. Mild autists nevertheless remain general intelligences, capable of demonstrating strong cross-domain optimisation. Psychology is full of other examples of mental pathologies that impair certain skills but nevertheless leave their sufferers as full-fledged general intelligences. This general intelligence is not enough, however, to overcome their impairments.

Watson triumphed on Jeopardy. AI scientists in previous decades would have concluded that to do so, a general intelligence would have been needed. But that was not the case at all - Watson is blatantly not a general intelligence. Big data and clever algorithms were all that were needed. Computers are demonstrating more and more skills, besting humans in more and more domains - but still no sign of general intelligence. I've recently developed the suspicion that the Turing test (comparing AI with a standard human) could get passed by a narrow AI finely tuned to that task.

The general thread is that the link between narrow skills and general intelligence may not be as clear as we sometimes think. It may be that narrow skills are sufficiently diverse and unique that a mid-level general intelligence may not be able to develop them to a large extent. Or, put another way, an above-human social intelligence may not be able to control a robot body or do decent image recognition. A super-intelligence likely could: ultimately, general intelligence includes the specific skills. But this "ultimately" may take a long time to come.

So the questions I'm wondering about are:

  1. How likely is it that a general intelligence, above human in some domain not related to AI development, will acquire high level skills in unrelated areas?
  2. By building high-performance narrow AIs, are we making it much easier for such an intelligence to develop such skills, by co-opting or copying these programs?

 


Are self-training narrow AIs even a going concern yet? DeepQA can update its knowledge base in situ, but must be instructed to do so. Extracting syntactic and semantic information from a corpus is the easy part; figuring out what that corpus should include is still an open problem, requiring significant human curation. I don't think anyone's solved the problem of how an AI should evaluate whether to update its knowledge base with a new piece of information or not. In the Watson case, an iterative process would be something like "add new information -> re-evaluate on gold standard question set -> decide whether to keep new information", but Watson's fitness function is tied to that question set. It's not clear to me how an AI with a domain-specific fitness function would acquire any knowledge unrelated to improving the accuracy of its fitness function -- though that says more about the fitness functions that humans have come up with so far than it does about AGI.
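
To make the gating idea concrete, here is a minimal Python sketch of the kind of iterative loop described above. The `KnowledgeBase` interface and `evaluate_accuracy` function are hypothetical stand-ins, not DeepQA's actual API:

```python
# Hypothetical gated knowledge-base update: a new fact is kept only if it does
# not reduce accuracy on a fixed gold-standard question set.

def gated_kb_update(kb, candidate_facts, gold_questions, evaluate_accuracy):
    """kb: mutable knowledge base with add()/remove();
    candidate_facts: iterable of new facts to consider;
    gold_questions: the fixed benchmark the fitness function is tied to;
    evaluate_accuracy(kb, questions) -> float in [0, 1]."""
    baseline = evaluate_accuracy(kb, gold_questions)
    for fact in candidate_facts:
        kb.add(fact)
        score = evaluate_accuracy(kb, gold_questions)
        if score >= baseline:
            baseline = score       # keep the fact and raise the bar
        else:
            kb.remove(fact)        # revert: the fact hurt the fitness function
    return kb
```

The sketch makes the worry visible: the only signal in the loop is accuracy on the gold-standard set, so a fact unrelated to that set can never earn its way into the knowledge base.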

It's certainly the case that an above-human general intelligence could copy the algorithms and models behind a narrow AI, but then, it could just as easily copy the algorithms and models that we use to target missiles. I don't think the question "is targeting software narrow AI" is a useful one; targeting software is a tool, just as (e.g.) pharmaceutical candidate structure generation software is a tool, and an AGI that can recognize the utility of a tool should be expected to use it if its fitness function selects a course of action that includes that tool. Recognition of utility is still the hard part.

[anonymous]

Are self-training narrow AIs even a going concern yet?

Is what Google does for search results, based in part on what you do and don't do, considered self-training?

What I mean is that two people won't necessarily see the exact same Google results for some queries if they're both signed into Google, and in some cases even if they aren't. Article: http://themetaq.com/articles/reasons-your-google-search-results-are-different-than-mine

An entirely separate question is whether or not Google is a narrow AI, but I figured I should check one thing at a time.

I wouldn't call Google's search personalization "self-training" because the user is responsible for adding new data points to his or her own model; it's the same online algorithm it's always been, just tailored separately to billions of individual users rather than to the set of billions of users as a whole. The set of links that a user has clicked on through Google searches is updated every time the user clicks a new link, and the algorithm uses this to tweak the ordering of presented search results, but AFAIK the algorithm has no way to evaluate whether the model update actually brought the ordering closer to the user's preferred ordering unless the user tells it so by clicking on one of the results. It could compare the ordering it did present to the ordering it would have presented if some set of data points wasn't in the model, but then it would have to have some heuristic for which points to drop for cross-validation.
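
As an illustration of "online algorithm" in this sense, here is a toy re-ranker (not Google's actual system; the domain-counting model is purely illustrative) where the user's clicks are the only feedback the model ever receives:

```python
# Toy per-user personalisation: boost results from domains the user has
# clicked before. The model only changes when the user supplies a new click.
from collections import Counter
from urllib.parse import urlparse

class PersonalisedRanker:
    def __init__(self):
        self.click_counts = Counter()        # per-user model: domain -> clicks

    def record_click(self, url):
        self.click_counts[urlparse(url).netloc] += 1

    def rerank(self, results):
        # results: list of (url, base_relevance) pairs from the generic ranker
        return sorted(
            results,
            key=lambda r: (self.click_counts[urlparse(r[0]).netloc], r[1]),
            reverse=True,
        )
```

Nothing in the class can tell whether a model update actually improved the ordering; it just waits for the next click.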

The way I see it, the difference between an online algorithm and a self-training AI is that the latter would not only need such a heuristic -- let's call it "knowledge base evaluation" -- it would also need to be able to evaluate the fitness of novel knowledge base evaluation heuristics. (I'm torn as to whether that goalpost should also include "can generate novel KBE heuristics"; I'll have to think about that a while longer.) Even so, as long as the user dictates which points the algorithm can even consider adding to its KB, the user is acting as a gatekeeper on what knowledge the algorithm can acquire.
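
A minimal sketch of the distinction being proposed, with all names illustrative: an online learner only updates its object-level model, whereas the "self-training" system imagined here also scores candidate knowledge-base-evaluation (KBE) heuristics and lets the winner gate what enters the knowledge base:

```python
# Illustrative two-level loop: meta-level selection of a KBE heuristic,
# then object-level gating of new facts by that heuristic.

def self_training_step(kb, candidate_facts, kbe_heuristics, score_heuristic):
    """kbe_heuristics: candidate functions (kb, fact) -> bool;
    score_heuristic: assigns a fitness to each heuristic, e.g. on held-out data."""
    best_kbe = max(kbe_heuristics, key=score_heuristic)    # meta-level evaluation
    for fact in candidate_facts:
        if best_kbe(kb, fact):                             # object-level gating
            kb.add(fact)
    return kb
```

Whether this counts as generating novel heuristics or merely selecting among supplied ones is exactly the goalpost question raised in the parenthetical above.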

The way I see it, the difference between an online algorithm and a self-training AI is that the latter would not only need such a heuristic -- let's call it "knowledge base evaluation" -- it would also need to be able to evaluate the fitness of novel knowledge base evaluation heuristics.

On reflection, I'm now contradicting my original statement; the above is a stab toward an algorithmic notion of "self-training" that is orthogonal to how restricted an algorithm's training input set is, or who is restricting it, or how. Using this half-formed notion, I observe that Google's ranking algorithm is AFAIK not self-training, and is also subject to a severely restricted input set. I apologize for any confusion.

Question #1 isn't assessable.

Question 2 is an absolute yes. An AI comes pre-packaged with a brain-computer interface; the computer is the brain. A human operator needs to eat, sleep, and read the system log. An AI can pipe the output of any process directly into its brain. The difficulty is making the connection from its abstract thinking process to the specialized modules; what we would call "practice." This is all before it learns to rewrite its own brain source code directly.

I've recently developed the suspicion that the Turing test (comparing AI with a standard human) could get passed by a narrow AI finely tuned to that task.

This implies that there must be some way to distinguish a human mind from the AI, besides the Turing Test. That is, there must be some hidden property that fulfills the following criteria:

  1. Human minds possess it, narrow-focus AIs do not.
  2. The property produces observable effects, and its existence could be inferred from observing these effects with a high degree of sensitivity and specificity.
  3. The Turing Test does not already take these observable effects into account.

So, (a) what is this property whose existence you are proposing, and (b) how would we test for its presence and absence?

The most popular answers are "consciousness" and "I don't know", but I find these unsatisfactory. Firstly, no one seems to have a definition of "consciousness" that isn't circular (i.e., "you know, it's that thing that humans have but AIs don't") or a priori unfalsifiable ("it's your immortal soul!"). Secondly, if you can't test for the presence or absence of a thing, then you might as well ignore it, since, as far as you know, it doesn't actually do anything.

The slightly less popular answers to (b) are all along the lines of, "let's make the agent perform some specific creative task that some humans are good at, such as composing a poem, painting a picture, dancing a tango, etc.". Unfortunately, such tests would produce too many false negatives. I personally cannot do any of the things I listed above, and yet I'm pretty sure I'm human. Or am I? How would you know?

This implies that there must be some way to distinguish a human mind from the AI, besides the Turing Test.

Maybe the AI lacks the ability to learn any skills in a non-linguistic way - it could never recognise videos, merely linguistic descriptions of them. Maybe it's incapable of managing actual humans (though it can spout some half-decent management theory if pressed linguistically).

I'd say a general AI should be tested using some test that it wasn't optimised for/trained on.

Maybe the AI lacks the ability to learn any skills in a non-linguistic way - it could never recognise videos, merely linguistic descriptions of them. Maybe it's incapable of managing actual humans (though it can spout some half-decent management theory if pressed linguistically).

Once again, these tests provide too many false negatives. Blind people cannot recognize videos, either (though, oddly enough, existing computer vision systems can); even sighted people can have trouble telling what's going on, if the video is in a foreign language and depicts a foreign culture. And few people are capable of managing humans; I know I personally can't do it, for example.

I'd say a general AI should be tested using some test that it wasn't optimised for/trained on.

How would you know, ahead of time, what functions the AI was optimized to handle, or even whether you were talking to an AI in the first place? If you knew the answer to that, you wouldn't need any tests, Turing or otherwise; you'd already have the answer.

In general, it sounds to me like your test is over-fitting. It would basically force you to treat anyone as non-human, unless you could see them in person and verify that they were made of meat just like you and me. Well, actually, not me. You only have my word for it that I'm human, and you've never seen me watching a cat video, so I could very well be an AI.

In general, it sounds to me like your test is over-fitting. It would basically force you to treat anyone as non-human

It holds AIs to a higher standard, yes. But one of the points of the Turing test was not that any intelligent computer could pass it, but that any computer that passed it was intelligent.

[Shmi]

Some psychopaths can mimic empathy without having it. Many more people can fake a given morality while having none. Does this count as acquiring a skill?

Pretty much yes.

[Shmi]

Does this count as "acquiring high level skills in unrelated areas"?

Yes, certainly! Many types of brains with certain defects can use other skills to overcome them. The odd thing is that this doesn't always work, even in brains with high general intelligence.

Shouldn't the latter part imply that the intelligence isn't that general?

[dxu]

Yes, certainly! Many types of brains with certain defects can use other skills to overcome them. The odd thing is that this doesn't always work, even in brains with high general intelligence.

Shouldn't the latter part imply that the intelligence isn't that general?

Alternatively, it could imply that the intelligence isn't that high.

My point is that general intelligence (especially at the lower level) may not be a very efficient substitute for narrow intelligences in many domains.

It seems that human intelligence is dominated by modules that run unconsciously and subconsciously. These modules deliver pretty highly processed results to the consciousness, which naively thinks that it did all that work! The evidence for this is all the brain injury and surgery work that shows a long list of skills which are impacted by narrow injuries to the brain.

Most spectacular in my mind are the aphasias. One can separately lose the ability to speak coherently and/or to interpret spoken language. Spoken-language defects can be oddly narrow things like screwing up word order, losing large tranches of vocabulary, or losing numbers. Aphasia is most spectacular to me because 1) I have witnessed it in my mother, who within an hour of regaining consciousness after a stroke turned to me in frustration and announced "I'm aphasic", and 2) the naive version of me couldn't imagine that the ability to create sentences could be separate from "what makes me me", when what I saw with my mother clearly suggests to me that it can be.

And indeed, when I attempt to introspect while talking, I can't identify where the words are coming from (that is, I conclude they are coming from machinery of which I am not conscious). And when I attempt to introspect while listening, I cannot even identify the moment, on hearing a spoken word, at which its meaning becomes clear to me: my conscious sensation is that the sound comprising the word and my knowing which word it is are both presented to my consciousness simultaneously.

Having said all that, and having read Dennett, I would expect that an AI which could pass a reasonably high-level Turing test would have all sorts of modules working unconsciously, and that whatever organizing routine it used for consciousness would be a relatively minor part of trying to pass as human.

I expect AI will have the ability to create new functional modules and add them to its "brain." This may be somewhat analogous to a human pushing a repeated action down into the cerebellum, where it is then much more automatic and unconscious than it is while still new. Perhaps there will be ways when building an AI to limit what can be automated in this way, but presumably those limits would not hold up well to the AI self-modifying, if it gets to that point. Self-modification may not be that simple; humanity is just barely getting the hang of it in a very limited way around now.

The modularity of the mind amazed me the most when I first read about the function of the visual cortex. To think that there are neurons that represent lines of different angles and lengths in your field of vision, and similar lines that are moving in different directions, etc., and that you somehow get a complete visual experience is mind-boggling, even if obvious afterwards. Your understanding of this text is those individual cell thingies firing in a meticulously connected harmony.

Humans are examples of general intelligence

I think of humans as a conglomeration of various narrow modules. Quite often, our skills in one domain do not transfer to other domains, even when those domains are isomorphic - see the Wason selection task. We have many clever algorithms, both for doing tasks and for deciding which algorithm to use. Autism and many other illnesses are the result of some modules being damaged while others are left intact.

If humans are part of what we define as general intelligence, then that's at least one path to it.

That's a good point, actually... are we (the humans) examples of general intelligence? I personally know that there are many problem domains which I am unable to even pronounce, let alone comprehend; and yet there are many other humans who are happily solving problems in these domains even as we speak. What does "general intelligence" mean, anyway?

[anonymous]

AI scientists in previous decades would have concluded that to do so, a general intelligence would have been needed. But that was not the case at all - Watson is blatantly not a general intelligence. Big data and clever algorithms were all that were needed.

Hindsight is a wonderful thing - at the time, it was probably completely reasonable to imagine that a general AI would be the only way to solve that problem. As Arthur C. Clarke said: “Any sufficiently advanced technology is indistinguishable from magic.”

This is perhaps a little extreme in this instance, but as techniques and narrow AI solvers move into more areas and complete a large number of specific tasks, the ‘General AI’ line will become a moving target as it has in the past.

A robot can pick up a ball and throw it to Fred, who is wearing a red jacket - 20 years ago that would have been truly amazing (and it still is), but as you say, it is simply a bunch of algorithms and data.

Possibly the only way to measure General AI (at least among AI researchers) would be to give it NO data, and let it go from there.

I'm not sure that's really fair; humans start with various predispositions which probably amount to some meaningful data by birth (I don't think it's much of a stretch to posit this: plenty of other animals which are born more developed certainly seem to start with a significant amount of hard-coded data, so it seems reasonable to suppose that humans would have some), and we count humans as general intelligences.