Eliezer_Yudkowsky comments on [LINK]s: Who says Watson is only a narrow AI? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (26)
I say Watson is only a narrow AI.
:)
But isn't it getting too wide too quickly?
Anyway, I am guessing that by your definition the difference between a narrow and a general AI is not the number of problem-solving or reasoning tasks where it is as good as or better than humans (even if that's the vast majority of such tasks), but whether it has a "general, flexible learning ability that would let them tackle entirely new domains", i.e. whether it is vastly better than an average single human being, who generally sucks at adapting to new domains.
What, four?
Sigh. Charitable reading is not a strong suit of this place. Not four. How about 100? 1000? Would that be enough?
Sorry, the silliness was too tempting. But however you want to count the number of things that go into what Watson does, it really is a small portion of the things humans can do.
What exponent is on the number of things that humans can do, generalized to the degree of "drive cars"?
Hm, tough question. One way to get a quick lower bound might be to ask: "what's something that uses the same general skills as driving cars, but is very different, and in how many ways is it different?" So if we use the same spatial skills to knit, and we say knitting differs from driving a car in about 10 ways (where what we consider a "way" sets our scale), then there are at least 2^10 things that use the same skills as, but are different from, knitting and driving cars (among other bad assumptions, this assumes that two things being alike in some way is binary and transitive). If there are 10 domains (everything is approximately 10) like "spatial skills and spatial planning, basic motor coordination", then the lower bound would be more like 2^100.
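The counting argument above can be sketched in a few lines. All of the numbers here (10 "ways", 10 domains) are the hypothetical placeholders from the comment, not measured quantities:

```python
# Back-of-envelope lower bound on "things humans can do", following the
# comment's (admittedly shaky) assumption that two tasks in one skill
# domain can differ in a fixed number of binary "ways".

ways_per_domain = 10   # hypothetical: knitting vs. driving differ in ~10 ways
num_domains = 10       # hypothetical: "everything is approximately 10"

# Within one domain, each combination of ways is a distinct task.
tasks_in_one_domain = 2 ** ways_per_domain            # the 2^10 bound

# Treating the domains as independent multiplies the bounds together.
tasks_across_domains = tasks_in_one_domain ** num_domains  # the 2^100 bound

print(tasks_in_one_domain)                       # 1024
print(tasks_across_domains == 2 ** 100)          # True
```

The exponential blow-up is the whole point: even crude per-domain counts compound into an astronomical task space, which is why Watson's repertoire looks small by comparison.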
If all of the things that humans can do are required for knitting and driving cars, then there are two things that humans can do, generalized to that level. If an AI could learn the hard way to drive and to knit, it would be able to do everything a human could do. I estimate that controlling vehicles is about four different skills by that definition (road vehicles, fixed-wing, rotary-wing, and reaction-mass spacecraft), but knitting, crocheting, and sewing are the same skill, and there are probably only two or three different skills that cover all of athletics (an AI that could learn to play football would probably be able to learn curling, but it might not be able to learn gymnastics or swimming).
I think that our existing AIs haven't demonstrated that they can learn to do anything the hard way. I could be wrong, because I don't have any deep insight into whether existing AIs learn or are created with full knowledge.
Well that's settled then.
(Incidentally, I think everyone agrees it's narrow now, but it does make "joining up" narrow AI sound more plausible than before.)