ata comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (202)
What?
Do you mean humanlike AIs? An AI capable of passing the Turing Test would of course need to understand human language well enough to act convincingly human (or at least do a really good imitation), but that's not necessarily a human-level AI: convincing people that you're human is a separate task from actually being human, and probably a much easier one. And human-level AIs in general needn't understand human language any better than any other sort of language by default.
Anyway, an AI being "programmed in human languages" seems to rest on the "programming = instructions given to a human servant" metaphor, and for that to work, you clearly first need to write the servant in something other than human language. Copying human psychology well enough that the AI actually understands human language as well as a human does, rather than merely imitating understanding well enough to carry on a text-based conversation, is no easy task, and is probably a lot harder than manually coding a simple goal system like paperclip maximization in a lower-level language. But that could still be an AGI.
Human level AI - an AGI design capable of matching the full intellectual capabilities of the best human scientists/engineers.
To get to human level in a practical timeframe, an AI will have to learn human knowledge; it will have to experience the equivalent of a standard 20-25 year education.
Learning human knowledge in practice requires learning human language as an early initial precursor step.
The software of a human mind - the memeset, or belief network - is essentially a complex human-language program.
For an AI to achieve human level, it will have to actually understand human language as well as a human does. This requires a good deal of algorithmic complexity from the human brain at the hardware level, and it implies the capability to parse and run human-language programs.
So you only need to program the infant brain in a programming language - the rest can be programmed in human language.
If it doesn't have the capacity to understand human level language then it's not an AGI - as that is the defining characteristic of the concept (by my/Turing's definition).
And thus by extension, the defining characteristic of a human-mind is human language capability.
EDIT: Why are you downvoting? Don't agree and don't want to comment?
Turing never intended his test to be adopted as "the defining characteristic of the concept [of AGI]" in anything like this fashion. "Human 'level' language" is also somewhat misleading insofar as it implies reaching a certain level of communication power, rather than adapting specifically to the kind of communication humans happen to have evolved - especially its quirks and weaknesses.
I disagree somewhat. It's difficult to know exactly what "he intended", but the opening of the paper that introduces the concept starts with "Can machines think?" and describes a reasonable language-based test: an intelligent machine is one that can convince us of its intelligence in plain human language.
I meant natural language, the understanding of which certainly does require a certain minimum level of cognitive capabilities.
We have a much greater understanding of what the "think" in "Can machines think?" means now. We have better tests than seeing if they can fake human language.
The test isn't about faking human language, it's about using language to probe another mind. Whales and elephants have brains built out of similar quantities of the same cortical circuits, but without a common language, stepping into their minds is very difficult.
What's a better test for AI than the Turing test?
Give it a series of fairly difficult and broad ranging tasks, none of which it has been created with existing specialised knowledge to handle.
Yes - the AIQ idea.
But how do you describe the task, and how does the AI learn about it? There's a massive gulf between AIs which can have the task/game described to them in human language and those that cannot. Whales and elephants fall in the latter category. An AI which can realistically self-improve to human level needs to be in the former category, like a human child.
You could define intelligence with an AIQ concept so abstract that it captures only learning from scratch without absorbing human knowledge, but that would be a different concept - it wouldn't represent practical capacity to intellectually self-improve in our world.
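For readers unfamiliar with the AIQ idea being discussed: it is (roughly, per Legg and Hutter) an agent's average reward across many environments, with simpler environments weighted more heavily. A minimal sketch, using a toy stand-in environment and agent of my own invention - the real measure uses program-length complexity over all computable environments:

```python
def aiq(agent, environments):
    """Toy universal-intelligence score: reward averaged over
    environments, each weighted by 2**-K, where K stands in for the
    environment's description complexity (simpler ones dominate)."""
    return sum(2.0 ** -k * run_episode(agent, env)
               for k, env in environments)

def run_episode(agent, env, steps=100):
    """Average per-step reward the agent earns in one environment."""
    obs, total = env.reset(), 0.0
    for _ in range(steps):
        obs, reward = env.step(agent(obs))
        total += reward
    return total / steps

class CopyEnv:
    """Hypothetical environment: rewards echoing the current observation."""
    def reset(self):
        self.obs = 0
        return self.obs
    def step(self, action):
        reward = 1.0 if action == self.obs else 0.0
        self.obs = 1 - self.obs
        return self.obs, reward

echo_agent = lambda obs: obs  # trivially optimal for CopyEnv
score = aiq(echo_agent, [(1, CopyEnv()), (3, CopyEnv())])  # 0.5 + 0.125
```

Note that nothing in this scoring scheme involves human language or human knowledge, which is exactly the distinction being argued over.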
Use something like Prolog to declare the environment and problem. If I knew how the AI would learn about it, I could build an AI already. And indeed, there are fields of machine learning for things such as Bayesian inference.
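As a concrete illustration of the Bayesian-inference point, here is a minimal grid posterior update for a coin's bias (the grid, prior, and counts are made-up numbers for illustration, not anything from the thread):

```python
def posterior(prior, heads, tails, grid):
    """Bayes' rule on a discrete grid of candidate biases:
    posterior is proportional to prior * likelihood, then normalised."""
    unnorm = [p * b ** heads * (1 - b) ** tails
              for p, b in zip(prior, grid)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

grid = [0.1, 0.3, 0.5, 0.7, 0.9]   # candidate coin biases
prior = [0.2] * 5                  # uniform prior over the grid
post = posterior(prior, heads=8, tails=2, grid=grid)
# mass concentrates on biases near the empirical frequency 0.8
```

This is the sort of mechanical belief-updating such fields provide; it says nothing by itself about how a task gets described to the AI in the first place.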
If you have to describe every potential problem to the AI in Prolog, how will it learn to become a computer scientist or quantum physicist?
Agreement that human children are more intelligent than whales or elephants is likely to be the closest we get to agreement on this subject. You would need to absorb a lot of the new knowledge from the various replies already provided to you here before progress is possible.
Unfortunately it seems we are not even fully in agreement about that. A Turing-style test is a test of knowledge; the AIQ-style test is a test of abstract intelligence.
An AIQ-type test which just measures abstract intelligence fails to differentiate between a feral Einstein and an educated Einstein.
Effective intelligence, perhaps call it wisdom, is some product of intelligence and knowledge. The difference between human minds and those of elephants or whales is that of knowledge.
My core point, to reiterate again: the defining characteristic of human minds is knowledge, not raw intelligence.
Possibly relevant: AIXI-style IQ tests.