Perplexed comments on Less Wrong: Open Thread, September 2010 - Less Wrong

3 Post author: matt 01 September 2010 01:40AM


Comment author: Perplexed 03 September 2010 04:05:53PM *  1 point [-]

Over on a cognitive science blog named "Child's Play", there is an interesting discussion of theories of how humans learn language. These folks are not Bayesians (except for one commenter who mentions Solomonoff induction), so some bits of it may make you cringe, but the blogger does provide links to some interesting research PDFs.

Nonetheless, the question that puzzles them about humans raises some interesting questions about AIs, whether they be of the F persuasion or practicing uFs. The questions are:

  • Are these AIs born speaking English, Chinese, Arabic, Hindi, etc., or do they have to learn these languages?
  • If they learn these languages, do they have to pass some kind of language proficiency test before they are permitted to use them?
  • Are they born with any built-in language capability or language-learning capability at all?
  • Are the "objective functions" with which we seek to leash AIs expressed in some kind of language, or in something more like "object code"?
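The distinction in the last bullet can be made concrete with a toy sketch (all names and numbers here are hypothetical illustrations, not anything from an actual AI design): an objective handed over as executable code is unambiguous but opaque, while one stated in natural language is readable but leaves interpretation to the machine.

```python
# Toy contrast for the last bullet: an objective expressed directly in
# code versus one expressed in natural language. Hypothetical example.

def coded_objective(state: dict) -> float:
    """Objective in 'object code' form: the scoring rule is fully
    specified, but a human reader must reverse-engineer the intent."""
    return 0.7 * state.get("paperclips", 0) - 0.3 * state.get("energy_used", 0)

# Objective in language form: the intent is legible, but the mapping
# from words to an actual scoring rule is left to the AI's interpretation.
verbal_objective = "Make as many paperclips as possible without wasting energy."

state = {"paperclips": 10, "energy_used": 4}
print(coded_objective(state))  # 0.7*10 - 0.3*4 = 5.8
```

The two forms fail differently: the coded version can be precisely wrong (the weights may not capture what we meant), while the verbal version can be wrong in whatever way the machine's interpretation diverges from ours.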
Comment author: timtyler 08 September 2010 04:09:54PM *  0 points [-]

The questions are about a future which hasn't been written yet. So: "it depends".

If you are asking what is most likely, my answers would be: machines will probably learn languages, yes there will be tests, prior knowledge-at-birth doesn't seem too important - since it can probably be picked up quickly enough - and "it depends":

Humans will probably tell machines what to do in a wide range of ways - including writing code and body language - but a fair bit of it will probably be through high-level languages - at least initially. Machines will probably tell humans what they want in a similar way - but with more use of animation and moving pictures.

Comment author: LucasSloan 08 September 2010 02:46:37PM 0 points [-]

There are possible general AI designs that have knowledge of human language when they are first run. What is this "permitted" you speak of? All true seed AIs have the ability to learn about human languages, as human language is a subset of the reality they will attempt to model, although it is not certain that they would desire to learn human language (if, say, destructive nanotech allows them to eat us quickly enough that manipulation is useless). "Object code" is a language.

Comment author: Perplexed 08 September 2010 03:46:26PM 1 point [-]

I guess it wasn't clear why I raised the questions. I was thinking in terms of CEV which, as I understand it, must include some dialog between an AI and the individual members of Humanity, so that the AI can learn what it is that Humanity wants.

Presumably, this dialog takes place in the native languages of the human beings involved. It is extremely important that the AI understand words and sentences appearing in this dialog in the same sense in which the human interlocutors understand them.

That is what I was getting at with my questions.

Comment author: LucasSloan 08 September 2010 08:31:00PM 2 points [-]

must include some dialog between an AI and the individual members of Humanity, so that the AI can learn what it is that Humanity wants.

Nope. It must include the AI's modeling (many) humans under different conditions, including those where the "humans" are much smarter, know more and suffered less from akrasia. It would be utterly counterproductive to create an AI which sat down with a human and asked em what ey wanted - the whole reason for the concept of a CEV is that humans can't articulate what we want.

It is extremely important that the AI understand words and sentences appearing in this dialog in the same sense in which the human interlocutors understand them.

Even if you and the AI mean exactly the same thing by all the words you use, words aren't sufficient to convey what we want. Again, this is why the CEV concept exists instead of handing the AI a laundry list of natural language desires.

Comment author: Perplexed 08 September 2010 08:39:18PM 2 points [-]

... so that the AI can learn what it is that Humanity wants.

Nope. It must include the AI's modeling (many) humans under different conditions ...

Uhmm, how are the models generated/validated?