Perplexed comments on Less Wrong: Open Thread, September 2010 - Less Wrong

3 Post author: matt 01 September 2010 01:40AM




Comment author: LucasSloan 08 September 2010 02:46:37PM 0 points

There are possible general AI designs that have knowledge of human language when they are first run. What is this "permitted" you speak of? All true seed AIs have the ability to learn about human languages, as human language is a subset of the reality they will attempt to model, although it is not certain that they would desire to learn human language (if, say, destructive nanotech allows them to eat us quickly enough that manipulation is useless). "Object code" is a language.

Comment author: Perplexed 08 September 2010 03:46:26PM 1 point

I guess it wasn't clear why I raised the questions. I was thinking in terms of CEV, which, as I understand it, must include some dialog between an AI and the individual members of Humanity, so that the AI can learn what it is that Humanity wants.

Presumably, this dialog takes place in the native languages of the human beings involved. It is extremely important that the AI understand words and sentences appearing in this dialog in the same sense in which the human interlocutors understand them.

That is what I was getting at with my questions.

Comment author: LucasSloan 08 September 2010 08:31:00PM 2 points

must include some dialog between an AI and the individual members of Humanity, so that the AI can learn what it is that Humanity wants.

Nope. It must include the AI's modeling (many) humans under different conditions, including those where the "humans" are much smarter, know more, and suffer less from akrasia. It would be utterly counterproductive to create an AI which sat down with a human and asked em what ey wanted - the whole reason for the concept of a CEV is that humans can't articulate what we want.

It is extremely important that the AI understand words and sentences appearing in this dialog in the same sense in which the human interlocutors understand them.

Even if you and the AI mean exactly the same thing by all the words you use, words aren't sufficient to convey what we want. Again, this is why the CEV concept exists instead of handing the AI a laundry list of natural language desires.

Comment author: Perplexed 08 September 2010 08:39:18PM 2 points

... so that the AI can learn what it is that Humanity wants.

Nope. It must include the AIs modeling (many) humans under different conditions ...

Uhmm, how are the models generated and validated?