Perplexed comments on Less Wrong: Open Thread, September 2010 - Less Wrong
Over on a cognitive science blog named "Child's Play", there is an interesting discussion of theories of how humans learn language. These folks are not Bayesians (except for one commenter who mentions Solomonoff induction), so some bits of it may make you cringe, but the blogger does provide links to some interesting research PDFs.
Nonetheless, the puzzle they raise about humans also raises some interesting questions about AIs, whether of the Friendly (F) persuasion or unFriendly (uF). The questions are:
The questions are about a future which hasn't been written yet. So: "it depends".
If you are asking what is most likely, my answers would be: machines will probably learn languages; yes, there will be tests; prior knowledge-at-birth doesn't seem too important, since it can probably be picked up quickly enough; and "it depends":
Humans will probably tell machines what to do in a wide range of ways, including writing code and body language, but a fair bit of it will probably be through high-level languages, at least initially. Machines will probably tell humans what they want in a similar way, but with more use of animation and moving pictures.
There are possible general AI designs that have knowledge of human language when they are first run. What is this "permitted" you speak of? All true seed AIs have the ability to learn about human languages, since human language is a subset of the reality they will attempt to model, although it is not certain that they would desire to learn human language (if, say, destructive nanotech allows them to eat us quickly enough that manipulation is useless). "Object code" is a language.
I guess it wasn't clear why I raised the questions. I was thinking in terms of CEV, which, as I understand it, must include some dialog between an AI and the individual members of humanity, so that the AI can learn what it is that humanity wants.
Presumably, this dialog takes place in the native languages of the human beings involved. It is extremely important that the AI understand words and sentences appearing in this dialog in the same sense in which the human interlocutors understand them.
That is what I was getting at with my questions.
Nope. It must include the AI's modeling (many) humans under different conditions, including those where the "humans" are much smarter, know more, and suffer less from akrasia. It would be utterly counterproductive to create an AI which sat down with a human and asked em what ey wanted - the whole reason for the concept of a CEV is that humans can't articulate what we want.
Even if you and the AI mean exactly the same thing by all the words you use, words aren't sufficient to convey what we want. Again, this is why the CEV concept exists instead of handing the AI a laundry list of natural language desires.
Uhmm, how are the models generated and validated?