
TheAncientGeek comments on Learning values versus learning knowledge - Less Wrong Discussion

5 points | Post author: Stuart_Armstrong | 14 September 2016 01:42PM




Comment author: TheAncientGeek | 19 September 2016 04:58:19PM | 0 points

> They are entitled to assume they could be applied, not necessarily that they would be. At some point, there's going to have to be something that tells the AI to, in effect, "use the knowledge and definitions in your knowledge base to honestly do X [X = some NL objective]". This gap may be easy to bridge, or hard; no one's suggested any way of bridging it so far.

There's only a gap if you start from the assumption that a compartmentalised UF is in some way easy, natural or preferable. However, your side of the debate has never shown that.

> At some point, there's going to have to be something that tells the AI to, in effect, "use the knowledge and definitions in your knowledge base to honestly do X [X = some NL objective]".

No... you don't have to show a fan how to make a whirring sound. Using updatable knowledge to specify goals is a natural consequence of some designs.

> It might be possible; it might be trivial.

You are assuming it is difficult, with little evidence.

> But there's no evidence in that direction so far, and the designs that people have actually proposed have been disastrous.

Designs that bridge a gap, or designs that intrinsically don't have one?

> I'll work at bridging this gap, and see if I can solve it to some level of approximation.

Why not examine the assumption that there has to be a gap?

Comment author: Stuart_Armstrong | 19 September 2016 06:03:23PM | 1 point

> There's only a gap if you start from the assumption that a compartmentalised UF is in some way easy, natural or preferable.

? Of course there's a gap. The AI doesn't start with full NL understanding. So we have to write the AI's goals before the AI understands what the symbols mean.

Even if the AI started with full NL understanding, we still would have to somehow program it to follow our NL instructions. And we can't do that initial programming using NL, of course.

Comment author: TheAncientGeek | 22 September 2016 05:03:02PM | 0 points

> Of course there's a gap. The AI doesn't start with full NL understanding.

Since you are talking in terms of a general counterargument, I don't think you can appeal to a specific architecture.

> So we have to write the AI's goals before the AI understands what the symbols mean.

Which would be a problem if it is designed to attempt to execute NL instructions without checking whether it understands them... which is a bit clown-car-ish. An AI that is capable of learning NL as it goes along is an AI that has a general goal to get language right. Why assume it would not care about one specific sentence?

> Even if the AI started with full NL understanding, we still would have to somehow program it to follow our NL instructions.

Y-e-es? Why assume "it needs to follow instructions" equates to "it would simplify the instructions it's following", rather than something else?