Houshalter comments on Open thread, Oct. 10 - Oct. 16, 2016 - Less Wrong

3 Post author: MrMind 10 October 2016 07:00AM


Comment author: turchin 10 October 2016 02:28:19PM 3 points [-]

Good point, but my question was about what we can do to raise chances that it will be friendly AI.

Comment author: Lumifer 10 October 2016 02:48:06PM *  -2 points [-]

Nothing, because we still don't know what a friendly AI is.

Comment author: Houshalter 10 October 2016 08:15:41PM 1 point [-]

Friendly AI is an AI which maximizes human values. We know what it is; we just don't know how to build one. Yet, anyway.

Comment author: Lumifer 11 October 2016 06:38:33PM 2 points [-]

We don't know what an AI which maximizes human values is because we don't know what human values are at the necessary level of precision. Not to mention the assumption that the AI will be a maximizer and that values can be maximized.

Comment author: Houshalter 12 October 2016 07:34:44AM 1 point [-]

Who says we need to hardcode human values, though? Any reasonable solution will involve an AI that learns what human values are. Or some other solution to the control problem that yields AIs that don't want to harm or defy their creators.

Comment author: Lumifer 12 October 2016 04:35:05PM 1 point [-]

But if you don't know what human values are, how can you be sure that the AI will learn them correctly?

So you make an AI and tell it: "Go forth and learn human values!" It goes and in a while comes back and says "Behold, I have learned them". How do you know this is true?

Comment author: Houshalter 13 October 2016 04:13:14AM 0 points [-]

If I train a neural network to recognize dogs, I have no way of knowing if it learned correctly. I can't look at the weights and see if they are correct dog-recognizing weights and not something else. But I can trust the process of training and validation to show that the network has learned to recognize what dogs look like.

It's a similar principle with learning human values. Of course it's more complicated than just feeding it images of dogs, but the principle of letting AIs learn models from real world data is the important part.
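The train-and-validate principle described above can be sketched concretely. This is an illustrative toy, not a real vision model: the "dog detector" is a perceptron on two made-up features, and all data is synthetic. The point it demonstrates is that trust comes from held-out performance, not from inspecting the learned weights.

```python
# Toy demonstration: we can't read "correct dog-recognizing weights" off a
# model, but we can hold out data it never trained on and measure accuracy.
# Features and labels are synthetic assumptions for illustration only.
import random

random.seed(0)

def make_example():
    # Label 1 ("dog") if the two features sum past a threshold.
    x = (random.uniform(0, 1), random.uniform(0, 1))
    y = 1 if x[0] + x[1] > 1.0 else 0
    return x, y

data = [make_example() for _ in range(400)]
train, held_out = data[:300], data[300:]  # held-out set is never trained on

# Train a perceptron; the learned weights stay opaque to us.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):
    for x, y in train:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = y - pred
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

# Trust comes from held-out accuracy, not from inspecting w and b.
correct = sum(1 for x, y in held_out
              if (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y)
accuracy = correct / len(held_out)
print(f"held-out accuracy: {accuracy:.2f}")
```

The open question in the thread is precisely whether any analogous held-out test exists for "learned human values correctly."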

Comment author: Lumifer 13 October 2016 02:22:08PM *  0 points [-]

If I train a neural network to recognize dogs, I have no way of knowing if it learned correctly.

Of course you do. You test it. You show it a lot of images (that it hasn't seen before) of dogs and not-dogs and check how good it is at differentiating them.

How would that process work for an AI and human values?

the principle of letting AIs learn models from real world data

Right, human values: “A man's greatest pleasure is to defeat his enemies, to drive them before him, to take from them that which they possessed, to see those whom they cherished in tears, to ride their horses, and to hold their wives and daughters in his arms.”

Comment author: Houshalter 14 October 2016 06:23:52AM 0 points [-]

Do you expect me to give you the complete solution to AI right here, right now? What are you even trying to say? You seem to be arguing that FAI is impossible. How can you possibly know that? Just because you can't immediately see a solution to the problem doesn't mean one doesn't exist.

I think an AI will easily be able to learn human values from observations. It will be able to build a model of humans, and predict what we will do and say. It certainly won't base all its understanding on a stupid movie quote. The AI will know what you want.

Comment author: Lumifer 14 October 2016 02:26:41PM 1 point [-]

What are you even trying to say?

I'm saying that if you can't recognize Friendliness (and I don't think you can), trying to build a FAI is pointless as you will not be able to answer "Is it Friendly?" even when looking at it.

I think an AI will easily be able to learn human values from observations.

So if you can't build a supervised model, you think going to unsupervised learning will solve your problems? The quote I gave you is part of human values -- humans do value triumph over their enemies. Evolution taught humans to eliminate competition, it taught them to be aggressive and greedy -- all human values. Why do you think your values will be preferred by the AI to the values of, say, ISIS or third-world Maoist guerrillas? They're human, too.