Houshalter comments on Open thread, Oct. 10 - Oct. 16, 2016 - Less Wrong Discussion

3 Post author: MrMind 10 October 2016 07:00AM

Comment author: Lumifer 13 October 2016 02:22:08PM *  0 points

If I train a neural network to recognize dogs, I have no way of knowing if it learned correctly.

Of course you do. You test it. You show it a lot of images (that it hasn't seen before) of dogs and not-dogs and check how good it is at differentiating them.
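The held-out evaluation Lumifer describes can be sketched as follows. This is a minimal illustration, not a real vision pipeline: the "images" are toy feature values and the classifier is a hypothetical stand-in for a trained neural network.

```python
# Held-out evaluation sketch: labels are 1 for "dog", 0 for "not-dog".
# The classifier is any function from image -> predicted label.

def evaluate(classifier, test_images, test_labels):
    """Accuracy of `classifier` on examples it has never seen."""
    correct = sum(
        1 for image, label in zip(test_images, test_labels)
        if classifier(image) == label
    )
    return correct / len(test_labels)

# Toy held-out set and a toy threshold "classifier" (hypothetical stand-ins).
held_out_images = [0.9, 0.2, 0.8, 0.1]
held_out_labels = [1, 0, 1, 0]
toy_classifier = lambda x: 1 if x > 0.5 else 0

print(evaluate(toy_classifier, held_out_images, held_out_labels))  # → 1.0
```

The point of the disagreement below is that "human values" has no analogous held-out set of labeled examples to score against.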

How would that process work for an AI and human values?

the principle of letting AIs learn models from real world data

Right, human values: “A man's greatest pleasure is to defeat his enemies, to drive them before him, to take from them that which they possessed, to see those whom they cherished in tears, to ride their horses, and to hold their wives and daughters in his arms.”

Comment author: Houshalter 14 October 2016 06:23:52AM 0 points

Do you expect me to give you the complete solution to AI right here, right now? What are you even trying to say? You seem to be arguing that FAI is impossible. How can you possibly know that? Just because you can't immediately see a solution to the problem doesn't mean one doesn't exist.

I think an AI will easily be able to learn human values from observations. It will be able to build a model of humans, and predict what we will do and say. It certainly won't base all its understanding on a stupid movie quote. The AI will know what you want.

Comment author: Lumifer 14 October 2016 02:26:41PM 0 points

What are you even trying to say?

I'm saying that if you can't recognize Friendliness (and I don't think you can), trying to build a FAI is pointless as you will not be able to answer "Is it Friendly?" even when looking at it.

I think an AI will easily be able to learn human values from observations.

So if you can't build a supervised model, you think going to unsupervised learning will solve your problems? The quote I gave you is part of human values -- humans do value triumph over their enemies. Evolution taught humans to eliminate competition, it taught them to be aggressive and greedy -- all human values. Why do you think your values will be preferred by the AI to values of, say, ISIS or third-world Maoist guerrillas? They're human, too.