Gurkenglas comments on Risks from Approximate Value Learning - Less Wrong Discussion

1 Post author: capybaralet 27 August 2016 07:34PM

Comments (10)

Comment author: turchin 28 August 2016 10:34:44AM 1 point

I only said that it would reduce the chance of stupid decisions resulting from not understanding basic human words and values. It would not reduce the chance of a deliberately malicious AI.

There are (at least) two different types of UFAI: real UFAI and failed FAI. A failed FAI wanted to be good but failed; the best example is a smile maximizer, which would cover the whole Solar System with smiles. (A paperclip maximizer is also a form of failed FAI, since its initial goal was positive: produce many paperclips.)

So it is not a full recipe for real FAI, just one approach to value learning.

Comment author: Gurkenglas 03 September 2016 10:40:05AM 0 points

You confuse the stupidity of whoever set the goals with the stupidity of the AI afterward. Any AGI is going to understand what we actually want; if the goal it was given wasn't already smart enough, it just won't care.