skeptical_lurker comments on AlphaGo versus Lee Sedol - Less Wrong

17 Post author: gjm 09 March 2016 12:22PM




Comment author: skeptical_lurker 11 March 2016 05:56:23AM 2 points

I may be missing something, but why does this matter? An AI has components, as does the human mind. When reasoning about friendliness, what matters is the goal component. Can't the perception/probability estimate module just be treated as an interchangeable black box, regardless of whether it is a DNN, or an MCTS Solomonoff induction approximation, or Bayes nets or anything else?

Comment author: Torchlight_Crimson 11 March 2016 07:35:15AM 3 points

Can't the perception/probability estimate module just be treated as an interchangeable black box, regardless of whether it is a DNN, or an MCTS Solomonoff induction approximation, or Bayes nets or anything else?

Not necessarily. If the goal component wants to respect human preferences, it will be vital that the perception component can correctly identify what constitutes a "human".

Comment author: skeptical_lurker 11 March 2016 08:51:06AM 0 points

This doesn't seem like a major problem, or one which is exclusive to friendliness - computers can already recognise pictures of humans, and any AGI is going to have to be able to identify and categorise things anyway.

Comment author: bogus 11 March 2016 06:02:08PM 0 points

computers can already recognise pictures of humans

Well, not quite.