Can't the perception/probability-estimate module just be treated as an interchangeable black box, regardless of whether it is a DNN, an MCTS-based Solomonoff induction approximation, a Bayes net, or anything else?
Not necessarily. If the goal component is meant to respect human preferences, it will be vital that the perception component correctly identifies what constitutes a "human".
This doesn't seem like a major problem, or one that is exclusive to friendliness - computers can already recognise pictures of humans, and any AGI is going to have to be able to identify and categorise things anyway.
There have been a couple of brief discussions of this in the Open Thread, but it seems likely to generate more discussion, so here's a dedicated place for it.
The original paper in Nature about AlphaGo.
The Google Asia Pacific blog, where results will be posted. DeepMind's YouTube channel, where the games are being live-streamed.
Discussion on Hacker News after AlphaGo's win of the first game.