Basically, you can't predict the moves of a chess AI, because otherwise you'd be at least as good a chess player yourself, and yet you know it's going to win the game.
As someone who has beaten chess programs, I have noticed that this sentence as written is false. Would you care to refine it so that it's no longer straightforwardly false?
-5 points seems harsh for a statement that is technically correct.
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists; their previous speaker was Eliezer Yudkowsky. The audio version and past talks are here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks