Yes, but it's the worst manifestation of this kind of behavior: if someone tries to generate conflict by nitpicking when they could easily have interpreted the argument charitably enough to make the nitpick unnecessary, can they be trusted to take arguments as seriously as they deserve?
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists; their previous speaker was Eliezer Yudkowsky. An audio version, along with past talks, is available here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks