I disagree. It is a good example of a case where what was intended is obvious, or close to obvious. The remark simply damaged the signal-to-noise ratio while avoiding grappling with the point.
True - but I don't think it would ordinarily have been down-voted that hard for that sin.
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists; their previous speaker was Eliezer Yudkowsky. An audio version and past talks are available here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks