It doesn't change the fact that you imagined a convenient world where I'm bad at chess in order to dispute the specific details of an argument I made, one whose substantive point could still be made using other details.
On an absolute scale, you are bad at chess.
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists; their previous speaker was Eliezer Yudkowsky. An audio version and past talks are available here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks