It's not correct to assume a statement I make is wrong based on your prior about how much I know about chess. I used my own knowledge of how much I know about chess when making those statements. You should respect that knowledge instead of ignoring it and assuming I'm making basic LCPW mistakes (by the way, Popper made the same point, in a different way; of course I know it). Or at least question my statement instead of assuming I'm wrong about how much I know about chess. You're basically assuming I'm an idiot who makes sloppy statements. If you really think that, you shouldn't even be talking to me.
By the way, I've noticed you didn't acknowledge your other mistakes or apologize. Is that because you refuse to change your mind, or what?
You should respect that knowledge instead of ignoring it and assuming I'm making basic LCPW mistakes
It is easy to observe in this thread that you are making LCPW mistakes. You haven't solved the game of chess; therefore, the Least Convenient Possible World contains an AI powerful enough to explore the entire game tree of chess, solve the game, and beat you every time.
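For concreteness, here is a minimal sketch of what "explore the entire game tree and solve the game" means, using the toy game of Nim rather than chess; the `solve` function and the 1-to-3-stone rule are illustrative assumptions, not anything from this thread. Chess's full tree is far too large to enumerate this way on any real machine, which is exactly why the LCPW has to posit an extraordinarily powerful AI.

```python
# A minimal sketch of exhaustive game-tree solving, assuming the toy game of
# Nim (an illustrative choice, not from this thread): players alternately
# remove 1-3 stones, and whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def solve(stones: int) -> int:
    """Value of the position for the player to move: +1 forced win, -1 forced loss."""
    if stones == 0:
        return -1  # the previous player took the last stone, so the mover has already lost
    # Explore every legal move; a position is won iff some move puts the
    # opponent into a losing position (standard minimax / backward induction).
    return max(-solve(stones - take) for take in (1, 2, 3) if take <= stones)

# Positions whose stone count is a multiple of 4 are forced losses for the mover.
print([solve(n) for n in range(1, 9)])  # -> [1, 1, 1, -1, 1, 1, 1, -1]
```

The same backward induction would, in principle, assign a value to the initial chess position; the hypothetical AI in the LCPW is simply one powerful enough to actually run it.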
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists; their previous speaker was Eliezer Yudkowsky. An audio version and past talks are available here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks