Chess playing programs don't create new knowledge.
So the argument is wrong as stated, without my needing to fix it (human chess players do create new knowledge).
Creating small amounts of new knowledge in very limited areas is predictable. For example, writers can predict in advance that they will finish writing a book (even if they haven't worked out 100% of the plot yet).
This doesn't have much to do with large-scale prediction that depends on new types of knowledge, does it?
Whether you call it new knowledge or not is not relevant. Nor is "new types of knowledge" what is generally relevant (aside from the not-at-all-small issue that "type" isn't a well-defined notion in this context).
Like writers can predict they will finish writing a book (even if they haven't worked out 100% of the plot yet) in advance.
Actually, writers sometimes start a book and find partway through that they don't want to finish it, or the book might even change genres in the process of being written. If you prefer an example, I can predict that Brand...
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists. Their previous speaker was Eliezer Yudkowsky. Audio version and past talks here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks