Whether you call it new knowledge or not is not relevant. Nor are new types of knowledge what is generally relevant (aside from the not-at-all-small issue that "type" isn't a well-defined notion in this context).
Writers, for example, can predict in advance that they will finish writing a book (even if they haven't worked out 100% of the plot yet).
Actually, writers sometimes start a book and find partway through that they don't want to finish it, or the book might even change genres in the process of writing. If you prefer a concrete example: I can predict that Brandon Sanderson's next Mistborn book will be awesome. I can predict that it will sell well and get good reviews. I can even predict a fair number of plot points just based on what Sanderson has done before and various comments he has made. But, at the same time, I can't write a novel nearly as well as he does, and if he and I were to have a novel-writing contest, he would beat me. I don't know how he would beat me, but he would.
Similarly, a sufficiently smart AI presents the same problem. If it decides that human existence is non-optimal for carrying out its goals, it will try to find ways to eliminate us. It doesn't matter if all the ways it comes up with lie in a fairly limited set of domains. If it is really good at chemistry, it might make nasty nanotech to reduce organic life to its constituent atoms. If it is really good at math, it might break all our cryptography, hack into our missile systems, and trigger a nuclear war (this one is obvious enough that there are multiple movies about it). If it is really good at social psychology, it might manipulate us over a few years into simply handing control over to it.
Just as I don't know how Kasparov would beat me at chess but know that he would, I don't know how a sufficiently intelligent AI will beat me, but I know it will. There are open questions about how intelligent it needs to be, and whether an AGI is likely to undergo fast, substantial, recursive self-improvement to reach that level is a matter of much discussion on LW (Eliezer considers it likely; some others, myself included, consider it unlikely), but the basic point about sufficient intelligence seems clear.
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists. Their previous speaker was Eliezer Yudkowsky. Audio version and past talks here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks