Whether you call it new knowledge or not is not relevant.
Considering that Deutsch was talking about new knowledge, and I use the same terminology as him, it is relevant.
Actually, writers sometimes start a book and find partway through that they don't want to finish it.
I know that? And if I played Kasparov I might win. It's not a 100% guaranteed prediction.
@Sanderson: you understand pretty well what kind of thing he's doing. Writers are a well-known phenomenon. The less you know about what processes he uses to write, what tradition he's following -- in general, what's going on -- the less you can make any kind of useful prediction.
If it decides that human existence is non-optimal for it to carry out its goals
why would it?
Deutsch doesn't think AGIs will do fast recursive self-improvement. They can't, because the first ones will already be universal, so there's not much left to improve besides their knowledge (not their design, apart from making it faster). Improving knowledge with intelligence is the same process for AGIs and humans; it won't magically get super fast.
Considering that Deutsch was talking about new knowledge, and I use the same terminology as he does, it is relevant.
Then define the term.
I know that? And if I played Kasparov I might win. It's not a 100% guaranteed prediction.
So what? How is that at all relevant? It isn't 100% guaranteed that if I jump off a tall building I will die, but that doesn't mean I'm going to try it. You can't use the fact that something isn't certain as an argument for ignoring the issue wholesale.
...Deutsch doesn't think AGIs will do fast recursive self-improvement...
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists. Their previous speaker was Eliezer Yudkowsky. Audio version and past talks here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks