Then define the term.
See the essay on knowledge: http://fallibleideas.com/
Or read Deutsch's books.
It isn't 100% guaranteed that if I jump off a tall building I will then die.
Indeed. You're the one who told me that writers sometimes don't finish books... They aren't 100% guaranteed to. I know that. Why did you say that?
Imagine, for example, that our AI finds a proof that P=NP, that this proof gives an O(n^2) algorithm for solving your favorite NP-complete problem, and that the constant in the O is really small.
Umm. Imagine a human does the same thing. What's your point? My/Deutsch's point is that AGIs have no special advantage over non-artificial intelligences at finding a proof like that in the first place.
We're pathetic sacks of meat that can't even multiply four- or five-digit numbers in our heads.
That's not even close to true. First of all, I could do that if I trained a bit. Many people could. Second, many people can memorize long sequences of the digits of pi with some training. And many other things. Ever read about Renshaw and how he trained people to see faster and more accurately?
The point about jumping off a building was due to a miscommunication with you. See my remark above; I then misinterpreted your reply. The illusion of transparency is annoying. The section concerning that is now very confused and irrelevant. The relevant point I was trying to make regarding the writer is that even when knowledge areas are highly restricted, making predictions about what will happen is really difficult.
And yes, I've read your essays, and nothing there is precise enough to be helpful. Maybe taboo "knowledge" and make your point without it?
...
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists. Their previous speaker was Eliezer Yudkowsky. Audio version and past talks here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks