Of course, I never wrote the “important” story, the sequel about the first amplified human. Once I tried something similar. John Campbell’s letter of rejection began: “Sorry—you can’t write this story. Neither can anyone else.”...
“Bookworm, Run!” and its lesson were important to me. Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss. It’s a problem writers face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity—a place where extrapolation breaks down and new models must be applied—and the world will pass beyond our understanding. -- Vernor Vinge, True Names and other Dangers, p. 47.
Vingean unpredictability is an aspect of the way we think about and predict a consequentialist intelligence that we believe is smarter than us in a domain: we think we can't predict exactly what such an agent will do, because if we could, we would be that smart ourselves. For example, if you could predict exactly what action Deep Blue would take on a chessboard, you could play as well as Deep Blue by just making whatever chess move you predicted Deep Blue would make in your shoes. Thus, in the course of creating a superhuman chess player, Deep Blue's programmers necessarily sacrificed their ability to predict Deep Blue's exact moves in advance using their own intelligence. But this doesn't mean Deep Blue's programmers were confused about the abstract criterion on which Deep Blue chose actions, or that they couldn't predict in advance that Deep Blue would try to win rather than lose chess games.
Work_in_progress_meta_tag & Stub_meta_tag

- We can't predict the agent's exact move, but we can still predict the consequences of its moves (e.g., that the game ends in a win for the agent).
- Belief that the agent has a goal, and Vingean unpredictability itself, can both be obtained by abstract reasoning about the agent.
- Consequences for Vingean reflection: an agent must establish trust in its offspring via abstract reasoning, since it cannot precompute the offspring's exact actions.
- Suppose we estimate EU[A] = 4 and EU[B] = 7, and then see the agent do A. We conclude with strong confidence that EU[A] > EU[B], and suspect weakly that EU[A] > 7, that EU[B] < 4, or that 4 < EU[B] < EU[A] < 7. Even more weakly, we could have EU[B] < 4 < EU[A] or EU[B] < 7 < EU[A]. (This also goes under instrumental efficiency.)
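The update in the note above can be sketched numerically. As an illustrative assumption not in the original, take a uniform prior over a finite grid of candidate true expected utilities and condition on the agent (modeled as an expected-utility maximizer) choosing A; the grid bounds and the predicate names are hypothetical.

```python
import itertools

# Illustrative sketch: we estimated EU[A] = 4 and EU[B] = 7, then saw the
# agent choose A. Assuming the agent maximizes expected utility, the true
# values must satisfy EU[A] > EU[B]. Under an assumed uniform prior over a
# grid of candidate true EUs, measure the weight on each kind of
# estimation error.
grid = [x / 2 for x in range(23)]  # candidate true EUs: 0.0, 0.5, ..., 11.0

# All (EU[A], EU[B]) pairs consistent with the agent choosing A.
consistent = [(a, b) for a, b in itertools.product(grid, grid) if a > b]

def weight(pred):
    """Fraction of the consistent hypotheses satisfying the predicate."""
    return sum(pred(a, b) for a, b in consistent) / len(consistent)

print(weight(lambda a, b: a > 7))          # our EU[A] = 4 was an underestimate
print(weight(lambda a, b: b < 4))          # our EU[B] = 7 was an overestimate
print(weight(lambda a, b: 4 < b < a < 7))  # both true values lie between 4 and 7
```

Note that the narrow conjunction 4 < EU[B] < EU[A] < 7 gets much less weight than either single-error hypothesis, matching the "even more weakly" ordering in the note.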
- We can also infer compact utilities by observation and then deduce intelligence; likewise, we can observe convergent goals and thereby deduce intelligence.
- Vingean unpredictability in a domain depends somewhat on the domain being sufficiently rich. In a simple domain like tic-tac-toe, a human can play optimally and therefore predict an optimal player's exact moves.