The event horizon thesis states that, once superintelligence arises, the resulting future will be alien and unpredictable in a way qualitatively different from the aftermath of any previous technological advance. In this view, we cannot see beyond the singularity, just as we cannot see beyond a black hole's event horizon.
Eliezer Yudkowsky identifies this idea as one of the three singularity schools and attributes it to Vernor Vinge, who describes the singularity as "a point where our old models must be discarded and a new reality rules".
An argument in favor of such unpredictability goes as follows. Suppose you could always predict what a superintelligence would do. Then you could generate its plans and decisions yourself, which would make you, in effect, a superintelligence; but you are not a superintelligence.
However, this argument does not rule out all predictions. In particular, if we can predict what a superintelligence's goals will be, we can predict that it will probably achieve those goals, even if we don't know by what method. The predictions involved in Friendly AI tend to be of this nature.