Note that sending probes out any distance may increase computational requirements: approximations are no longer sufficient when an agent's eye comes up very close to them. Unless we can expect the superintelligence to detect these signs from a great distance (from its home star), it might not be able to afford to see them.
Also worth considering: probes that close their eyes to everything but life-supporting planets, so that they won't notice the low grain of the approximations, and approximations can continue to be used in their presence.
For a moment, let's assume there is some alien intelligent life in our galaxy which is older than us, and that it has succeeded in creating a superintelligent, self-modifying AI.
Then what set of values and/or goals would it be plausible for it to have, given our current observations (i.e., that there is no evidence of its existence)?
Some examples:
It values non-interference with nature (some kind of hippie AI).
It values camouflage/stealth for its own defense/security purposes.
It just cares about exterminating its creators and nothing else.
Other thoughts?