I don't know whether the statement (intelligence => consciousness) is true, so I assign a non-zero probability to it being false.
Suppose I said "Assume NP = P", or the contrary, "Assume NP != P". Exactly one of those statements is logically false (in the same way 1 = 2 is false). Still, while you can dismiss an argument that starts "Assume 1 = 2", you probably shouldn't do the same with the NP ones, even though one of them is, strictly speaking, logical nonsense.
Also, a few words about concepts. You can explain a concept using other concepts, then explain the concepts you used to explain the first one, and so on, but the chain has to end somewhere, right? For me, it ends at consciousness.
1) I know that there is a phenomenon (that I call 'consciousness'), because I observe it directly.
2) I don't know of a decent theory that explains what it really is and what properties it has.
3) To my knowledge, nobody does. That is why the problem of consciousness is labeled 'hard'.
Too many people, I've noticed, just pick the theory of consciousness they consider the best and then become overconfident in it. Not a good idea, given how little data there is.
So even if the most plausible theory says (intelligence => consciousness) is true, you shouldn't immediately dismiss everything that is based on the opposite. The Bayesian way is to integrate over all possible theories, weighted by their probabilities.
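To make "integrate over theories" concrete, here is a minimal sketch in Python. All the theories and numbers are made up for illustration; the point is only that the final credence is a weighted average over competing theories, not the verdict of whichever single theory you like best.

```python
# Hypothetical credences in three rival theories of consciousness,
# and what each theory says about P(an intelligent alien is conscious).
# Every number here is an illustrative assumption, not a real estimate.
theories = {
    "intelligence implies consciousness":   (0.5, 1.0),  # (credence, P(conscious | theory))
    "consciousness needs specific biology": (0.3, 0.2),
    "consciousness is rare and costly":     (0.2, 0.1),
}

# Bayesian model averaging: weight each theory's answer by your credence in it.
p_conscious = sum(credence * p for credence, p in theories.values())
print(p_conscious)  # 0.5*1.0 + 0.3*0.2 + 0.2*0.1 = 0.58
```

Note that even a strong (0.5) credence in "intelligence implies consciousness" leaves the averaged answer well below certainty, which is exactly why the opposite assumption can't just be dismissed.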
Ok, fair enough.
So, what you're really saying is that the aliens lack some hard-to-define trait that humans consider morally relevant, and the humans disregard a well-defined trait that the aliens consider morally relevant.
This is a common sci-fi scenario, explored elsewhere on the site. See e.g. Three Worlds Collide.
Your specific scenario seems highly improbable to me: humans are considered immoral, yet somehow miraculously they created something that is considered moral, and the aliens' response is to hide from the inferior, immoral civilization.
Recently, I was reading some arguments about the Fermi paradox and aliens and so on; among them was an opinion along the lines of "humans are monsters, and any sane civilization avoids them -- hence the Galactic Zoo". As implausible as that is, I've found one more or less sane scenario where it might be true.
Assume that intelligence doesn't always imply consciousness, and that evolutionary processes are more likely to yield intelligent but unconscious life forms than intelligent and conscious ones -- for example, if consciousness is resource-consuming and otherwise almost useless (as in Blindsight).
Now imagine that all the alien species evolved without consciousness. Since morality is an important coordination tool, their moral systems rely on a trait they actually have -- intelligence rather than consciousness. For example, they consider it immoral to destroy anything capable of performing complex computations.
The human moral system, in turn, would be completely blind to them. Killing such an alien would be no more immoral than, say, recycling a computer. So, to these aliens, the human race would indeed look monstrous.
The aliens consider the extermination of an entire civilization immoral, since that would mean destroying billions of devices capable of performing complex enough computations. So instead they use their advanced technology to render their own civilizations invisible to human scientists.