Recently, I was reading some arguments about the Fermi paradox, aliens, and so on; one opinion was along the lines of "humans are monsters and any sane civilization avoids them, hence the Galactic Zoo". Implausible as it is, I've found one more or less sane scenario where it might be true.
Assume that intelligence doesn't always imply consciousness, and that evolutionary processes are more likely to yield intelligent but unconscious life forms than intelligent and conscious ones -- for example, if consciousness is resource-consuming and otherwise almost useless (as in Blindsight).
Now imagine that all alien species evolved without consciousness. Since a moral system is an important coordination tool, theirs is built around a trait they actually have -- intelligence rather than consciousness. For example, they consider it immoral to destroy anything capable of performing complex computations.
The human moral system would then be completely blind to them. Killing such an alien would be no more immoral than, say, recycling a computer. So, to these aliens, the human race would indeed seem monstrous.
The aliens, however, consider exterminating an entire civilization immoral, since that would mean destroying a few billion devices capable of complex enough computations. So instead they use their advanced technology to render their own civilizations invisible to human scientists.
Modern computers can be programmed to do almost any task a human can do, including very high-level ones, so in that sense, sort of yes, they are intelligent (and maybe sort of conscious, if you're willing to stretch the concept that far).
Some time ago, we had to program computers with an explicit algorithm that solves a given problem; now we have machine learning and don't have to provide an algorithm for every task, though we still use different learning algorithms for different areas/meta-tasks (computer vision, classification, time series prediction, etc.). When we build systems capable of solving problems in all these areas simultaneously -- and of combining the results to reach some goal -- I would call such systems truly intelligent.
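To make the contrast concrete, here's a minimal sketch of that shift -- an explicit hand-written rule versus a rule induced from labeled examples. It assumes scikit-learn, and the toy task (separating two point clouds) plus all names in it are my own illustration, not anything from the discussion above:

```python
# Contrast: a hand-coded decision rule vs. one learned from data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Two labeled clusters of 2-D points (a stand-in for any classification task).
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# The "old" way: an explicit rule a programmer wrote for this specific task.
def hand_coded_rule(point):
    return int(point[0] + point[1] > 4)  # threshold chosen by the programmer

# The "new" way: a generic learning algorithm induces a rule from examples.
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(hand_coded_rule([5, 5]))     # 1, by the programmer's rule
print(model.predict([[5, 5]])[0])  # 1, by the learned rule
```

The same learner works unchanged on other tabular tasks, which is the point: the algorithm is generic, only the examples are task-specific -- yet vision or time-series prediction would still call for different learners.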
Having said that, I don't think I need an insight or explanation here, because I mostly agree with you and jacob_cannel: it's likely that intelligence and unconsciousness are logically incompatible. Yet as long as the problem of consciousness is not fully resolved, I can't be certain, so I assign a non-zero probability to the conjunction being possible.
"can be programmed to" is not the same thing as intelligence. It requires external intelligence to program it. Using the same pattern, I could say that atoms are intelligent (and maybe sort-of conscious), because for almost any human task, they can be rebuilt into something that does it.