Recently, I was reading some arguments about the Fermi paradox, aliens, and so on; there was also an opinion along the lines of "humans are monsters and any sane civilization avoids them, hence the Galactic Zoo". As implausible as it is, I've found one more or less sane scenario where it might be true.

Assume that intelligence doesn't always imply consciousness, and assume that evolutionary processes are more likely to yield intelligent but unconscious life forms than intelligent and conscious ones. For example, this would be the case if consciousness is resource-consuming and otherwise almost useless (as in Blindsight).

Now imagine that all the alien species evolved without consciousness. Their moral system, being an important coordination tool, takes that into account: it relies on a trait they actually have -- intelligence -- rather than consciousness. For example, they consider destroying anything capable of performing complex computations immoral.

The human moral system would then be completely blind to them. Killing such an alien would be no more immoral than, say, recycling a computer. So, to these aliens, the human race would indeed be monstrous.

The aliens consider the extermination of an entire civilization immoral, since that would mean destroying a few billion devices capable of performing sufficiently complex computations. So they decide to use their advanced technology to render their civilizations invisible to human scientists.


The main problem with this is that it says that human beings are extremely unlike all nearby alien races. But if you are willing to admit that humanity is that unique, you might as well say that intelligence only evolved on Earth, which is a much simpler and more likely hypothesis.

Assume that intelligence doesn't always imply consciousness

Taboo 'consciousness', and attempt to make that assumption still work.

So they decide to use their advanced technology to render their civilizations invisible for human scientists.

The feasibility of this idea is inversely proportional to the resource expenditure required to remain invisible. It is more likely that, if aliens exist, they are naturally mostly invisible as a result of computational optimization into compact, cold, dark arcilects. If stealth/invisibility plays a role, they are more likely to be hiding from other powerful civs than from us.

Taboo 'consciousness', and attempt to make that assumption still work.

Taboo 'intelligence' as well.

There are concepts which are hard to explain (given our current understanding of them). Consciousness is one of them. Qualia. Subjective experience. The thing which separates p-zombies from non-p-zombies.

If you don't already understand what I mean, there is little chance that I would be able to explain it.

As for the assumption, I agree that it is implausible, yet possible. Do you consider your computer conscious?

And no doubt the scenarios you mention are more plausible.

Do you consider your computer conscious?

Are (modern) computers intelligent but not conscious, by your lights?

If so, then there's a very important thing you might provide some insight into: what sort of observations humans could make of an alien race that would lead us to think they're intelligent but not conscious.

Modern computers can be programmed to do almost any task a human can do, including very high-level ones, so sort of yes, they are (and maybe sort of conscious, if you are willing to stretch the concept that far).

Some time ago, we could program computers to execute a specific algorithm which solves a specific problem; now we have machine learning and don't have to provide an algorithm for every task, but we still use different machine learning algorithms for different areas/meta-tasks (computer vision, classification, time-series prediction, etc.). When we build systems that are capable of solving problems in all these areas simultaneously -- and of combining the results to reach some goal -- I would call such systems truly intelligent.

Having said that, I don't think I need an insight or explanation here -- because, well, I mostly agree with you and jacob_cannel -- it's likely that intelligence and unconsciousness are logically incompatible. Yet as long as the problem of consciousness is not fully resolved, I can't be certain, and therefore I assign a non-zero probability to the conjunction being possible.

"can be programmed to" is not the same thing as intelligence. It requires external intelligence to program it. Using the same pattern, I could say that atoms are intelligent (and maybe sort-of conscious), because for almost any human task, they can be rebuilt into something that does it.


If you don't know what you're talking about when you say "consciousness", your premise becomes incoherent.

I don't know whether the statement (intelligence => consciousness) is true, so I assign a non-zero probability to it being false.

Suppose I said "Assume NP = P", or the contrary "Assume NP != P". One of those statements is logically false (the same way 1 = 2 is false). Still, while you can dismiss an argument which starts "Assume 1 = 2", you probably shouldn't do the same with those NP ones, even if one of them is, strictly speaking, logical nonsense.

Also a few words about concepts. You can explain a concept using other concepts, and then explain the concepts you have used to explain the first one, and so on, but the chain should end somewhere, right? So here it ends on consciousness.

1) I know that there is a phenomenon (that I call 'consciousness'), because I observe it directly.

2) I don't know a decent theory to explain what it really is and what properties it has.

3) To my knowledge, nobody actually has one. That is why the problem of consciousness is labeled 'hard'.

Too many people, I've noticed, just pick the theory of consciousness that they consider the best and then become overconfident in it. Not a good idea, given that there is so little data.

So if the most plausible theory says (intelligence => consciousness) is true, you shouldn't immediately dismiss everything that is based on the opposite. The Bayesian way is to integrate over all possible theories, weighted by their probabilities.
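To sketch what that means (my notation, not anything from the thread): write the claim "intelligence without consciousness is possible" as H and the candidate theories of consciousness as T_1, ..., T_n; then

```latex
P(H) \;=\; \sum_{i=1}^{n} P(H \mid T_i)\, P(T_i)
```

As long as some theory with non-negligible weight allows H, P(H) stays above zero, even if the single most plausible theory rules it out.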


Ok, fair enough.

So, what you're really saying is that the aliens lack some indefinable trait that the humans consider "moral", and the humans lack a definable trait that the aliens consider moral.

This is a common sci-fi scenario, explored elsewhere on the site. See e.g. Three Worlds Collide.

Your specific scenario seems to me highly improbable: humans are considered immoral, yet somehow, miraculously, they have created something that is considered moral, and the response is to hide from the inferior, immoral civilization.

Are your aliens p-zombies?

I thought the defining feature of being a p-zombie was acting as if they had consciousness while not "actually" having it, whereas these aliens act as though they did not have consciousness.

(I think a generic and global intelligence-valuation ethos is very unlikely to arise, and so I think there are other reasons to dislike this formulation of the Galactic Zoo.)

I thought the defining feature of being a p-zombie was acting as if they had consciousness while not "actually" having it

It's more than just a matter of behavior. P-zombies are supposed to be physically indistinguishable from human beings in every respect while still lacking consciousness.

Why do you think it is unlikely? I think any simple criterion which separates the aliens from their environment would suffice.

Personally, I think that the scenario is implausible for a different reason: the human moral system would easily adapt to such aliens. People sometimes personify things that aren't remotely sentient, let alone aliens who would actually act like sentient/conscious beings.

The other reason is that I consider sentience without consciousness relatively implausible.

Why do you think it is unlikely?

Basically, the hierarchical control model of intelligence, which sees 'intelligence' as trying to maintain some perception at some reference level by actuating the environment. (Longer explanation here.) If you have multiple control systems, and they have different reference levels, then they will get into 'conflict', much like a tug of war.

That is, simple intelligence looks like it leads to rivalry rather than cooperation by default, and so valuing intelligence rather than alignment seems weird; there's not a clear path that leads from nothing to there.
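As a toy illustration of that tug of war (entirely my own sketch, not from the linked explanation; all names and numbers are made up): two proportional controllers share one environment variable but hold incompatible reference levels, so their corrections cancel out while both keep expending effort.

```python
# Toy sketch: two control systems act on the same shared variable but
# hold different reference levels, so their corrections fight each other.

def control_step(perception, reference, gain=0.5):
    """Push the perceived value toward this controller's own reference."""
    return gain * (reference - perception)

x = 0.0                      # shared environment variable both systems actuate
ref_a, ref_b = 10.0, -10.0   # incompatible reference levels

for t in range(10):
    action_a = control_step(x, ref_a)
    action_b = control_step(x, ref_b)
    x += action_a + action_b  # both actions land on the same environment
    print(f"t={t:2d}  x={x:6.2f}  A pushes {action_a:+.2f}, B pushes {action_b:+.2f}")

# x settles at the midpoint (0 here) while A keeps pushing +5 and B keeps
# pushing -5 on every step: steady, opposing, wasted effort -- a tug of war.
```

The shared variable immediately settles between the two references, with both controllers pushing hard in opposite directions forever: rivalry by default, with no cooperation built in.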

Makes sense.

Anyway, any trait which isn't consciousness (and obviously it wouldn't be consciousness) would suffice, provided there is some reason to hide from Earth rather than destroy it.

"humans are monsters and any sane civilization avoids them, that's why Galactic Zoo"

Isn't the Galactic Zoo hypothesis based on wanting to maintain the humans in their primitive habitat, and not interfere with the "natural" development?

It's not that we're horrible monsters that need to be avoided. The Earth is just a nature preserve.

It is; and it is actually a more plausible scenario. Aliens may well want that, just as humans do both in fiction and in reality -- see, for example, the Prime Directive in Star Trek and the real-life practice of sterilizing rovers before sending them to other planets.

I, however, investigated that particular flavor of the Zoo hypothesis in the post.


There's this extremely intelligent alien species that has evolved a distinct sense of morality very similar to our own, just more rigid. So rigid that they are incapable of even comprehending the way we might think. And we view killing them just as we view recycling computers.

What happens next?