For a whimsical example, if humans built a (literal) staple maximizer, this would pose a very serious threat to a (literal) paperclip maximizer.
But why would humans ever want to build a staple maximizer? Let's not forget, staples:
Nobody said humans would build one deliberately. Some goober at the SIAI puts a 1 where a 0 should be and BAM! Next thing you know, you're up to your eyebrows in staples.
I'm not sure making voting public would improve voting quality (i.e. the correlation between post quality and points earned), because it might give rise to more reluctance to downvote, and more hostility between members who downvoted each other's posts.
I'd expect that any AGI (originating and interested in our universe) would initiate an exploration/colonization wave in all directions regardless of whether it has information that a given place has intelligent life, so broadcasting that we're here doesn't make it worse. Expecting superintelligent AI aliens that require a broadcast to notice us is like expecting poorly hidden aliens on flying saucers, the same mistake made on a different level. Also, light travels only so quickly, so our signals won't reach very far before we've made an AGI of our own (one way or another), and thus had a shot at ensuring that our values obtain significant control.
Then it would've been trivial to leave at least one nanomachine and a radio detector in every solar system, which is all it takes to wipe out any incipient civilizations shortly after their first radio broadcast.
But there seems to me to be no reason to believe that it's more likely that our signals will reach friendly extraterrestrials than it is that our signals will reach unfriendly extraterrestrials.
In fact, as Eliezer never tires of pointing out, the space of unfriendliness is much larger than the space of friendliness.
But as Eliezer has pointed out in Humans In Funny Suits, we should be wary of irrationally anthropomorphizing aliens. Even if there's a tendency for intelligent life on other planets to be sort of like humans, such intelligent life may (whether intentionally or inadvertently) create a really powerful optimization process.
The creation of a powerful optimization process is a distraction here - as Eliezer points out in the article you link, and in others like the "Three Worlds Collide" story, aliens are quite unlikely to share much of our value ...
There is no reason for any alien civilization ever to raid Earth for its resources before first raiding all the other stuff that is freely available and unclaimed in open space. Wiping us out to avoid troublemakers, on the other hand, is reasonable. I recently read Heinlein's 'The Star Beast', where the United Federation Something regularly destroys planets for being dangerous.
My feeling is that if human civilization advances to the point where we can explore outer space in earnest, it will be because humans have become much more cooperative and pluralistic than presently existing humans.
I agree with the main point of your article, but I think this is an unjustifiable (though extremely common) belief. There are plenty of ways for human civilizations to survive in stable, advanced forms besides the ways that have been popular in the West for the last couple of centuries. For instance:
Such an entity would have special interest in Earth, not because of special interest in acquiring its resources, but because Earth has intelligent lifeforms which may eventually thwart its ends.
Well put. Certainly if humans achieve a positive singularity we'll be very interested in containing other intelligences.
Re: "I was recently complaining to a friend about Stephen Hawking's remark as an example of a popular scientist misleading the public."
I don't really see how these comments are misleading.
Isn't the problem with friendly extraterrestrials analogous to Friendly AI? (In that they're much less likely than unFriendly ones.)
The aliens can have "good" intentions but probably won't share our values, making the end result extremely undesirable (Three Worlds Collide).
Another option is for the aliens to be willing to implement something like CEV toward us. I'm not sure how likely that is. Would we implement CEV for the Babyeaters?
Any society capable of communicating is presumably the product of a significant amount of evolution. There will always (?) be a doubt whether any simulation will be an accurate representation of objective reality, but a naturally evolved species will always be adapted to reality. As such, unanticipated products of actual evolution have the potential to offer unanticipated insights.
For the same reason we strive to preserve bio-diversity, I believe that examination of the products of separate evolutions should always be a worthwhile goal for any inquisitive being.
I'd be really surprised if friendly aliens could give us much useful help-- maybe not any.
However, contacting aliens who aren't actively unfriendly (especially if there's some communication) could enable us to learn a lot about the range of what's possible.
And likewise, aliens might be interested in us because we're weird by their standards. Depending on their tech and ethics, the effect on us could be imperceptible, strange and/or dangerous for a few individuals, mere samples of earth life remaining on reservations, or nothing left.
Just for the hell of it...
AFAIK there are currently no major projects attempting to send contact signals around the galaxy (let alone the universe). Our signals may be reaching Vega or some of the nearest star systems, but definitely not much farther. It's not prohibitively difficult to broadcast out to, say, a 1000-light-year-radius ball around Earth, but you're still talking about an antenna that's far larger than anything currently existing.
Right now the SETI program is essentially focused on detection, not broadcasting. Broadcasting is a much more expensive problem. Detection is f...
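As a rough illustration of why broadcasting is so much harder than listening, here is a back-of-the-envelope link-budget sketch. The receiver sensitivity, the distance, and the idea of an omnidirectional beacon are all illustrative assumptions on my part, not figures from the comments above.

```python
import math

# Back-of-the-envelope sketch: how much power would an omnidirectional beacon
# need so that an Arecibo-class receiver could detect it 1000 light years away?
# Both numbers below are rough, assumed orders of magnitude.
LIGHT_YEAR_M = 9.461e15                 # metres in one light year
distance_m = 1000 * LIGHT_YEAR_M        # edge of a 1000-light-year-radius ball
min_detectable_flux = 1e-26             # W/m^2, assumed narrowband sensitivity

# Free-space spreading: an isotropic transmitter's power is spread over a
# sphere of area 4*pi*d^2 by the time it reaches the receiver.
sphere_area = 4 * math.pi * distance_m ** 2
required_power_w = min_detectable_flux * sphere_area

print(f"Sphere area at 1000 ly: {sphere_area:.2e} m^2")
print(f"Radiated power needed for an omnidirectional beacon: {required_power_w:.2e} W")
# Roughly 1e13 W, i.e. on the order of ten terawatts, several times humanity's
# average electricity output. Real transmissions (e.g. the Arecibo message)
# reach comparable effective power only by focusing a narrow beam at one target.
```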
If intelligent aliens arise through evolution, they'll likely be fairly close to humans in mindspace compared to the entire space of possible minds. In order to reach a minimal tech level, they'll likely need to be able to cooperate, communicate, empathize, and put off short-term gains for long-term gains. That already puts them much closer to humans. There are ways this could go wrong (for example, a species that uses large hives like ants or termites). And even a species that close to us in mindspace could still pose a massive existential risk.
Space signals take a long time to travel through a given region of space, and space travel through the same amount of distance seems to take orders of magnitude longer.
If communication is practical and travel is not, then that may be an argument in favor of attempting contact. Friendly aliens could potentially be very helpful to us simply by communicating some information. It's harder (but by no means impossible) to see how unfriendly aliens could cause us harm by communicating with us.
According to The Sunday Times, a few months ago Stephen Hawking made a public pronouncement about aliens:
Though Stephen Hawking is a great scientist, it's difficult to take this particular announcement at all seriously. As far as I know, Hawking has not published any detailed explanation for why he believes that contacting alien races is risky. The most plausible interpretation of his announcement is that it was made for the sake of getting attention and entertaining people rather than for the sake of reducing existential risk.
I was recently complaining to a friend about Stephen Hawking's remark as an example of a popular scientist misleading the public. My friend pointed out that a sophisticated version of the concern that Hawking expressed may be justified. This is probably not what Hawking had in mind in making his announcement, but is of independent interest.
Anthropomorphic Invaders vs. Paperclip Maximizer Invaders
From what Hawking says, it appears as though Hawking has an anthropomorphic notion of "alien" in mind. My feeling is that if human civilization advances to the point where we can explore outer space in earnest, it will be because humans have become much more cooperative and pluralistic than presently existing humans. I don't imagine such humans behaving toward extraterrestrials the way that the Europeans who colonized America behaved toward the Native Americans. By analogy, I don't think that anthropomorphic aliens which developed to the point of being able to travel to Earth would be interested in performing a hostile takeover of Earth.
And even ignoring the ethics of a hostile takeover, it seems naive to imagine that an anthropomorphic alien civilization which had advanced to the point of acquiring the (very considerable!) resources necessary to travel to Earth would have enough interest in the resources on Earth in particular to travel all the way here to colonize Earth and acquire them.
But as Eliezer has pointed out in Humans In Funny Suits, we should be wary of irrationally anthropomorphizing aliens. Even if there's a tendency for intelligent life on other planets to be sort of like humans, such intelligent life may (whether intentionally or inadvertently) create a really powerful optimization process. Such an optimization process could very well be a (figurative) paperclip maximizer. Such an entity would have special interest in Earth, not because of special interest in acquiring its resources, but because Earth has intelligent lifeforms which may eventually thwart its ends. For a whimsical example, if humans built a (literal) staple maximizer, this would pose a very serious threat to a (literal) paperclip maximizer.
The sign of the expected value of Active SETI
It would be very bad if Active SETI led an extraterrestrial paperclip maximizer to travel to Earth and destroy intelligent life here. Is there enough of an upside to Active SETI to justify it anyway?
Certainly it would be great to have friendly extraterrestrials visit us and help us solve our problems. But there seems to me to be no reason to believe that it's more likely that our signals will reach friendly extraterrestrials than it is that our signals will reach unfriendly extraterrestrials. Moreover, there seems to be a strong asymmetry between the positive value of contacting friendly extraterrestrials and the negative value of contacting unfriendly extraterrestrials. Space signals take a long time to travel through a given region of space, and space travel through the same amount of distance seems to take orders of magnitude longer. It seems that if we successfully communicated with friendly extraterrestrials at this time, by the time that they had a chance to help us, we'd already be extinct or have solved our biggest problems ourselves. By way of contrast, communicating with unfriendly extraterrestrials is a high existential risk regardless of how long it takes them to receive the message and react.
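To make the timescale asymmetry concrete, here is a toy calculation. The distance to the hypothetical civilization and the probe speed are assumptions chosen only for illustration, not figures argued for above.

```python
# Toy comparison of signal travel time vs. physical travel time.
# Both numbers below are illustrative assumptions.
distance_ly = 500        # assumed distance to the nearest alien civilization
probe_speed_c = 0.01     # assumed probe speed, as a fraction of light speed

signal_years = distance_ly                  # radio travels at light speed
probe_years = distance_ly / probe_speed_c   # physical travel at 1% of c

print(f"Our signal reaches them after ~{signal_years} years")
print(f"Anything physical (help or harm) arrives ~{int(signal_years + probe_years)} years after we transmit")
# On these assumptions, help could not arrive for ~50,000 years, long after our
# current problems are either solved or fatal; the downside of attracting an
# unfriendly optimizer, by contrast, does not expire on that timescale.
```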
In light of this, I presently believe that the expected value of Active SETI is negative. So if I could push a button to stop Active SETI until further notice, I would.
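The sign argument can be summarized with a toy expected-value calculation. Every number below is a made-up placeholder; only the qualitative relationships (comparable reach probabilities, benefit much smaller than harm) come from the argument above.

```python
# Toy expected-value sketch for Active SETI; all numbers are placeholders.
p_reach_friendly = 1e-6     # chance a friendly civilization picks up the signal
p_reach_unfriendly = 1e-6   # no apparent reason to think this is smaller
benefit_if_friendly = 1e-3  # small: any help arrives far too late to matter much
harm_if_unfriendly = 1e3    # large: an existential catastrophe (arbitrary units)

expected_value = (p_reach_friendly * benefit_if_friendly
                  - p_reach_unfriendly * harm_if_unfriendly)
print(f"Toy expected value: {expected_value:+.2e}")
# Negative whenever p_reach_unfriendly * harm exceeds p_reach_friendly * benefit,
# which is exactly what the friendliness and timescale asymmetries suggest.
```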
The magnitude of the expected value of Active SETI and implications for action
What's the probability that continuing to send signals into space will result in the demise of human civilization at the hands of unfriendly aliens? I have no idea; my belief on this matter is subject to very volatile change. But is it worth it for me to expend time and energy analyzing this issue further and advocating against Active SETI? I'm not sure. All I would say is that I used to think that thinking and talking about aliens was, at present, not a productive use of time, and the above thoughts have made me less certain about this. So I decided to write the present article.
At present I think that a probability of 10^-9 or higher would warrant some effort to spread the word, whereas if the probability is substantially lower than 10^-9 then this issue should be ignored in favor of other existential risks.
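For a sense of what a 10^-9 threshold means in expected-value terms, here is a rough illustration. The number of lives at stake is an illustrative assumption introduced for this sketch, not a figure argued for above.

```python
# Rough illustration of why a probability as small as 10^-9 can still matter.
# The lives-at-stake figure is an assumed placeholder.
p_catastrophe = 1e-9     # the threshold probability discussed above
lives_at_stake = 1e10    # assume ~10 billion present and near-future people

expected_lives_lost = p_catastrophe * lives_at_stake
print(f"Expected lives lost at the threshold: {expected_lives_lost:.0f}")
# ~10 expected lives: comparable to interventions that clearly merit some effort,
# which is one way to motivate spreading the word above this threshold, while
# probabilities far below it would be dominated by other existential risks.
```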
I'd welcome any well considered feedback on this matter.
Relevance to the Fermi Paradox
The Wikipedia page on the Fermi Paradox references
The possibility of extraterrestrial paperclip maximizers, together with the apparent asymmetry between the upside of contact with friendly aliens and the downside of contact with unfriendly aliens, pushes in the direction that the reason for the Great Silence is that intelligent aliens have deemed it dangerous to communicate.