To summarize using information security language:
"Passive SETI exposes an attack surface which accepts unsanitized input from literally anyone, anywhere in the universe. This is very risky to human civilization.'
While these ideas are interesting, I think there are many reasons not to worry about SETI. The first is that I find the "malicious signal" attack very implausible to begin with. Even if the simple plain-text message "There is no God" would be enough to wipe out a typical civilization, I still think the aliens don't stand much of a chance. How could they create a radio signal that carries that exact meaning to a majority of all possible civilizations that could find the broadcast? And this is a scenario where the cards are stacked in the aliens' favor, by assuming such a low-data packet can wipe us out. A powerful AI would be a much larger piece of data, which multiplies all of the difficulties of sending it to an unknown civilization.
My second reason is that I think singling out SETI specifically is unfair. We are looking at all kinds of space data all the time: radio telescopes, ordinary telescopes, and now even gravitational wave detectors. Almost all of these devices are aimed at understanding natural processes. If you were some aliens who DID have the ability to send a malicious death-message, then your message might be detected by SETI, but it's just as likely to be detected by someone else first. Someone notices something odd, maybe "gamma ray bursts" from the galactic center. They investigate what (presumably natural) mechanism might cause them, and then, oh no! Someone put the spectrum of a gamma ray burst into the computer, but its Fourier series contained the source code of an AI that then started spontaneously running on the office computer before escaping onto the internet to start WW3.
Your second paragraph seems unpersuasive to me. I would think that designing a program that can wipe out a civilization conditional on that civilization intentionally running it would be many orders of magnitude easier than designing a program that can wipe out a civilization when that civilization tries to analyze it as pure data.
Both things would require that you somehow make your concepts compatible with alien information systems (your first counter-argument), but the second additionally requires that you exploit some programming bug (such as a buffer overflow) in a system you have never examined. That seems to me like it would require modeling the unmet aliens in much greater detail.
Now, you could argue that an astronomer who is attempting to analyze "gamma ray bursts" and accidentally discovers an alien signal is just as likely as SETI to immediately post it to the Internet. But suggesting they would accidentally execute alien code, without realizing that it's code, seems like a pretty large burdensome detail.
(Contrariwise, conditional on known-alien-source-code being posted to the Internet, I would say the probability of someone trying to run it is close to 1.)
Curated
I'd been vaguely aware of SETI-related x-risks before, but this post both summarized some past work and introduced new considerations in a fairly compelling way.
I don't know that I buy the set of Fermi calculations towards the end about how risky SETI is, but each of the considerations listed throughout the article made sense as a thing-to-consider.
I also appreciated two comments that helped crystallize this for me: RedMan's note that "Passive SETI exposes an attack surface which accepts unsanitized input from literally anyone, anywhere in the universe. This is very risky to human civilization", as well as Ben's note that it's a bit weird to single out SETI when we have tons of astronomy labs listening to the universe all the time for all kinds of reasons which could also be attack vectors.
How would any of this work? How do you go from a sequence of unknown symbols with no key to even understanding the definitions of the alien bytecode or programming language? Everyone posits things like "they will use universal principles of math in their volume 0 messages" but how do you bootstrap from that to meaning when you have nothing but their signal? Are we even vaguely sure this is possible?
When we do language pairing in the real world it's between beings that share almost identical compute and sensor topologies, and we can 'point' to shared objects that we can both observe with our sensors. This creates a shared context that you don't get from a series of amplitude/phase changes from a transmitter many lightyears away.
And then ok, say you get this far. Who will run alien code shared only as 'bytecode' without a VM? Who would be dumb enough not to try to translate or analyze it in a high level form that humans can understand?
In fact it seems kinda obvious that any 'news' aliens blast to us from that distance has to be a self replicating parasite. Why else would they invest the resources? We should know that it's highly dangerous data and treat it accordingly.
A proposal for how to initiate communication with aliens, starting from just mathematics, is Hans Freudenthal's "Lincos: Design of a Language for Cosmic Intercourse, Part 1". (No part 2 ever appeared.)
A fictional example of receiving a dangerous message from the stars is Piers Anthony's "Macroscope".
How would any of this work? How do you go from a sequence of unknown symbols with no key to even understanding the definitions of the alien bytecode or programming language?
Some people have already proposed ways of doing this. For example, in 1960 Hans Freudenthal described Lincos, which is intended to be readily understandable by intelligent aliens on the receiving end of interstellar contact. Maybe he succeeded, maybe he didn't, but I don't think the problem is very hard in an absolute sense. Extremely technologically advanced aliens should be able to solve this problem.
Since aliens inhabit the same physical universe we do, and likely evolved via natural selection, it's very likely they will share a few key cognitive concepts with us. Of course, maybe you think I'm making unjustified assumptions here, but I'd consider these to be among the least objectionable assumptions in this whole framework.
Who will run alien code shared only as 'bytecode' without a VM? Who would be dumb enough not to try to translate or analyze it in a high level form that humans can understand? [...] We should know that it's highly dangerous data and treat it accordingly.
As I pointed out in the post, the official protocol of the SETI Institute recommends that we immediately flood the internet with any alien messages we receive. It's going to be pretty hard to prevent people from running code, once that happens.
The irony is that you immediately recognized this as a bad policy. But that's exactly my point.
Don't get me wrong: I'd love to be wrong here, because that would mean smart people had already thought about this and instantly realized the flaw in running arbitrary computer programs sent to us by aliens without even a token effort to review them first. But, uhh, this is not an area our civilization has been particularly good at thinking about so far.
"likely evolved via natural selection"
My default expectation would be that it's a civilization descended from an unaligned AGI, so I'm confused why you believe this is likely.
A guess: you said you're optimistic about alignment by default -- so do you expect aligned AGI acting in accordance with the interests of a natural-selection-evolved species?
A guess: you said you're optimistic about alignment by default -- so do you expect aligned AGI acting in accordance with the interests of a natural-selection-evolved species?
In the context of this comment, I don't think it really matters whether the alien AGIs are aligned or not. The point is whether they will share cognitive concepts with us. I think AIs will share at least a few cognitive concepts even if they're very misaligned with us. It's kind of hard for me to imagine this not being true (aren't they living in the same universe?). That said, I admit that point about evolution wasn't very strong; I mostly meant that aliens would descend from some precursor species that evolved via natural selection. The much stronger argument is that aliens will share some cognitive concepts because there's a natural set of concepts in the universe, such as the concept of "atoms".
One way of sending data is the following: aliens could send easily recognisable 2D images using the principles of TV, with a line-ending symbol every n bits. Using pictures, they would send blueprints of a simple computer, like a Turing machine, and then code for it. This computer would draw and adapt a blueprint for a more efficient computer, which could run more complex code, which would be the AI.
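Here's a minimal sketch, from the receiving side, of what decoding such a line-delimited bitmap could look like; the marker pattern and the example image are purely illustrative assumptions, not any actual protocol:

```python
# Minimal sketch: decode a "TV-style" bitstream into a 2D bitmap by splitting
# on a line-ending marker. Assumes the sender chose a marker pattern that never
# occurs inside the image data (a real scheme would need escaping or a
# self-synchronizing code to guarantee this).

MARKER = [0, 0, 1, 0, 0]  # hypothetical line-ending symbol

def split_into_lines(bits, marker=MARKER):
    """Split a flat bit list into scan lines at each occurrence of the marker."""
    lines, current, i = [], [], 0
    while i < len(bits):
        if bits[i:i + len(marker)] == marker:
            lines.append(current)
            current = []
            i += len(marker)
        else:
            current.append(bits[i])
            i += 1
    if current:  # trailing data with no final marker
        lines.append(current)
    return lines

def render(lines):
    """Print the bitmap, treating 1 as ink and 0 as blank."""
    for row in lines:
        print("".join("#" if b else "." for b in row))

# Toy example: a 3x3 plus sign, with one marker after each row.
rows = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
stream = [b for row in rows for b in row + MARKER]
render(split_into_lines(stream))
# .#.
# ###
# .#.
```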
The AI would still need to be relatively simple, or trained after being built, right? They wouldn't be able to encode billions of parameter values in the images.
The complexity of the human genome puts a rough upper bound on how many parameters would be required to specify an AGI (it will have more learned parameters, once deployed). Of course, a superintelligence capable of taking over the world is harder to bound.
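As a very rough back-of-envelope illustration of that bound (assuming roughly 3 billion base pairs at 2 bits each, and ignoring that most of the genome isn't specifying a brain):

```python
# Back-of-envelope: raw information content of the human genome, taken as a
# loose upper bound on the data needed to specify (not train) an intelligence.
base_pairs = 3.1e9        # approximate length of the human genome
bits_per_base = 2         # four possible bases -> 2 bits each
megabytes = base_pairs * bits_per_base / 8 / 1e6
print(f"~{megabytes:.0f} MB uncompressed")  # on the order of 750-800 MB
```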
One further thing to note is that an alien AI might require a lot of memory and processing power to perform its intended task. As I wrote in the post, this is one reason to suppose that aliens might want to target civilizations after they have achieved a certain level of technological development. Because otherwise their scheme might fail.
I am happy that the idea of SETI risk is getting more traction. I completely agree with your conclusion that the grabby aliens model is an argument in favor of SETI risk.
Fascinating subject.
If anyone's interested, these issues feature in an excellent sci-fi trilogy by a Chinese author, Cixin Liu, called The Three-Body Problem.
The idea of alien civs taking pre-emptive aggressive action to neutralise new civs (which may in time pose a threat) is covered, as is an ingenious way of combatting this.
The novels are very scientifically literate as well as being an engrossing read.
there remains a credible possibility that grabby aliens would benefit by sending a message that was carefully designed to only be detectable by civilizations at a certain level of technological development
oh wow, after reading this, I came up with the same explanation you wrote in the following 2 paragraphs just before reading them 😄
What's the basis for believing that intelligent alien life is likely to exist elsewhere in the universe? We have one data point for coordinated intelligent life coming about, and we don't even know how much additional extinction risk lies between us and becoming interstellar.
I don't know how you would even begin to try and calculate those odds.
From what I've seen, the arguments for alien life come down to there being a lot of planets and a lot of time, but we have no idea what number we're balancing against that. There is no way to establish a lower bound on how unlikely the emergence of life is. The only evidence is an absence of evidence.
I am willing to bet 3% of my net worth that we will not be contacted by aliens in the next 1000 years.
I have no particular opinion on SETI x-risk, but I have just decided that if I ever make a sci-fi 4X game, it should include memetic weapons.
Even granting that there are grabby aliens in your cosmic neighborhood (click here to chat with them*), I find the case for SETI-risk entirely unpersuasive (as in, trillionths of a percent plausible, or indistinguishable from cosmic background uncertainty), and will summarize some of the arguments others have already made against it and some of my own. I think it is so implausible that I don't see any need to urge SETI to change their policy. [Throwing in a bunch of completely spitballed, mostly-meaningless felt-sense order-of-magnitude probability estimates.]
Parsability. As Ben points out, conveying meaning is hard. Language is highly arbitrary; the aliens are going to know enough about human languages to have a crack at composing bytestrings that compile executable code? No chance if undirected transmission, 1% if directed transmission intended to exploit my civilization in particular.
System complexity. Dweomite is correct that conveying meaning to computers is even harder. There is far too much flexibility, and far too many arbitrary and idiosyncratic choices made in computer architectures and programming languages. No chance if undirected, 10% if directed, conditioning on all above conditions being fulfilled.
Transmission fidelity. If you want to transmit encrypted messages or program code, you can't be dropping bits. Do you know what frequency I'm listening on, and what my sample depth is? The orbital period of my planet and the location of my telescope? What the interplanetary and terrestrial weather conditions that day are going to be, you being presumably light-years away or you'd have chosen a different attack vector? You want to mail me a bomb, but you're shipping it in parts, expecting all the pieces to get there, and asking me to build it myself as well? 0.01% chance if undirected, 1% if directed, conditioning on all above conditions being fulfilled.
Compute. As MichaelStJules's comment suggests, if the compute needed to reproduce powerful AI is anything like Ajeya's estimates, who cares if some random asshole runs the thing on their PC? No chance if undirected, 1% if directed, conditioning on all above conditions being fulfilled.
Information density. Sorry, how much training is your AI going to have to do in order to be functional? Do you have a model that can bootstrap itself up from as much data as you can send in an unbroken transmission? Are you going to be able to access the hardware necessary to obtain more information? See above objections. There's terabytes of SETI recordings, but probably at most megabytes of meaningful data in there. 1% chance if undirected, 100% if directed, conditioning on all above conditions being fulfilled.
Inflexible policy in the case of observed risk. If the first three lines look like an exploit, I'm not posting it on the internet. Likewise, if an alien virus I accidentally posted somehow does manage to infect a whole bunch of people's computers, I'm shutting off the radio telescope before you can start beaming down an entire AI, etc, etc. (I don't think you'd manage to target all architectures with a single transmission without being detected; even if your entire program was encrypted to the point of indistinguishability from entropy, the escape code and decrypter are going to have to look like legible information to anyone doing any amount of analysis.) Good luck social engineering me out of pragmatism, even if I wasn't listening to x-risk concerns before now. 1% chance if undirected, 10% if directed, conditioning on all above conditions being fulfilled.
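For what it's worth, multiplying my directed-transmission numbers above gives a sense of how quickly the compounded probability shrinks; this is just an illustrative product of spitballed figures, with no prior on there being an attacker in range at all:

```python
# Product of the spitballed directed-transmission estimates above; each factor
# is conditional on everything before it succeeding.
factors = {
    "parsability":           0.01,
    "system complexity":     0.10,
    "transmission fidelity": 0.01,
    "compute":               0.01,
    "information density":   1.00,
    "inflexible policy":     0.10,
}

p = 1.0
for f in factors.values():
    p *= f

print(f"compound probability ~ {p:.0e}")  # ~1e-08, before any prior on the attack existing
```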
So if you were an extraterrestrial civilization trying this strategy, most of the time you'd just end up accomplishing nothing, and if you even got close to accomplishing something, you'd more often be alerting neighboring civilizations about your hostile intentions than succeeding. Maybe you'd have a couple lucky successes. I hope you are traveling at a reasonable fraction of c, because if not you've just given your targets a lot of advance warning about any planned invasion.
I just don't think this one is worth anyone's time, sorry. I'd expect any extraterrestrial communications we receive to be at least superficially friendly, and intended to be clearly understood rather than accidentally executed, and the first sign of hostility to be something like a lethal gamma-ray burst. In the case that I did observe an attempt to execute this strategy, I'd be highly inclined to believe that the aliens already had us completely owned and were trolling us for lolz.
*Why exactly did you click on a spammy-looking link in a comment on the topic of arbitrary code execution?
Wouldn't attempting to destroy a technologically-capable-but-not-yet-AI-ruled civilization through an information-hazard attack be just too narrow? It's a vulnerable state that likely only lasts under 100 years (based on a sample of 1).
However, I must agree that SETI's information disclosure policy is extremely irresponsible.
It might be that interstellar travel is really really hard even for AI-ruled civilizations. In that case, sending such messages is the only way to spread a civilization.
This is very interesting. My only comment/question for the moment at least is: do we have to keep saying 'grabby'?
I know you are following Hanson et al. in using it, but people are already very resistant to engaging with this type of "far out" discussion. It really doesn't make it any easier to take the issue seriously when the best material you can show them is full of phrases such as "grabby alien expansion is constrained by a number of factors", and big graphics with 'Grabby Origin' at the centre.
Curious if you have suggestions for a replacement term for "grabby" that you'd feel better about?
'Expansionist'? I hadn't thought about it, particularly. It just struck me how silly it was making everything sound, even to me. I can only imagine how quickly it would be dismissed by my more weirdness-resistant friends, who unfortunately are representative of a lot of the world, including many of the people we need to persuade to take these things seriously.
The idea that "grabby aliens" pose a danger assumes that we are not a grabby alien colony.
Suppose that the hardest step towards technological intelligence is that from chimpanzee-level cognition to human-level cognition. And suppose that grabby aliens are not concerned with propagating their biological substrate, but rather their cognitive architecture. Then their approach to expanding could be to find worlds where chimpanzee-level beings have already evolved, and "uplift" them to higher-level cognition. This is a theme of a series of novels by David Brin.
But now, unlike Brin, suppose that they believe that the best approach for the new creatures with higher-level cognition to mature is to just leave them alone (until they reach some threshold known only to the grabby aliens). Then we could be a grabby alien colony without knowing it, and hence have nothing to fear from the grabby aliens. (At least nothing to fear along the usual lines.)
And suppose that grabby aliens are not concerned with propagating their biological substrate, but rather their cognitive architecture. Then their approach to expanding could be to find worlds where chimpanzee-level beings have already evolved, and "uplift" them to higher-level cognition.
I think a wide variety of strategies can be imagined that would be far more effective at colonizing the universe, in the sense of squeezing out the most computations per unit of matter. Direct manufacturing of computing hardware aided by autonomous self-replicating AI around every star system would probably work well.
More generally, the insight of grabby aliens is that aliens should be big and visible, rather than quiet and isolated (as they're traditionally depicted). This puts big constraints on what should be possible, assuming we buy the model.
Maybe a Dyson sphere consisting of a cloud of self-replicating nanomachines works better than a planet with biological organisms. But remember, whatever one might think from reading lots of posts on LessWrong, that's not actually a proven technology, whereas biology is (although "uplifting" isn't).
One issue is robustness to occasional catastrophes. If I may reference another work of fiction, there's The Outcasts of Heaven Belt, by Joan Vinge.
SETI stands for the search for extraterrestrial intelligence. A few projects, such as Breakthrough Listen, have secured substantial funding to observe the sky and crawl through the data to look for extraterrestrial signals.
A few effective altruists have proposed that passive SETI may pose an existential risk to humanity (for some examples, see here). The primary theory is that alien civilizations could continuously broadcast a highly optimized message intended to hijack or destroy any other civilizations unlucky enough to tune in. Many alien strategies can be imagined, such as sending the code for an AI that takes over the civilization that runs it, or sending the instructions on how to build an extremely powerful device that causes total destruction.
Note that this theory is different from the idea that active SETI is harmful, i.e., messaging aliens on purpose. I think active SETI is substantially less likely to be harmful, and yet it has received far more attention in the literature.
Here, I collect my current thoughts about the topic, including arguments for and against the plausibility of the idea, and potential strategies to mitigate existential risk in light of the argument.
In the spirit of writing fast, but maintaining epistemic rigor, I do not come to any conclusions in this post. Rather, I simply summarize what I see as the state-of-the-debate up to this point, in the expectation that people can build on the idea more productively in the future, or point out flaws in my current assumptions or inferences.
Some starting assumptions
Last year, Robin Hanson et al. published their paper If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare. I consider their paper to provide the best available model to date on the topic of extraterrestrial intelligence and the Fermi Paradox (along with a very similar series of papers written previously by S. Jay Olson). You can find a summary of the model from Robin Hanson here, and a video summary here.
The primary result of their model is that we can explain the relatively early appearance of human civilization—compared to the total lifetime of the universe—by positing the existence of so-called grabby aliens that expand at a large fraction of the speed of light from their origin. Since grabby aliens quickly colonize the universe after evolving, their existence sets a deadline for the evolution of other civilizations. Our relative earliness may therefore be an observation-selection effect arising from the fact that civilizations like ours can't evolve after grabby aliens have already colonized the universe.
Assuming this explanation of human earliness is correct, and we are not an atypical civilization among all the civilizations that will ever exist, we should expect much of the universe to already be colonized by grabby aliens by now. In fact, as indicated by figure 13 in the paper, such alien volumes should appear to us to be larger than the full moon.
Given the fact that we do not currently see any grabby aliens in our sky, Robin Hanson concludes that they must expand quickly—at more than half the speed of light. He reaches this conclusion by applying a similar selection-effect argument as before: if grabby alien civilizations expanded slowly, then we would be more likely to see them in the night sky, but we do not see them.
However, the assumption that grabby aliens, if they exist, would be readily visible to observers is itself debatable, as Hanson et al. acknowledge,
As I will argue, the theory that SETI is dangerous hinges crucially on the rejection of this assumption, along with the rejection of the claim that grabby aliens must expand at velocities approaching the speed of light. Together, these claims are the best reasons for believing that SETI is harmless. However, if we abandon these epistemic commitments, then SETI may indeed pose a substantial risk to humanity, making it worthwhile to examine them in greater detail.
Alien expansion and contact
Grabby aliens are not the only type of aliens that could exist. There could be "quiet" aliens that do not seek expansionist ends. However, in section 15 of their paper, Hanson et al. argue that in order for quiet aliens to be common, it must be that there is an exceptionally low likelihood that a given quiet alien civilization will transition to becoming grabby, which seems unjustified.
Given this inference, we should assume that the first aliens we come into contact with will be grabby. Coming into physical contact with grabby aliens within the next, say, 1000 years is very unlikely. The reason is that grabby aliens have existed, on average, for many millions of years, and thus the only way we will encounter them physically any time soon is if we happen, right now, to be on the exact outer edge of their current sphere of colonization, which seems implausible (see figure 12 in Hanson et al. for a more quantified version of this claim).
It is far more likely that we will soon come into contact with grabby aliens by picking up signals that they sent in the distant past. Since grabby alien expansion is constrained by a number of factors (such as interstellar dust, acceleration, and deceleration), they will likely expand at a velocity significantly below the speed of light. This implies that there will be a significant lag between when the first messages from grabby aliens could have been received by Earth-based-observers, and the time at which their colonization wave arrives. The following image illustrates this effect, in two dimensions,
The orange region represents the volume of space that has already been colonized and transformed by a grabby alien civilization; it has a radius of R1. By contrast, the light-blue region, with radius R2, represents the volume of space that could have been receiving light-speed messages from the grabby aliens by now.
In general, the smaller the ratio R1/R2 is across all grabby aliens, the more likely it is that any given point in space will be in the light-blue region of some grabby alien civilization as opposed to the orange region. If we happen to be in the light-blue region of another grabby alien civilization, it would imply that we could theoretically tune in and receive any messages they decided to send out long ago.
Since the volume of a sphere is (4/3)πr³, with a ratio of even 0.9 (or equivalently, if grabby aliens expand at 90% of the speed of light), only 72.9% of the total volume would be part of the orange region, with 27.1% belonging to the light-blue region. This presents a large opportunity for even very rapidly expanding grabby alien civilizations to continuously broadcast messages, in order to expand their supremacy by hijacking civilizations that happen to evolve in the light-blue region. I think grabby aliens would perform a simple expected-value calculation and conclude that continuous broadcasting is worth the cost in resources. Correspondingly, this opportunity provides the main reason to worry that we might be hijacked by a grabby alien civilization at some point ourselves.
Generally, the larger the ratio R1/R2, the less credence we should have that we are currently in danger of being hijacked by incoming messages. At a ratio of 0.99, only about 3% of the total volume is in the light-blue region.
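These fractions follow directly from how sphere volumes scale; here is the quick check, treating the colonized and message-reachable regions as concentric spheres of radii R1 and R2:

```python
# Fraction of the message-reachable sphere (radius R2) that lies outside the
# physically colonized sphere (radius R1): 1 - (R1/R2)^3.
for ratio in (0.9, 0.99):
    colonized = ratio ** 3
    print(f"R1/R2 = {ratio}: colonized {colonized:.1%}, light-blue {1 - colonized:.1%}")
# R1/R2 = 0.9:  colonized 72.9%, light-blue 27.1%
# R1/R2 = 0.99: colonized 97.0%, light-blue 3.0%
```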
In their paper, Stuart Armstrong and Anders Sandberg attempt to show, using relatively modest assumptions, that grabby aliens could expand at speeds very close to the speed of light. This is generally recognized to be the strongest argument against the idea that SETI is dangerous.
According to table 5 in their paper, a fusion-based rocket could let an expansionist civilization expand at 80% of the speed of light. However, if we're able to use coilguns instead, then we get to 99%, which is perhaps more realistic.
Still, not everyone is convinced. For example, in a thread from 2018 in response to this argument, Paul Christiano wrote,
I am not qualified to evaluate the plausibility of this assessment. That said, given that at least a few smart people seem to think there is a non-negligible chance that near-light-speed space colonization is unattainable, I think it is sensible to take the risk from SETI seriously.
Alien strategy
Previously, I noted that another potential defeater to the idea that SETI is dangerous is that, if we were close enough to a grabby alien civilization to receive messages from them, they should already be clearly visible in the night sky, perhaps even with the naked eye. I agree their absence is suspicious, and it's a strong reason to doubt that there are any grabby aliens nearby currently.
However, given that we currently have very little knowledge about what form grabby alien structures might take, it would be premature to rule out the possibility that grabby alien civilizations may simply be transparent to our current astronomical instruments. I currently think that making progress on answering whether this idea is plausible is one of the most promising ways of advancing this debate further.
One possibility that we can probably rule out is the idea that grabby aliens would be invisible if they were actively trying to contact us. Wei Dai points out,
My understanding is that, given a modest amount of energy relative to the energy output of a single large galaxy, grabby aliens could continuously broadcast a signal that would be easily detectable across the observable universe. Thus, if we were in the sphere of influence of a grabby alien civilization, they should have been able to contact us by now.
In other words, the fact that we haven't yet been contacted by grabby aliens implies that they either don't exist near us, or they haven't been trying very hard to reach us.
Case closed, then, right? SETI might be hopeless, but at least it's safe? Not exactly.
While some readers may object that we are straining credulity at this point—and for the most part, I agree—there remains a credible possibility that grabby aliens would benefit by sending a message that was carefully designed to only be detectable by civilizations at a certain level of technological development. If true, this would be consistent with our lack of alien contact so far, while still suggesting that SETI poses considerable risk to humanity. This assumption may at first appear to be baseless—a mere attempt to avoid falsification—but there may be some merit behind it.
Consider a very powerful message detectable by any civilization with radio telescopes. The first radio signals from space were detected in 1932 by Karl Guthe Jansky. Let's also assume that the best strategy for an alien hijacker is to send the machine code for an AI capable of taking over the civilization that receives it.
In 1932, computers were extremely primitive. Therefore, if humans had received such a message back then, there would have been ample time for us to learn a lot more about the nature of the message before we had the capability of running the code on a modern computer. During that time, it is plausible that we would uncover the true intentions behind it, and coordinate to prevent the code from being run.
By contrast, if humans today uncovered an alien message, there is a high likelihood that it would end up on the internet within days after the discovery. In fact, the SETI Institute even recommends this as part of their current protocol,
As Paul Christiano notes, aliens will likely spend a very large amount of resources simulating potential contact events, and optimizing their messages to ensure the maximum likelihood of successful hijacking. While we can't be sure what strategy that implies, it would be unwise to assume that alien messages will necessarily take any particular character, such as being easily detectable, or clearly manipulative.
Would alien contact be good?
Alien motivations are extremely difficult to predict. As a first-pass model, we could treat them as akin to paperclip maximizers. If they hijacked our civilization to produce more paperclips, that would be bad from our perspective.
At the same time, Paul Christiano believes that there's a substantial chance that alien contact would be good on complicated decision-theoretic grounds,
Wei Dai, on the other hand, remains skeptical of this argument. As for myself, I'm inclined to expect relatively successful AI alignment by default, making this point somewhat moot. But I can see why others might disagree and would prefer to take their chances running an alien program.
My estimate of risk
Interestingly, the literature on SETI risk is extremely sparse, even by the standards of ordinary existential risk work. Yet, while nowhere near probable, I think SETI risk is one of the more credible existential risks to humanity other than AI. This makes it a somewhat promising target for future research.
To be more specific, I currently think there is roughly a 99% chance that one or more of the arguments I gave above imply that the risk from SETI is minimal. Absent these defeaters, I think there's perhaps a 10-20% chance that SETI will directly cause human extinction in the next 1000 years. This means I currently put the risk of human extinction due to SETI at around 0.1-0.2%. This estimate is highly non-robust.
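Spelling out the arithmetic behind that estimate:

```python
# ~1% chance that none of the defeaters hold, times a 10-20% chance of
# extinction conditional on that, gives the headline 0.1-0.2% figure.
p_no_defeaters = 0.01
for p_extinction_given_no_defeaters in (0.10, 0.20):
    print(f"{p_no_defeaters * p_extinction_given_no_defeaters:.1%}")
# 0.1%
# 0.2%
```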
Strategies for mitigating SETI risk
My basic understanding is that SETI has grown dramatically in recent years. For a long time, potential alien messages, such as the Wow! signal, were collected very slowly and processed by hand.
We now appear to be going through a renaissance in SETI. The Breakthrough Listen project, which began in 2016,
If we are indeed going through a renaissance, now would be a good time to advance policy ideas about how we ought to handle SETI.
As with other existential risks, often the first solutions we think of aren't very good. For example, while it might be tempting to push for a ban on SETI, in reality few people are likely to be receptive to such a proposal.
That said, there do appear to be genuinely tractable and robustly positive interventions on the table.
As I indicated above, the SETI Institute's protocol on how to handle confirmed alien signals seems particularly fraught. If a respectable academic wrote a paper carefully analyzing how to deal with alien signals, informed by the study of information hazards, I think there is a decent chance that the kind people at the SETI Institute would take note, and consider improving their policy (which, for what it's worth, was last modified in 2010).
If grabby aliens are close enough to us, and they really wanted to hijack our civilization, there's probably nothing we could do to stop them. Still, I think the least we can do is have a review process for candidate alien-signals. Transparency and openness are usually good for these types of affairs, but when there's a non-negligible chance of human extinction resulting from our negligence, I think it makes sense to consider creating a safeguard to prevent malicious signals from instantly going public after they're detected.