It actually may work - but not because aliens will come to save us; there is no time for that. Rather, any signal we send into space will reach any given star before the intelligence-explosion wave does, so aliens may learn about the potentially hostile nature of our AI.
Our AI will know all this, and if it wants better relations with aliens, it may invest some resources in simulating friendliness toward us. Cheap for the AI, cheap for us.
So imagine you're living your life as you do today, and somehow a message arrives from a remote tribe that has never had contact with civilization before. The message reads: "Help! We've discovered fire and it's going to kill us! Help! Come quick!"
What do you do?
I'm not sure this analogy teaches us much. A lot depends on what the surprise is - that there is a civilization there that knows how to communicate but hasn't yet, or that the civilization we've been leaving alone for Prime Directive reasons has finally discovered fire. A lot depends on whether we take that fear seriously as well.
The answer could be any of:
I'm not sure how you envision "sending signals into space" as noticeably different from what we've been doing for the last 100 years or so. Any civilization close enough to hear a more directed plea, and advanced enough to intervene in any way, is already monitoring the Internet and knows whatever some subset of us could say.
Internet communications are mostly encrypted, so such a civilization would need to 1) be familiar with our network protocols, particularly TLS, 2) inject its signals into our networks, and 3) somehow capture a response.
I'm not sure our radio noise is powerful enough to be heard at interstellar distances. Something like the Arecibo message is much more likely to reach another solar system.
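For intuition on why directed messages matter here, a rough inverse-square-law comparison (a minimal sketch; the ~1 MW broadcast-leakage figure and the ~10 TW Arecibo-message EIRP are illustrative order-of-magnitude assumptions, not numbers from the thread):

```python
# Back-of-envelope comparison of signal flux at interstellar distance.
# The 1 MW leakage and 10 TW Arecibo-style EIRP are illustrative assumptions.

import math

LIGHT_YEAR_M = 9.461e15  # metres in one light year

def flux_at_distance(eirp_watts: float, distance_ly: float) -> float:
    """Free-space flux (W/m^2) from the inverse-square law."""
    d = distance_ly * LIGHT_YEAR_M
    return eirp_watts / (4 * math.pi * d ** 2)

distance_ly = 10  # a nearby star, for illustration
tv_leakage = flux_at_distance(1e6, distance_ly)    # ~1 MW incidental broadcast leakage
arecibo_msg = flux_at_distance(1e13, distance_ly)  # ~10 TW narrow directed beam

print(f"TV leakage flux at {distance_ly} ly:    {tv_leakage:.2e} W/m^2")
print(f"Arecibo-style flux at {distance_ly} ly: {arecibo_msg:.2e} W/m^2")
print(f"Ratio: {arecibo_msg / tv_leakage:.1e}x stronger")
```

With these assumptions, a directed Arecibo-style transmission delivers on the order of ten million times more flux at the receiver than incidental broadcast leakage.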
It could also be important to specifically send a call for help. A call for help signals our explicit consent to intervention, which may matter to an advanced civilization that follows something like a non-intervention rule.
Desiderata for an alien civ that saves us:
- able to meaningfully fight a hard superintelligence
Up to y'all whether the intersection of these criteria sounds likely.
I am pretty sure that the OP's main source of hope is that the alien civ will intervene before a superintelligence is created by humans.
If a hostile alien civilization notices us, we’re going to die. But if we’re going to die from the AGI anyway, who cares?
Anyone with a p(doom from AGI) below 99% should conclude that the expected harm from broadcasting outweighs the likely benefits.
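One way to see the structure of this claim is a toy expected-value comparison (a minimal sketch; every probability below is a made-up placeholder, not a number from the thread):

```python
# Toy expected-value sketch of "broadcast vs. stay silent".
# All probabilities are placeholders for illustration only.

p_doom = 0.90      # P(AGI kills us if nobody intervenes)
p_hear = 0.01      # P(a capable civilization receives the call in time)
p_friendly = 0.5   # P(a responding civilization helps rather than harms us)

# If we stay silent, we survive only if AGI doesn't kill us.
survive_silent = 1 - p_doom

# If we broadcast: a friendly responder saves us, a hostile responder kills us,
# and with no response we face the original AGI risk.
survive_broadcast = (
    p_hear * p_friendly * 1.0
    + p_hear * (1 - p_friendly) * 0.0
    + (1 - p_hear) * (1 - p_doom)
)

print(f"P(survive | silent)    = {survive_silent:.3f}")
print(f"P(survive | broadcast) = {survive_broadcast:.3f}")
```

With these placeholder numbers, whether broadcasting looks good or bad flips depending on p(doom) and on how often responders are friendly rather than hostile.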
Not sure about it. It depends on the proportion of alien civilizations that would cause more harm than good upon contact with us, and that proportion is unknown.
A common argument is that an interstellar civilization must be sufficiently advanced in both technology and ethics. But I don't think that argument is very convincing.
Even if the alien civilization isn't benevolent, they would probably have more than enough selfish reasons to prevent a superintelligence from appearing on another planet.
So the question is whether they would be technologically advanced enough to arrive here within 5, 10, or 20 years, or whatever time we have left until AGI.
An advanced civilization that isn't itself a superintelligence would probably have faced an AI extinction scenario of its own and survived it, so it would stand a much better chance of aligning an AI than we do. But past success at aligning an AI wouldn't guarantee future success.
Since we are certainly less intelligent than said advanced alien civilization, they would either have to suppress our freedom, at least partially, or find a way to make us smarter or more risk-averse.
Another question would be whether the s-risk from an alien civilization is worse than the s-risk from a superintelligence.
Let's assume that Eliezer is right: soon we'll have an AGI that is very likely to kill us all. (Personally, I think Eliezer is right.)
There are several ways to reduce the risk, in particular speeding up alignment research and slowing down capabilities research by various means.
One underexplored way to reduce the risk is active SETI (also known as METI).
The idea is as follows:
The main advantage of the method is that it can be implemented by a small group of people within a few months, without governments and without billions of dollars. Judging by the running costs of the Arecibo Observatory, one could theoretically rent it for a year for only $8 million. Sending only a few hundred space messages could be even cheaper.
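As a rough sanity check on that cost claim (the 30-minute transmission time and 50% usable duty cycle below are my own illustrative assumptions; only the $8 million/year figure comes from the post):

```python
# Rough cost-per-message estimate based on the $8M/year rental figure.
# The per-message dish time and duty cycle are illustrative assumptions.

annual_cost_usd = 8_000_000
usable_hours_per_year = 365 * 24 * 0.5   # assume half the year is usable for transmission
hours_per_message = 0.5                  # assume ~30 minutes of dish time per message

cost_per_hour = annual_cost_usd / usable_hours_per_year
cost_per_message = cost_per_hour * hours_per_message
messages_for_1_percent = (0.01 * annual_cost_usd) / cost_per_message

print(f"Cost per dish-hour: ${cost_per_hour:,.0f}")
print(f"Cost per message:   ${cost_per_message:,.0f}")
print(f"Messages for 1% of the annual budget: {messages_for_1_percent:,.0f}")
```

Under these assumptions, a few hundred messages would indeed cost only a small fraction of the full-year rental figure.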
Obviously, the method relies on an advanced alien civilization existing within a few light-years of Earth. That seems unlikely, but who knows.
Is it worth trying?