Let's assume that Eliezer is right: soon we'll have an AGI that is very likely to kill us all. (Personally, I think Eliezer is right.)
There are several ways to reduce the risk, most notably speeding up alignment research and slowing down capabilities research, by various means.
One underexplored way to reduce the risk is active SETI, also known as METI (Messaging Extraterrestrial Intelligence).
The idea is as follows:
- Send powerful radio signals into space: "guys, soon we'll be destroyed by a hostile AGI. Help us!" (e.g. using a language constructed for the task, like Lincos)
- If a hostile alien civilization notices us, we're going to die. But if we're going to die from the AGI anyway, who cares?
- If a benevolent alien civilization notices us, it could arrive in time to save us.
The main advantage of this method is that it can be implemented by a small group of people within a few months, without governments and without billions of dollars. Judging by the running costs of the Arecibo Observatory, one could theoretically rent it for a year for only about $8 million. Sending only a few hundred space messages could be even cheaper.
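As a rough illustration of the cost claim, here is a back-of-envelope sketch. The ~$8 million annual figure comes from the Arecibo estimate above; the transmission time per message and the usable observing fraction are purely assumed numbers for illustration:

```python
# Back-of-envelope cost estimate for a METI campaign.
# Assumptions (not from any official source): renting a large radio
# telescope costs roughly Arecibo's ~$8M annual running budget, each
# targeted message ties up the dish for a few hours, and about half of
# all hours in a year are usable for transmission.

ANNUAL_RENT_USD = 8_000_000             # ~Arecibo-scale yearly operating cost
HOURS_PER_MESSAGE = 3                   # assumed transmission time per target
TELESCOPE_HOURS_PER_YEAR = 8760 * 0.5   # assume 50% usable observing time

cost_per_hour = ANNUAL_RENT_USD / TELESCOPE_HOURS_PER_YEAR
cost_per_message = cost_per_hour * HOURS_PER_MESSAGE

for n_messages in (100, 300, 500):
    print(f"{n_messages} messages: ~${n_messages * cost_per_message:,.0f}")
```

Under these assumptions, a few hundred targeted messages come to roughly one or two million dollars, consistent with the claim that a partial campaign is cheaper than renting the telescope for a full year.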
Obviously, the method relies on the existence of an advanced alien civilization within a few light years of Earth. Such a civilization seems unlikely to exist, but who knows.
Is it worth trying?
It actually may work, but not because aliens will come to save us: there is no time for that. Rather, it may work because radio signals travel at the speed of light, so any signal we send into space will reach a given star before the intelligence-explosion wave does, and the aliens there may thus learn about the potentially hostile nature of our AI before it arrives.
Our AI will know all this, and if it wants better relations with those aliens, it may invest some resources in simulating friendliness toward us. Cheap for the AI, cheap for us.
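To make the timing argument concrete, here is a minimal sketch. All the numbers are assumptions chosen only for illustration: the distance to the receiving star, the head start the signal gets before an intelligence explosion, and the speed of the AI's expansion wave as a fraction of light speed:

```python
# How long would aliens at a given star have between receiving our warning
# and the arrival of an expanding AI? The warning travels at light speed;
# any physical expansion wave travels at some fraction of light speed.

def warning_lead_time_years(distance_ly, signal_head_start_years, wave_speed_c):
    """Years between the warning's arrival and the expansion wave's arrival.

    distance_ly: distance to the star in light-years (assumed)
    signal_head_start_years: years between sending the signal and the AI
        beginning its expansion (assumed)
    wave_speed_c: expansion wave speed as a fraction of light speed (assumed)
    """
    signal_arrival = distance_ly  # years after sending (signal moves at c)
    wave_arrival = signal_head_start_years + distance_ly / wave_speed_c
    return wave_arrival - signal_arrival

# Example: a star 100 light-years away, signal sent 10 years before the
# intelligence explosion, expansion wave moving at 0.5 c.
print(warning_lead_time_years(100, 10, 0.5))  # -> 110.0 years of warning
```

The farther the star and the slower the expansion wave, the larger the head start the warning gives, which is why the argument does not depend on aliens being close enough to physically reach us in time.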