I honestly don't know, and thinking about this fills me with despair. Every good solution requires time, and time seems to be the main thing we're short of.
Has anyone done serious research into what it would take to slow down progress in the field of AI? Could we just ban hardware improvements for a couple of decades and place a worldwide cap on total compute? I realize this would be incredibly unpopular and would require a majority of the world's population to understand how dangerous powerful AI will be. But one would think FLI or someone else would have started researching this area.
Consider that "AGI is very near" probably means it has already happened (or, equivalently, that we are past the point of no return), on Copernican grounds: the odds of living in the very special moment where timelines are short but it's not yet too late are low. Not seeing an obvious AGI around likely means either that it's not very near, or that the take-off is slow, not fast.
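The Copernican point can be put in toy numbers. A minimal sketch, assuming a uniform prior over observer-moments in the era around AGI and purely illustrative (made-up) interval lengths:

```python
# Toy illustration of the Copernican argument above. The interval lengths
# are assumptions chosen only to show the shape of the reasoning, not
# estimates of actual timelines.

era_years = 100     # assumed length of the era surrounding AGI development
window_years = 2    # assumed "timelines are short, but it's not too late" window

# Under a uniform prior over observer-moments in the era, the probability of
# finding ourselves inside the narrow window is just the ratio of lengths.
p_window = window_years / era_years
print(f"P(short timelines, but not too late) = {p_window:.2f}")
```

The exact numbers don't matter; the point is that any "special" window is a small slice of the era, so conditional on short timelines, "already past the point of no return" dominates "still in time to act".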
Ironically, it's not Roko's basilisk that is an infohazard; it's the "AGI go foom!" idea that is.
I want to clarify that "AGI go foom!" is not really about how near the advent of AGI is, but about whether AGI development hits a discontinuity that accelerates the growth of its intelligence over time.
I don't understand how the Copernican argument works. Being alive in the moment just before the first AGI exists is very unlikely, but it is equally unlikely to be alive at any particular moment around the development of AGI. If anything, you could argue that it's more likely we are in some kind of simulation than in base reality right before AGI takeoff. If that's not the point you're making, could you restate the argument?
Consider the following observations:
Questions:
This deserves more serious thought; it should be solemnly considered as a looming possibility.