AI Impacts has a sourced list of reasons people give for why current methods won't lead to human-level AI. It's not exactly what you're looking for, but it's close: most of these could be inverted and used as warning signs for AGI, e.g. "Current methods can't build good, explanatory causal models" becomes "When we have AI that can build good, explanatory causal models, that's a warning sign."
In MIRI's March newsletter, they link this post, which argues against prioritizing AI safety now on the grounds that we haven't yet reached a number of "canaries in the coal mines of AI". The post lists:
What other sources identify warning signs for the development of AGI?