"Computers don't have any sense of aesthetics or patterns that are standard the way people learn how to play chess. They play what they think is the objectively best move in any position, even if it looks absurd, and they can play any move no matter how ugly it is." - Murray Campbell, about Deep Blue
Vinge's principle states: "we usually think we can't predict exactly what a smarter-than-us agent will do, because if we could predict that, we would be that smart ourselves".
A popular idea is that this means AGI would invent and use new technology, such as nanorobotics, to defeat us (this is the example Yudkowsky usually gives).
However, this doesn't seem to jibe with what happens in other domains where AI becomes superhuman. Usually what the AI does is understandable to humans. It's just that it looks, well, dumb.
For example, in chess, computers use roughly the same piece valuations that humans discovered in the 18th century, didn't discover any new openings, and generally seemed to play ugly moves. But they won anyway.
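To make that concrete: the material-counting core of a classical engine's evaluation uses the same textbook values (pawn = 1, knight = 3, bishop = 3, rook = 5, queen = 9) that humans have used for centuries. Here is a minimal sketch in Python; the `material_score` helper and its piece encoding are my own illustration, not taken from any particular engine:

```python
# Textbook piece values, essentially unchanged since early chess theory.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(pieces):
    """Sum piece values from White's perspective.

    `pieces` is a list of (piece_letter, color) tuples, e.g. ("Q", "white").
    A classical engine's evaluation is largely this count plus small
    positional bonuses; nothing a strong human player couldn't follow.
    """
    score = 0
    for piece, color in pieces:
        value = PIECE_VALUES[piece.upper()]
        score += value if color == "white" else -value
    return score

# Example: White has traded a rook for a knight ("losing the exchange").
print(material_score([("K", "white"), ("N", "white"),
                      ("K", "black"), ("R", "black")]))  # -> -2
```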
If something like nanorobotics could let you take over the world, you'd expect some human group to already be trying to build it for that purpose, because the plan is legible enough to make sense to us. In reality, any plan that (for example) relies on DNA as a stepping stone will quickly run into regulatory problems.
Instead, I imagine that the AGI's plan will elicit reactions similar to the following:
You get a phone call advertising free mayonnaise! You just need to follow a couple of simple steps. The next day, you're confused and in some sort of mayonnaise cult breaking into a military armory in Mexico.
Is this plan something that humans could try? Yes, it seems pretty straightforward to attempt. So why haven't we tried it? Because it seems, and likely is, dumb. Why mayonnaise? Why a phone call? Why Mexico?
But if AGI is similar to other superhuman AI, this is the type of thing we expect to see: a strategy that looks dumb but works. We have no way to predict which dumb strategy will be used, but given the large number of strategies that look dumb to humans, the AGI's strategy is likely to be one of them. And it has enough Yomi to predict which one will succeed.
Thomas Griffiths' paper "Understanding Human Intelligence through Human Limitations" argues that the aspects we associate with human intelligence – rapid learning from small data, the ability to break problems down into parts, and the capacity for cumulative cultural evolution – arose from three fundamental limitations all humans share: limited time, limited computation, and limited communication. (The constraints imposed by these characteristics cascade: limited time magnifies the effect of limited computation, and limited communication makes it harder to draw upon more computation.) In particular, limited computation leads to problem decomposition, hence modular solutions; relieving the computation constraint enables solutions that can be objectively better along some axis while also being incomprehensible to humans.
(Speedruns are another relevant intuition pump.)
This is why I don't buy the argument that "in the limit, superior strategies will tend to be beautiful and elegant", at least for strategies generated by AIs far less limited than humans are with respect to time, compute, and communication. I don't think they'll necessarily look "dumb", just not decomposable into parts that fit in human working memory, hence weird and incomprehensible (and informationally overwhelming) from our perspective.
Since the topic of chess was brought up: I think the right intuition pump is the endgame tablebase, not moves played by AlphaZero.
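A tablebase is an exhaustive, retrograde-computed database of perfect play for every position with few enough pieces; its moves come from brute enumeration rather than from human principles. As a concrete illustration, here is a minimal sketch of probing one with the python-chess library (the `./syzygy` directory is a placeholder for wherever the Syzygy tablebase files have been downloaded locally):

```python
import chess
import chess.syzygy

# K+R vs K: a trivially won position for White (kings on a1/c3, rook on d1).
board = chess.Board("8/8/8/8/8/2k5/8/K2R4 w - - 0 1")

# Probing requires the Syzygy files on disk; "./syzygy" is a placeholder path.
with chess.syzygy.open_tablebase("./syzygy") as tablebase:
    # WDL (win/draw/loss) from the side to move: 2 = win, 0 = draw, -2 = loss.
    print(tablebase.probe_wdl(board))
    # DTZ: distance to the next zeroing move (capture or pawn push)
    # under optimal play, which bounds the fifty-move counter.
    print(tablebase.probe_dtz(board))
```

Here is a quote from Wikipedia about the KRNKNN mate-in-262 discovered by endgame tablebase: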