On a few different views, understanding the computation done by neural networks is crucial to building neural networks that constitute human-level artificial intelligence that doesn’t destroy all value in the universe. Given that many people are trying to build neural networks that constitute artificial general intelligence, it seems important to understand the computation in cutting-edge neural networks, and we basically do not.
So, how should we go from here to there? One way is to try hard to think about understanding, until you understand understanding well enough to reliably build understandable AGI. But that seems hard and abstract. A better path would be something more concrete.
Therefore, I set this challenge: know everything that the best go bot knows about go. At the moment, the best publicly available bot is KataGo; if you’re at DeepMind or OpenAI and have access to a better go bot, I guess you should use that instead. If you think those bots are too hard to understand, you’re allowed to make your own easier-to-understand bot, as long as it’s the best.
What constitutes success?
- You have to know literally everything that the best go bot you have access to knows about go.
- It has to be applicable to the current best go bot (or a bot that is essentially as good - e.g. you’re allowed to pick one of the versions of KataGo whose Elo is statistically hard to distinguish from the best version’s; see the Elo sketch after this list), not the best go bot as of one year ago.
- That being said, I think you get a ‘silver medal’ if you understand any go bot that is the best at some point from today on.
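As a concrete reading of “statistically hard to distinguish”: below is a minimal Python sketch (the match numbers and function name are purely illustrative, not part of the challenge) of estimating the Elo gap between two bot versions from head-to-head results under the standard logistic Elo model. If the confidence interval on the gap straddles zero, the versions count as hard to distinguish for the purposes of this challenge.

```python
import math

def elo_gap_ci(wins: float, games: int, z: float = 1.96):
    """Estimate the Elo gap between two bots from head-to-head results.

    Returns (point_estimate, low, high) in Elo points, using a normal
    approximation to the binomial for the win-rate interval. Count each
    draw as half a win before calling.
    """
    def clamp(q):
        return min(max(q, 1e-6), 1 - 1e-6)

    p = clamp(wins / games)
    se = math.sqrt(p * (1 - p) / games)

    def to_elo(q):
        # Logistic Elo model: expected score = 1 / (1 + 10 ** (-gap / 400))
        q = clamp(q)
        return 400 * math.log10(q / (1 - q))

    return to_elo(p), to_elo(p - z * se), to_elo(p + z * se)

# Illustrative numbers: 520 wins out of 1000 games between two KataGo
# versions. The resulting 95% interval is roughly (-8, +36) Elo points;
# it straddles zero, so at this sample size the two versions are
# statistically hard to distinguish.
print(elo_gap_ci(520, 1000))
```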
Why do I think this is a good challenge?
- To understand these bots, you need to understand planning behaviour, not just pick up on various visual detectors.
- In order to solve this challenge, you need to actually understand what it means for models to know something.
- There’s a time limit: your understanding has to keep up with the pace of AI development.
- We already know some things about these bots based on how they play and evaluate positions, but obviously not everything.
- We have some theory about go: e.g. we know that certain symmetries exist (see the symmetry sketch after this list), we understand optimal play in the late endgame, and we have some neat analysis techniques.
- I would like to play go as well as the best go bot. Or at least to learn some things from it.
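As a concrete example of the “certain symmetries exist” point above: a go position and its eight rotations/reflections are strategically identical, so whatever the bot knows should be equivariant under them. Here is a minimal NumPy sketch of checking that; `policy_logits` is a hypothetical stand-in for however you query the bot’s raw 19×19 policy, not an actual KataGo interface.

```python
import numpy as np

def d4_transforms():
    """The eight symmetries of the go board: four rotations, each with
    and without a left-right reflection."""
    for k in range(4):
        yield lambda a, k=k: np.rot90(a, k)
        yield lambda a, k=k: np.fliplr(np.rot90(a, k))

def equivariance_gap(policy_logits, board: np.ndarray) -> float:
    """Largest disagreement between policy(transform(board)) and
    transform(policy(board)) over the eight board symmetries.

    A bot that has fully internalised the symmetry of go should give a
    gap of (near) zero on every position.
    """
    base = policy_logits(board)  # assumed shape (19, 19), same layout as board
    return float(max(
        np.max(np.abs(policy_logits(t(board)) - t(base)))
        for t in d4_transforms()
    ))

# Toy usage with a fake, perfectly symmetric "policy" (the count of empty
# orthogonal neighbours of each point), just to show the check end to end:
def fake_policy(board: np.ndarray) -> np.ndarray:
    empties = np.pad(board == 0, 1)  # pad edges with "not empty"
    return sum(np.roll(empties, shift, axis=axis)[1:-1, 1:-1]
               for axis in (0, 1) for shift in (1, -1)).astype(float)

board = np.zeros((19, 19), dtype=int)  # empty board
print(equivariance_gap(fake_policy, board))  # 0.0 for a symmetric policy
```

In practice a trained network is usually close to, but not exactly, equivariant, so the interesting question is where the residual gaps live.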
Corollaries of success (non-exhaustive):
- You should be able to answer questions like “what will this bot do if someone plays mimic go against it” without actually having someone play mimic go against it to check. More generally, you should know how the bot will respond to novel counter-strategies.
- You should be able to write a computer program anew that plays go just like that go bot, without copying over all the numbers.
Drawbacks of success:
- You might learn how to build a highly intelligent and capable AI in a way that does not require deep learning. In this case, please do not tell the wider world or do it yourself.
- It becomes harder to check if professional human go players are cheating by using AI.
Related work:
- The work on identifying the ‘circuits’ of Inception v1
- The case for aligning narrowly superhuman models
A conversation with Nate Soares on a related topic probably helped inspire this post. Please don’t blame him if it’s dumb, though.
AlphaGo was partly trained on human games, and I believe KataGo was as well.
But AlphaGo Zero didn't use any human games as input; it basically 'taught itself' to play Go.
Since AlphaGo and KataGo used human games, which rely on integrating local and global reasoning, the development of those algorithms differs from that of AlphaGo Zero.
Does AlphaGo rely on local patterns? Possibly, but what about AlphaGo Zero? Where humans see a three-phase game of maybe 320 moves, broken down into opening, middle game, and endgame, with kos, threats, exchanges, and so on, it seems likely that AlphaGo Zero sees the whole game as one 'thing' (and in fact sees that one game as just one variation among the staggering number of games it has played against itself).
Even at AlphaGo Zero's level, though, I think considering local patterns is probably still a handy way to divide the computation over an astronomically large set of games into discrete groups when considering branches of variations; sort of the way Wikipedia still uses individual headings and pages for entries, even though it could probably turn its entire contents into one long entry. It would be very difficult to navigate if it did, though.
I've heard stories of Go professionals from the classical era claiming it's possible to tell who's going to win by the 2nd move.