I can try, or at least give a sketch. (Hand-waving ahead ...)
The Ants problem -- if I'm understanding it correctly -- is a problem of coordinated action. We have a community of ants, and the community has some goals: collecting food, taking over opposing hills, defending friendly hills. Imagine you are an ant in the community. What does rational behavior look like for you?
I think that is already enough to launch us on lots of hard problems:
What does winning look like for a single ant in the Ants game? Does winning for a single ant even make sense, or is winning entirely parasitic on the community -- the colony, in this case? Does that tell us anything about humans?
If all of the ants in my community share the same decision theory and preferences, will the colony succeed or fail? Why?
If the ants have different decision theories and/or different preferences, how can they work together? (In this case, working together isn't very hard to describe ... it's not like the ants fight among themselves, but we might ask what kinds of communities work well -- i.e. is there an optimal assortment of decision theories and/or preferences for individuals?)
If the ants have different preferences, how might we apply results like Arrow's Theorem, or how might we work around them?
...
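The Arrow's Theorem question can be made concrete with a tiny example. Here is a minimal sketch (the goal names and ant preferences are made up for illustration) showing the classic problem Arrow's result formalizes: three ants with individually sensible rankings produce a majority preference that cycles, so there is no coherent "colony preference" to act on.

```python
GOALS = ["food", "attack", "defend"]

# Each ant ranks the goals from most to least preferred (hypothetical data).
ants = [
    ["food", "attack", "defend"],
    ["attack", "defend", "food"],
    ["defend", "food", "attack"],
]

def majority_prefers(a, b):
    """True if a strict majority of ants rank goal a above goal b."""
    votes = sum(1 for ranking in ants if ranking.index(a) < ranking.index(b))
    return votes > len(ants) / 2

# Pairwise majority votes form a cycle, so no goal is the colony's "top" choice:
print(majority_prefers("food", "attack"))    # True
print(majority_prefers("attack", "defend"))  # True
print(majority_prefers("defend", "food"))    # True
```

Each individual ranking is transitive, yet the aggregated majority relation is not -- which is exactly the kind of obstacle a colony with heterogeneous preferences would have to work around.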
So, there's a hand-wavy sketch of what I had in mind. But I don't know, is it still too vague to be useful?
EDIT: I should say that I realize the game works with a single bot controlling the whole colony, but I don't think that changes the problems in principle. Maybe I'm missing something there, though.
One of the interesting aspects of the winning entry's post-mortem is the description of how dumb and how local the winner's basic strategy was:
...There’s been a lot of talking about overall strategies. Unfortunately, i don’t really have one. I do not make decisions based on the number of ants i have or the size of my territory, my bot does not play different when it’s losing or winning, it does not even know that. I also never look which turn it is, in the first ...
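To make "dumb and local" concrete, here is a minimal sketch of that style of strategy, assuming a simple grid representation. None of this is the winner's actual code or the contest API; each ant just steps toward the nearest food it can see, with no global state (ant count, territory, turn number) anywhere.

```python
# Hypothetical local strategy: positions are (row, col) tuples on a grid.

def nearest_food(ant, foods):
    """Closest food by Manhattan distance; None if no food is visible."""
    if not foods:
        return None
    return min(foods, key=lambda f: abs(f[0] - ant[0]) + abs(f[1] - ant[1]))

def local_move(ant, foods):
    """Return a one-step (dr, dc) move toward the nearest food, else stay put."""
    target = nearest_food(ant, foods)
    if target is None:
        return (0, 0)
    dr = (target[0] > ant[0]) - (target[0] < ant[0])
    dc = (target[1] > ant[1]) - (target[1] < ant[1])
    # The Ants game allows one orthogonal step per turn, so pick one axis.
    return (dr, 0) if dr != 0 else (0, dc)

print(local_move((2, 2), [(5, 2), (2, 3)]))  # (0, 1): the eastern food is closer
```

The point of the sketch is that every decision depends only on what one ant sees -- which is roughly what the quoted post-mortem describes.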
Late last year a LessWrong team was being mooted for the Google AI challenge (http://aichallenge.org/; http://lesswrong.com/r/discussion/lw/8ay/ai_challenge_ants/). Sadly, after a brief burst of activity, no "official" LessWrong entry appeared (AFAICT, and please let me know if I am mistaken). The best individual effort from this site's regulars (AFAICT) came from lavalamp, who finished around #300.
This is a pity. This was an opportunity to achieve, or at least have a go at, a bunch of worthwhile things, including developing methods of cooperation between site members, gathering positive publicity, and yes, even advancing the understanding of AI-related issues.
So - how can things be improved for the next AI challenge (which I think is about 6 months away)?