Late last year a LessWrong team was mooted for the Google AI Challenge (http://aichallenge.org/; http://lesswrong.com/r/discussion/lw/8ay/ai_challenge_ants/). Sadly, after a brief burst of activity, no "official" LessWrong entry appeared (AFAICT; please let me know if I am mistaken). The best individual effort from this site's regulars seems to have come from lavalamp, who finished around #300.
This is a pity. The challenge was an opportunity to achieve, or at least have a go at, a bunch of worthwhile things, including developing methods of cooperation between site members, gathering positive publicity, and yes, even advancing the understanding of AI-related issues.
So - how can things be improved for the next AI challenge (which I think is about 6 months away)?
What makes you think there's much better to be done? Some games or problems just aren't very deep, like Tic-tac-toe.
The winning program ignored a lot of information, and there weren't enough entries to convince me that the information couldn't be used efficiently.