Aichallenge.org has started its third AI contest this year: Ants.
The AI Challenge is all about creating artificial intelligence, whether you are a beginning programmer or an expert. ... [Y]ou will create a computer program (in any language) that controls a colony of ants which fight against other colonies for domination. ... The current phase of the contest will end December 18th at 11:59pm EST. At that time submissions will be closed. Shortly thereafter the final tournament will be started. ... Upon completion the contest winner will be announced and all results will be publicly available.
Ants is a multi-player strategy game set on a plot of dirt with water for obstacles and food that randomly drops. Each player has one or more hills where ants will spawn. The objective is for players to seek and destroy the most enemy ant hills while defending their own hills. Players must also gather food to spawn more ants; however, if all of a player's hills are destroyed, they can't spawn any more ants.
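For a sense of what a bot actually looks like: the engine talks to your program over plain text, one turn at a time. As a rough sketch (assuming the contest's line format of `a row col owner` for ants, `w row col` for water, `f row col` for food, with orders written back as `o row col direction` — check the official starter kits for the exact protocol), a trivial random-walking bot reduces to two small functions:

```python
from random import choice

def parse_turn(lines):
    """Collect our own ants and known water squares from one turn's input.

    Assumes the engine's text protocol: "a row col owner" (owner 0 = us)
    and "w row col" for water; other line types are ignored here.
    """
    my_ants, water = [], set()
    for line in lines:
        parts = line.split()
        if parts[:1] == ['a'] and parts[3] == '0':
            my_ants.append((int(parts[1]), int(parts[2])))
        elif parts[:1] == ['w']:
            water.add((int(parts[1]), int(parts[2])))
    return my_ants, water

def orders_for(my_ants, water, rows, cols):
    """Issue a random move for each ant, avoiding known water.

    The map wraps around at the edges, hence the modular arithmetic.
    """
    moves = {'N': (-1, 0), 'S': (1, 0), 'E': (0, 1), 'W': (0, -1)}
    orders = []
    for r, c in my_ants:
        options = [d for d, (dr, dc) in moves.items()
                   if ((r + dr) % rows, (c + dc) % cols) not in water]
        if options:
            orders.append('o %d %d %s' % (r, c, choice(options)))
    return orders
```

A real entry would wrap these in a read-loop over stdin (parse until the engine sends `go`, print orders, print `go`, flush) and replace the random walk with actual strategy — food seeking, combat, hill defense — but the I/O skeleton stays this small.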
I mentioned this in the open thread, and there was a discussion about possibly making one or more "official" LessWrong teams. D_Alex has offered a motivational prize. If this interests you, please discuss in the comments!
Yes.
This is a statement about your mind (OK, about human minds), not about minds in general. There's no law saying that minds can't have multiple simultaneous trains of thought.
A unified mind can always simulate separate agents. Separate agents cannot simulate a unified mind. If the separate agents all have simultaneous access to the same information that the unified mind would have, then they cease being separate agents. In my book, there is no longer a distinction.
There's a big difference between separate agents all running in one brain (e.g., possibly humans) and separate agents in separate brains (ants).
(I might not respond again, I have a bot to write!)
To within its available resources, sure. But that's under the assumption that there's a categorical difference between multiple agents instantiated on separate hardware and multiple agents instantiated on a single piece of hardware.
Actually, that's the entire notion behind The Society of Mind: there's no such thing as a "unified mind", only separate agents that operate as a holistic system.