I'm up for joining any LessWrong team; I'm most comfortable with Python and C++ (or Javascript, but apparently the package is incomplete); I started looking around at the rules and technical details.
(I heard about it before, and while it's interesting, I don't have massive amounts of free time; participating in a team sounds more interesting)
If enough people are interested, we may want to split teams by language (LessWrongSlytherin for Python, LessWrongRavenclaw for Haskell, LessWrongGryffindor for C++ and LessWrongHufflepuff for PHP, or something).
I think we should talk about strategy and see if some teams naturally form. Language is secondary. The thing that's holding me back atm is finding a good way to come up with moves in local fights. (I believe I do have a good way to figure out where the local fights are, though.) Local fights between just a few ants ought to be amenable to alpha-beta search, but mass battle is a completely different story.
Pathfinding to food and exploring are not too hard to do decently. I think good local fighting is what will distinguish the top bots.
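For the "pathfinding to food" part, a plain BFS already does decently. Here is a minimal sketch, assuming a set-of-tuples map representation and ignoring the map's wraparound for simplicity -- neither detail is from any official starter kit:

```python
from collections import deque

def first_step_to_food(start, passable, food):
    """Return the first move, as a (dr, dc) offset, on a shortest path
    from `start` to any tile in `food`, or None if no food is reachable.
    `passable` and `food` are sets of (row, col) tuples -- an
    illustrative representation, not tied to any starter kit."""
    if start in food:
        return (0, 0)
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + d[0], cur[1] + d[1])
            if nxt in passable and nxt not in came_from:
                came_from[nxt] = cur
                if nxt in food:
                    # Walk the path back to the tile adjacent to start.
                    while came_from[nxt] != start:
                        nxt = came_from[nxt]
                    return (nxt[0] - start[0], nxt[1] - start[1])
                frontier.append(nxt)
    return None
```

In practice one would run a single multi-source BFS from all food tiles at once rather than one search per ant, but the per-ant version is easier to see at a glance.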
As for me, I know C++ and Go (google's newish language) quite well. Last year my bot was in the top 10% but not quite the top 5%. I'm 2d at Go (the game), if that counts for anything.
Considering that I'm new at the game, I'm a bit fuzzy about which strategies work and which ones are dead ends.
My first instinct would be to try simple strategies (cluster with my ants, follow walls, avoid my ants, just charge straight forwards, etc.) and then use a mixed strategy whose parameter I can vary, possibly function of simple environment metrics (so I can systematically explore one bit of the space of possibilities).
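That mixed-strategy idea might start as simply as the sketch below. The behavior names and the weighting scheme are illustrative inventions, not from the contest kit; the weight vector is the tunable parameter one could vary as a function of simple environment metrics:

```python
import random

# Placeholder behaviors -- names and moves are made up for illustration;
# real ones would be "follow walls", "cluster", "charge", etc.
def charge_forward(ant):
    return "N"

def follow_wall(ant):
    return "E"

def make_mixed_policy(behaviors, weights, rng=None):
    """Return a per-ant policy that picks one simple behavior per turn
    according to a tunable weight vector."""
    rng = rng or random.Random()
    def policy(ant):
        behavior = rng.choices(behaviors, weights=weights, k=1)[0]
        return behavior(ant)
    return policy
```

Sweeping the weight vector (or making it a function of, say, local ant density) then explores one low-dimensional slice of the space of possible bots.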
But I haven't read through the whole site yet, and didn't even get my local version to run correctly either (I need to use the right version of Python or something; the bots all crash). So my understanding of the kind of strategies involved is still very detached from reality.
Watch some of the games of the top bots. One thing I haven't seen a top bot do yet is identify situations where they have an impasse (e.g., three ants holding off an entire horde) and try another route.
I've been doing that; it's interesting indeed, thanks. I probably won't have the time to make a bot good enough to compete with the top of the top, but I'll do what I can.
I would not expect to win without a method that I could see to approximate or to exceed linear-programming formulations of multi-agent path planning, for example here (2005/2008) or here (2001). Those methods don't fully cover the stochasticity of resource gathering or the adversariality of combat, but they partly cover resource-finding and patrolling (when up-to-date visual information about unknowns is an objective) and deployment, and they provide a framework into which to fit tradeoffs among options evaluated by more specialized models, such as for combat, exploration, or gathering.
One approximation would be a relaxation of the ant indicator function to a basis of wavelet-like vectors (for example diffusion kernels) on the graph of connected land squares. (The basis vectors should overlap smoothly, because under a rigorous relaxation, non-overlapping basis vectors would allow approximated ants to teleport across the support of basis elements.)
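One way to write that relaxation down (my notation; the diffusion-kernel construction is one standard choice, not necessarily the one used in the linked papers):

```latex
% Ant positions as a 0/1 indicator over the graph G = (V, E) of
% connected land squares:
%   x \in \{0, 1\}^{V}
% relaxed onto overlapping, wavelet-like basis vectors \phi_i:
\tilde{x} \;=\; \sum_i c_i \, \phi_i , \qquad c_i \ge 0,
% with, e.g., diffusion kernels \phi_i = e^{-tL}\,\delta_{v_i}
% (L the graph Laplacian of G), whose smooth overlap keeps the
% relaxed "ants" from jumping discontinuously between the supports
% of neighboring basis elements.
```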
Lots of food for thought there. Clearly I didn't have enough math in college.
For global ant allocation and food gathering, it's very possible that the approach you're describing is the best you could practically do. For more local combat situations, though, I don't think that's clear at all-- if I had infinite computing resources, I'd be using an appropriate adaptation of minimax.
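For a handful of ants, that adaptation could start from the textbook alpha-beta skeleton below. The `moves`, `apply_move`, and `evaluate` callbacks are assumed interfaces describing the local subgame, and note that ants actually move simultaneously, so strict alternation is itself an approximation:

```python
def alphabeta(state, depth, alpha, beta, maximizing,
              moves, apply_move, evaluate):
    """Generic alpha-beta search; the three callbacks are assumed
    interfaces for the local-combat subgame, not part of any API."""
    if depth == 0 or not moves(state, maximizing):
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for m in moves(state, True):
            value = max(value, alphabeta(apply_move(state, m, True),
                                         depth - 1, alpha, beta, False,
                                         moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff
        return value
    else:
        value = float("inf")
        for m in moves(state, False):
            value = min(value, alphabeta(apply_move(state, m, False),
                                         depth - 1, alpha, beta, True,
                                         moves, apply_move, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value
```

The branching factor is the real obstacle: each side's "move" is a joint move of all its local ants, so even five ants per side gives 5^5 joint moves per ply, which is why this only looks plausible for very small fights.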
Anyway, you gave me some words for concepts and some new concepts, so upvote for you.
This sort of thing seems more like a test of the programmers' ability to analyse the problem than anything to do with general/learning AI. It could still be good practice though.
Several posts so far bring up proficiencies in various programming languages.
All else being equal, maybe go with Python, since Less Wrong is written in Python.
Perhaps someone will find inspiration to put their newly brushed up skills to use on other projects.
OK, I got everything running, wrote a simple bot that's better than the default ones - I had the most trouble figuring out how to get crash callstacks.
My next step is using a high-level grid to control the behavior of my ants (have big cells, keep track of how many ants are in each cell, use this to redirect ants to empty cells and to attract ants to cells with combat. I already did similar things in the past, shouldn't be too complicated).
I may also experiment a bit with coordinating ant movement during fight - if I'm in a cell with a fight, get all my ants to move forward simultaneously or something.
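The high-level grid bookkeeping described above might start as simply as bucketing ant positions into coarse cells (all names are illustrative guesses, not the author's actual code):

```python
from collections import Counter

def macro_counts(ant_positions, cell_size):
    """Bucket (row, col) ant coordinates into coarse macro-cells and
    count occupancy per cell. The redirect logic can then steer ants
    toward empty cells or toward cells flagged as containing combat."""
    counts = Counter()
    for (row, col) in ant_positions:
        counts[(row // cell_size, col // cell_size)] += 1
    return counts
```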
I had thoughts along these lines, but I was thinking that in order to be effective the cells would need to be dynamically sized, and I didn't see an obviously good way to do it (although I since ran across http://en.wikipedia.org/wiki/Local_outlier_factor -- may be of some interest). I'll be curious to know how this turns out for you.
My bot with cells wasn't very effective, and was getting complicated to modify, so I started anew with a clean, modular architecture and a way of combining different strategies, each of which is pretty simple. That gets me an effective enough bot with room for improvement - most of the ants just scatter by avoiding walking on frequently-visited squares, and those that are near food and hills attack.
Now that I have that, I can start adding some more complicated pathfinding strategies, some optimisation, etc. - macro cells will probably be useful again, for optimisation and high-level pathfinding. But now I can fit them into my framework without my code getting all ugly and unwieldy.
(I'm still somewhat annoyed about how my first bot, which just goes straight and turns 5% of the time or when it hits a wall, keeps outperforming smarter attempts. Well, at least now I'm regularly beating it.)
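The "avoid frequently-visited squares" scattering could be sketched like this; it is a guess at one possible implementation, not the actual code:

```python
from collections import defaultdict
import random

class ScatterPolicy:
    """Keep a visit counter per tile and step toward the least-visited
    passable neighbor, so ants naturally spread into unexplored area.
    Names and representation are illustrative."""
    def __init__(self, rng=None):
        self.visits = defaultdict(int)
        self.rng = rng or random.Random(0)

    def step(self, pos, passable):
        self.visits[pos] += 1
        neighbors = [(pos[0] + dr, pos[1] + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        options = [n for n in neighbors if n in passable]
        if not options:
            return pos  # boxed in: stay put
        best = min(self.visits[n] for n in options)
        return self.rng.choice([n for n in options
                                if self.visits[n] == best])
```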
I was just now going to work on this, although I planned on making an simple-but-maybe-interesting bot rather than a really good bot (specifically, I was thinking about making one that combined straightforward pathfinding and some reasonably smart local fighting ability with a strategy where I remember the median size of enemy ant "groups" and move ants near enemy territory only in groups of size N+1. I haven't talked to anyone about strategy, but after looking at the rules, I want to see how this does.)
I was just now going to work on this, although I planned on making a simple-but-maybe-interesting bot rather than a really good bot (specifically, I was thinking about making one that combined straightforward pathfinding and some reasonably smart local fighting ability with a strategy where I remember the median size of enemy ant "groups" and move ants near enemy territory only in groups of size N+1. I haven't talked to anyone about strategy, but after looking at the rules, I want to see how this does.)
I would have written this bot in Clojure on my lonesome, but I have sufficient expertise in C# or JavaScript just the same, so if anyone is interested then let me know and we can work together instead!
So, any progress? Would you prefer to work as a team and just submit one team bot, or to just register as individuals under a "common flag", and mostly share strategy and analysis? I'm open to both.

(I'm doing Python for now; I don't have much expertise in C#, though I do have a bit with JavaScript.)
This sounds fascinating and I'd love to help contribute, but I don't know anything beyond a cursory knowledge of C++, Visual Basic, and PHP...
The AI Challenge is all about creating artificial intelligence, whether you are a beginning programmer or an expert...
As I said earlier:
Yes, I think it should be; I think there would be some interest. Considering we have some very competent and experienced people on LessWrong and some very enthusiastic amateurs, several teams wouldn't be too bad an idea either, if there were enough people. Some of the amateur LWers might be a bit intimidated by being part of the "Official LessWrong team", whereas "LessWrong Team #3" or "LessWrong amateur rationalist group" doesn't sound as bad.
Besides putting LW's name out there and being pretty fun, I think it is a pretty decent exercise in applied rationality.
Yeah, I would be pretty intimidated by "Official LessWrong team". I'm not sure I even really qualify as a "beginning" programmer - I usually have to look up the syntax for something as simple as a while loop, unless it's in VBA, which I use quite a bit because of its interface with Excel. And I don't really feel like I have any great specific skills that would contribute.
But that being said, if there's a group, I'd love to participate/observe/learn!
I don't know that I feel qualified and/or have enough time to run a team, but if you want to chat in real time about this feel free to PM me and I'll give you an IM handle or something.
Given how much success actual ant-colonies have with the use of bottom-up spontaneous organization, I find it interesting that this hasn't already been discussed here.
"Thanks for the perspective on AI, ants. Thants."
More seriously, I find emergent intelligence a very interesting phenomenon. For example; while no single ant has the capacity to comprehend the scale of a river, they can build bridges out of leaves and fellow ants to cross said river, a clear act of 'problem-solving' that is beyond the abilities of individual ants, yet arises as a result of the interactions of the various ants with each other and their environment.
Eusocial organisms' collective behaviors in general are very interesting to me, especially when stigmergy causes manifest behaviors the individual organisms cannot achieve. Mobile siphonophores are an additional example.
I think we still have a great deal to learn from eusocially stigmergic organisms about how general intelligence can be achieved through simple systems.
I find it quite hard to believe you couldn't do even better if you were a single mind perceiving what the ants did and controlling them (which is how you are set up in this game). A single mind can, worst case, simulate the rules each ant follows, so it can never be worse than the social behavior is. But the ants individually can't simulate a single large mind (for one thing, they wouldn't have all the information it would have).
It'd be like writing a chess engine by writing a different AI for each piece. Splitting up the AI gives you more problems, not less.
(That's not to say that you couldn't evolve a good set of local rules to follow in this game, of course!)
I find it quite hard to believe you couldn't do even better if you were a single mind perceiving what the ants did and controlling them
Sorry for the additional response to the same post, but I feel this bears special notice, as I just now realized that we might be talking "past" one another.
IF the purpose of the endeavor isn't just to "do better", but rather to learn about how intelligence and cognition operate, then it seems to me that examining a real-world manifestation of intelligence (and even cognition, in the form of observing an environment and reacting to it with intent and even "instinctive" tool-use [such as harvesting leaves to build a bridge to cross a river]) in a radically alternative substrate than "simple" neurology should be taken as an opportunity to learn more about how, in principle, cognition and intelligence "work".
In other words: ant colonies as collectives are a form of "unit of cognition" (in the manner that a fly or a rat or a human each represent a "unit of cognition") -- but "ant colonies" do not have brains. The act of "figuring out how to handle this river between us and our goal" never occurs in any particular ant or even specific set of ants within a colony. I find this fact fascinating and believe it is a deeply under-explored avenue towards understanding cognition and intelligence.
I find it quite hard to believe you couldn't do even better if you were a single mind perceiving what the ants did and controlling them (which is how you are set up in this game).
You heard about the Berkeley Overmind? A single mind has a limited amount of capacity to focus on events simultaneously. Emergent intelligence does not.
But the ants individually can't simulate a single large mind (for one thing, they wouldn't have all the information it would have).
Given the total neural processing power available to the ants, I'd dare say that their capacity to solve problems is far greater than you're giving them credit for. Also, there's a non-trivial chance that this is already how individual minds operate -- I'm speaking of the Society of Mind hypothesis.
You heard about the Berkeley Overmind?
Yes.
A single mind has a limited amount of capacity to focus on events simultaneously.
This is a statement about your mind (ok, human minds), not about minds in general. There's no law saying that minds can't have multiple simultaneous trains of thought.
Emergent intelligence does not.
A unified mind can always simulate separate agents. Separate agents cannot simulate a unified mind. If the separate agents all have simultaneous access to the same information that the unified mind would, then they cease being separate agents. In my book, there is no longer a distinction.
There's a big difference between separate agents all running in one brain (e.g., possibly humans) and separate agents in separate brains (ants).
(I might not respond again, I have a bot to write!)
A unified mind can always simulate separate agents.
To within its available resources, sure. But that's under the assumption that there's a categorical difference between multiple agents instantiated on separate hardware and multiple agents instantiated on a single piece of hardware.
Separate agents cannot simulate a unified mind.
Actually, that's the entire notion behind the Society of Mind: there's no such thing as a "unified mind". Only separate agents that operate as a holistic system.
If the separate agents all have simultaneous access to the same information that the unified mind would, then they cease being separate agents. In my book, there is no longer a distinction.
I believe there is a significantly false assumption here: that the agents present in human minds are operating with "simultaneous" (or otherwise) access to "the same information".
Furthermore -- the entire concept of stigmergy rests upon the notion that each of these independent agents would produce effects that alter the behaviors of the other independent agents, thus creating an operationally unified whole.
There's a big difference between separate agents all running in one brain (e.g., possibly humans) and separate agents in separate brains (ants).
I submit to you that the differences are more of distance (proximity of computational units) and magnitude (overall level of currently manifested intellect), and far less of category. That is: I see nothing about the behaviors of siphonophores and ants -- as eusocial collectives -- which prevents in principle a eusocial collective from manifesting tool-making-level intellect and/or sentience.
I find it quite hard to believe you couldn't do even better if you were a single mind perceiving what the ants did and controlling them (which is how you are set up in this game).
...
If the separate agents all have simultaneous access to the same information ... then they cease being separate agents ... .
There's a big difference between separate agents all running in one brain (e.g., possibly humans) and separate agents in separate brains (ants).
I believe there is a significantly false assumption here: that the agents present in human minds are operating with "simultaneous" (or otherwise) access to "the same information".
To me, that reads as if lavalamp doesn't think humans actually are a "unified mind", though. It's the program written in the context of the game that acts as a single agent by processing the same information with pseudo-'simultaneity'.
It's the program written in the context of the game that acts as a single agent by processing the same information with pseudo-'simultaneity'.
I believe I understand what you are saying here. I just don't think it fairly describes what lavalamp was saying.
My reading of that passage is that his assertion was that humans, by having separate agents all running "in one brain", cease being separate agents as a result of "having simultaneous access to the same information".
EDIT: Okay, now I find myself confused. By the course of the dialogue it's clear that pedanterrific did not downvote my comment, so someone not replying to it must have. I am left without insight as to why this was done, however.
Well, if you're correct and that is what lavalamp is asserting, I pretty much agree with you. Humans are definitely not "unified minds", and the difference between separate agents running on one or multiple brains may be large, but it's quantitative, not qualitative.
That is, even separate agents running on one brain will never have simultaneous access to the same information (unless you cheat by pausing time).
That is, even separate agents running on one brain will never have simultaneous access to the same information (unless you cheat by pausing time).
Even then it's important to note that various agents operating on varying principles of how to transform / relate to information might only be "capable" of noting specific subsets of "the same information", and that this is -- I believe -- contextually relevant to comparing brains to ant colonies. Just like how the parts of your brain that handle emotions will not be involved in processing the differences between two sounds; two ants each in different locations have access to separate subsets of information which is then relayed to other parts of the colony.
The emotion-parts react to the signals sent by the sound-parts, and vice versa; so too ant(1) reacts to the signals sent by ant(2), and vice versa.
I'm not the one downvoting you, either.
I'm not so sure our anticipations necessarily differ. I think separate agents with amazingly fast communication will approach the performance of a unified mind, and a mind with poor internal communication will approach the performance of separate agents. Human minds arguably might have poor internal communication, but I'm still betting that it's more than one order of magnitude better than what ants do. I think our disagreement is more about the scale of this difference than anything.
The fundamental barrier to communication inside a single mind is the speed of light; an electronic brain the size of a human one ought to be able to give its sub-agents information that's pretty damn close to simultaneous.
At any rate, in this game we do have simultaneous knowledge, and there's no reason to handicap ourselves by e.g., waiting for scouts to return to other ants to share their knowledge.
I believe there is a significantly false assumption here: that the agents present in human minds are operating with "simultaneous" (or otherwise) access to "the same information".
I think it is true for games with turns like this one.
I think it is true for games with turns like this one.
I am not aware of any mechanism which might cause this to be a meaningful difference. Enlighten me?
'Simultaneity' is easy to achieve when the environment changes in discrete intervals with time to think in between.
Edit: What lessdazed said.
'Simultaneity' is easy to achieve when the environment changes in discrete intervals with time to think in between.
The appearance of "simultaneity", sure. But that's a manifestation of the difference between real-time and turn-based 'games', and not a characteristic of cognition that is meaningfully significant. (At least, not so far as I can tell.)
I'd say the implication that it's only actually possible to act as a "unified mind" in certain highly artificial non-realtime circumstances is pretty significant.
But if I am correct that it is only the appearance of acting as a "unified mind", then... there's no real significance there, as it again is simply a characteristic of the medium rather than of the function. In other words, this "unification" is only present in a turn-based game, and only manifests as a result of the fact that turn-based games have 'bots' whose intellect necessarily manifests during the turn.
This, in kind, would "compress" the actual processes of cognition into what would appear to be a "unified/simultaneous" process.
And this is why I say that it is not a characteristic of cognition which is meaningfully significant. It's telling us something about turn-based games -- not about cognition.
Allow me to slightly rephrase my point: I'd say the implication that it's impossible to act as a "unified mind" in realtime is pretty significant.
Even if/as there is no such thing as simultaneity in consciousness, in a game with rules like this thoughts can be neatly divided into "after seeing the results of turn one, and before deciding what to do on turn two," and that is all that is important.
What I said was badly phrased: the assumption isn't true, but if it is being made, that is irrelevant.
in a game with rules like this thoughts can be neatly divided into "after seeing the results of turn one, and before deciding what to do on turn two," and that is all that is important.
I don't know as to how that maps to "simultaneous access to the same information", however, in any computationally significant sense. It's simply a characteristic of the definition of turn-based as opposed to real-time 'games' that you do your processing between turns but during real-time.
I might be interested in that. I do study both AI in general and AI for video games, though on those subjects I'm rather a beginner. I'm capable of programming very proficiently in Javascript, Perl, C, and some dialects of Lisp without any refresher. Have in the past been decent with Pascal, PHP, Basic, Visual Basic, C++, Java, and some other dialects of Lisp. Occasionally poked at Ruby, Python, Haskell, and a few others. And I've had some success at making domain-specific languages that compile down to Perl.
Since I didn't mention this in the previous thread:
Programming:
Related skills:
Working in Python I'm sure I can hold my own; I am much less sure I can do so in Haskell, but I'm currently working on taking my skills in that language to the next level, so it wouldn't be that bad. With any of the C stuff I think I would probably be a drag; though I used to be good at it, it has been a while since I've written anything in it.
Seems like a lot of people are interested in contributing to a team but no one is interested in running one...
Why, thank you for volunteering :D
I'm still in "examining the problem" mode; until I have a better idea of the amount of work needed, how much that matches my skills, how much can be split up, ... discussing organization feels premature to me.
Also: "a lot"? I'm seeing you, me, Konkvistador, maybe thomblake and RobertLumley.
Aichallenge.org has started their third AI contest this year: Ants.
I mentioned this in the open thread, and there was a discussion about possibly making one or more "official" LessWrong teams. D_Alex has offered a motivational prize. If this interests you, please discuss in the comments!