Demis Hassabis has already announced in some interview that they'll be working on a StarCraft bot.
This interview, dated yesterday, doesn't go quite that far - he mentions StarCraft as a possibility, but explicitly says that they won't necessarily pursue it.
...If the series continues this way with AlphaGo winning, what’s next — is there potential for another AI-vs-game showdown in the future?
I think for perfect information games, Go is the pinnacle. Certainly there are still other top Go players to play. There are other games — no-limit poker is very difficult, multiplayer has its challenges because it’s an imperfect information game. And then there are obviously all sorts of video games that humans play way better than computers, like StarCraft is another big game in Korea as well. Strategy games require a high level of strategic capability in an imperfect information world — "partially observed," it’s called. The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers.
Is beating StarCraft something that you would personally be interested in?
Maybe. We’re only interested in things to the extent that they are on the main track of our research program. So the aim of DeepMind is not just to beat games, fun and exciting...
Almost any game that their AI can play against itself is probably going to work. Except stuff like Pictionary where it's really important how a human, specifically, is going to interpret something.
I know a little bit about training neural networks, and I think it would be plausible to train one on a corpus of well-played StarCraft games to give it an initial sense of what it's supposed to do, and then, having achieved that, let it play against itself a million times. But I don't think there's any need to let it watch how humans play: if it plays enough games against itself, it should be able to work out good strategies on its own.
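Roughly the shape I have in mind, as a toy sketch only: a PyTorch-style policy network pretrained on replays, then updated from self-play wins and losses. StarCraftEnv, the replay loader, and the observation/action sizes below are placeholders I'm inventing, not anything DeepMind has described:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Maps an encoded game observation to logits over possible actions."""
    def __init__(self, obs_dim=1024, n_actions=64):  # sizes are arbitrary placeholders
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
        )
        self.head = nn.Linear(256, n_actions)

    def forward(self, obs):
        return self.head(self.body(obs))

def pretrain_on_replays(net, replay_loader, epochs=3):
    """Phase 1: imitate human play (supervised learning on replay data)."""
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    for _ in range(epochs):
        for obs, human_action in replay_loader:  # batches of (observation, action taken)
            loss = F.cross_entropy(net(obs), human_action)
            opt.zero_grad()
            loss.backward()
            opt.step()

def self_play(net, env, episodes=1_000_000):
    """Phase 2: REINFORCE-style self-play; win/loss replaces the human labels."""
    opt = torch.optim.Adam(net.parameters(), lr=1e-5)
    for _ in range(episodes):
        obs, done, log_probs = env.reset(), False, []
        while not done:
            dist = torch.distributions.Categorical(logits=net(obs))
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            obs, reward, done = env.step(action)
        # reward is +1 for a win and -1 for a loss at the end of the game
        loss = -reward * torch.stack(log_probs).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
```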
RTS is a bit of a special case because a lot of the skill involved is micromanagement and software is MUCH better at micromanagement than humans.
I don't expect to see highly sophisticated AI in games (at least adversarial, battle-it-out games) because there is no point. Games have to be fun which means that the goal of the AI is to gracefully lose to the human player after making him exert some effort.
You might be interested in Angband Borg.
I don't expect to see highly sophisticated AI in games (at least adversarial, battle-it-out games) because there is no point. Games have to be fun which means that the goal of the AI is to gracefully lose to the human player after making him exert some effort.
I'm not sure about that. A common complaint about these kinds of games is that the AIs blatantly cheat, especially on higher difficulty levels. I could very well see a market for an AI that could give the human a challenge without cheating.
May have been a vocal minority. You get some people incorrectly complaining about AI cheating in any game that utilizes randomness (Civilization and the new XCOMs are two examples I know of); usually this leads to somebody running a series of tests or decompiling the source code to show people that no, the die rolls are actually fair or (as is commonly the case) actively biased in the human player's favor.
This never stops some people from complaining, but a lot of others find the evidence convincing enough and just chalk it up to their own biases (and are less likely to suspect cheating when they play the next game that has random elements).
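The tests themselves tend to be very simple. Something like the following (made-up numbers, using scipy's binomtest): pool the logged shots that all displayed the same hit chance and check whether the observed hit rate is statistically consistent with it.

```python
# Hypothetical fairness check on logged rolls; the numbers are invented.
from scipy.stats import binomtest

displayed_hit_chance = 0.75
shots_taken = 400      # logged attempts at that displayed chance
shots_hit = 283        # how many of them actually hit

result = binomtest(shots_hit, shots_taken, displayed_hit_chance)
print(f"observed hit rate: {shots_hit / shots_taken:.3f}")
print(f"p-value vs. advertised {displayed_hit_chance:.0%}: {result.pvalue:.3f}")
# A large p-value means the logs are consistent with fair rolls;
# a tiny one would be evidence the RNG is biased (in either direction).
```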
Make the AI control a robot that looks at a physical screen and operates a physical mouse. Then it will be fair. ;)
Is Alphabet stock a good proxy for owning a piece of DeepMind? Alphabet hasn't gained much at all since AlphaGo started winning. Maybe a few percent, but within the normal fluctuations. Of course this might be because all the smart money knew AlphaGo was going to win.
I propose a game where there are resources to be identified (using these DNN computer vision algorithms), collected, and deposited at drop-off points. To advance embodied cognition, players get small robot drones of some sort, perhaps like a Roomba with a robot arm attached.
The resources include dirty socks and plates, and the game is called "tidy skeptical_lurker's house, because he can't be bothered".
Why isn't it obvious?
I know what I'd do.
Run the algorithm on the Bitcoin market, and then on the stock market.
They've successfully trained related AIs to play retro games, I believe including some with imperfect information.
Links to code etc. are in the YouTube video description.
Computers can play one-on-one Limit Hold 'em pretty close to "perfectly"; a very good approximation to the Nash equilibrium strategy has been computed, and computers can follow it. The standard tournament game of no-limit, 8-player Hold 'em is a lot more computationally intensive to solve, though, and I don't think computers are especially good at it.
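For anyone wondering what "computing an approximation to the Nash equilibrium" looks like at its simplest: below is regret matching, the basic update that (as far as I know) the CFR-family poker solvers build on, run here on rock-paper-scissors, where the equilibrium is known to be uniform. This is only a toy of my own; the limit Hold 'em work runs a counterfactual version of this over an enormous game tree.

```python
import numpy as np

# Row player's payoff for (rock, paper, scissors) vs. (rock, paper, scissors).
PAYOFF = np.array([
    [ 0, -1,  1],
    [ 1,  0, -1],
    [-1,  1,  0],
], dtype=float)

def current_strategy(regrets):
    """Play in proportion to positive regret; uniform if there is none yet."""
    pos = np.maximum(regrets, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(len(regrets), 1 / len(regrets))

def train(payoff, iters=100_000):
    n = payoff.shape[0]
    regrets = [np.zeros(n), np.zeros(n)]
    strategy_sums = [np.zeros(n), np.zeros(n)]
    for _ in range(iters):
        strats = [current_strategy(r) for r in regrets]
        for i in range(2):
            strategy_sums[i] += strats[i]
        # Expected payoff of each action against the opponent's current mix.
        action_vals = [payoff @ strats[1], -payoff.T @ strats[0]]
        for i in range(2):
            regrets[i] += action_vals[i] - strats[i] @ action_vals[i]
    # The *average* strategies are what converge to the Nash equilibrium.
    return [s / s.sum() for s in strategy_sums]

print(train(PAYOFF))  # both averages approach [1/3, 1/3, 1/3]
```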
What about chess? See if a DNN-based AI beats a conventional chess AI running on the same processing power. Many people are interested in chess, and if it could push chess theory forwards, that would be very interesting.
Here is something I'd like to see: you give the machine the formally specified ruleset of a game (Go, chess, etc.), wait while the reinforcement learning does its job, and out comes a world-class computer player.
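Concretely, I picture the "formally specified ruleset" as something like the small interface below, with a generic self-play loop on top. Everything here is invented for illustration (Nim stands in for a real ruleset, and a random move picker stands in for the learned player):

```python
import random
from abc import ABC, abstractmethod
from collections.abc import Sequence
from typing import Any

class GameRules(ABC):
    """Formally specified ruleset of a two-player, perfect-information game."""

    @abstractmethod
    def initial_state(self) -> Any: ...

    @abstractmethod
    def legal_moves(self, state: Any) -> Sequence[Any]: ...

    @abstractmethod
    def next_state(self, state: Any, move: Any) -> Any: ...

    @abstractmethod
    def result(self, state: Any) -> int | None:
        """+1 / -1 / 0 once the game is over, None while it is still running."""

class Nim(GameRules):
    """Tiny concrete ruleset: take 1-3 stones, last player to move wins."""

    def initial_state(self):
        return (15, +1)  # (stones left, player to move)

    def legal_moves(self, state):
        stones, _ = state
        return [n for n in (1, 2, 3) if n <= stones]

    def next_state(self, state, move):
        stones, player = state
        return (stones - move, -player)

    def result(self, state):
        stones, player = state
        return -player if stones == 0 else None  # the player who just moved wins

def self_play_game(rules: GameRules, choose) -> int:
    """Play one game; `choose(state, moves)` stands in for the learned policy."""
    state = rules.initial_state()
    while (outcome := rules.result(state)) is None:
        state = rules.next_state(state, choose(state, rules.legal_moves(state)))
    return outcome

print(self_play_game(Nim(), lambda state, moves: random.choice(moves)))
```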
Collectible card games are interesting to me. You get the imperfect information of poker, as well as a deckbuilding component that it seems like the AI should be good at (build a bunch of decks, play itself a few million times).
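Something like this hill-climbing loop is what I have in mind for the deckbuilding half (all names are made up, and play_game is stubbed with a coin flip purely so the skeleton runs; in reality it would be the AI playing out a full match between the two decks):

```python
import random

CARD_POOL = [f"card_{i}" for i in range(200)]  # invented card pool
DECK_SIZE = 40

def play_game(deck_a, deck_b):
    # Placeholder: the real version would have the AI play a full match.
    return random.random() < 0.5

def random_deck():
    return random.sample(CARD_POOL, DECK_SIZE)

def mutate(deck, swaps=3):
    """Swap a few cards to get a nearby candidate deck."""
    new = deck.copy()
    for _ in range(swaps):
        new[random.randrange(DECK_SIZE)] = random.choice(CARD_POOL)
    return new

def win_rate(deck_a, deck_b, games=1000):
    return sum(play_game(deck_a, deck_b) for _ in range(games)) / games

def evolve_deck(generations=500):
    """Hill-climb: keep whichever deck wins more of its self-play matches."""
    best = random_deck()
    for _ in range(generations):
        challenger = mutate(best)
        if win_rate(challenger, best) > 0.5:
            best = challenger
    return best

if __name__ == "__main__":
    print(evolve_deck(generations=50)[:5], "...")
```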
Personally, I'm waiting for an AI that can outperform experts in Fantasy Football.
No small feat either. The sheer amount of data that needs to be processed is tremendous (think about all of the physical possibilities across all the football teams/games). Humans have the benefit of heuristics. Chess and Go are one thing. But being able to draft a winning fantasy team is a lot harder than it seems.
So chess and Go are both games of perfect information. How important is it for the next game that DeepMind is trained on to be a game of perfect information?
How would the AI perform on generalized versions of both chess and Go? What about games like poker and Magic: The Gathering?
How realistic do you think it is to train DeepMind on perfect-information (full-map-reveal) versions of games like StarCraft, AOE2, Civ, Sins of a Solar Empire, Command and Conquer, and Total War against top-ranked players, for example? (In all possible map settings, including ones people don't frequently play, e.g. starting at "high resource" levels.) How important is it for the AI to have a diverse set/library of user-created replays to test itself against, for example?
I'm also thinking... shitty AI has always held back both RTS and TBS games. Is it possible that we're only a few years away from non-shitty AI in all RTS and TBS games? Or is the AI in many of these games too hard-coded to actually matter? (E.g. I know some people who develop AI for AOE2, and there are issues with AI behavior being hard-coded into the game - e.g. villagers deleting the building they're constructing if you simply attack them.)