Image taken from gogameguru.com
Last time, we explored the battle of Deep Blue vs Garry Kasparov here, and the conclusion was pretty bleak for humans in the battle against machines. Enter 2016, and once again man pits wits against algorithm over a different game: Go.
Chess vs Go
To the uninitiated, Chess might seem a more complex game than Go: different pieces have different movements and rules, while Go can be reduced to placing stones on a board. For humans this largely holds true. The way our memory and usual learning methods work lets us pick up Go much more easily than Chess, since the game has fewer dimensions to absorb.
The premise of Go is to capture opponent pieces by surrounding them and to control as much of the board as possible by the end of the game, which arrives when both players pass in succession, indicating that neither has a move left to make.
My attempt at learning Go on a 9×9 board, three games in, and I’m getting the hang of some general concepts. Spoiler alert: I lost this one too.
The way we humans interpret and learn to play games can be compartmentalized into two areas: logic and memory. As we iterate through games and losses, we identify specific plays that worked well and commit them to memory. This is feasible in small games with few total moves, such as tic-tac-toe, but games of Chess and Go far exceed such numbers. This is where logic comes into play: we assemble these moves into a coherent flow of interactions that forms a strategy.
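To put rough numbers on that gap, game complexity is often sketched as the average branching factor raised to a typical game length. A quick Python sketch, using the commonly cited Shannon-style estimates of roughly 35^80 for Chess and 250^150 for Go (rough figures, not exact counts):

```python
import math

# Rough game-tree sizes: (average branching factor) ** (typical game
# length). The tic-tac-toe figure is an exact upper bound (9! move
# orderings); Chess and Go use commonly cited rough estimates.
games = {
    "tic-tac-toe": math.factorial(9),  # at most 362,880 move sequences
    "Chess": 35 ** 80,
    "Go": 250 ** 150,
}

for name, size in games.items():
    # len(str(size)) - 1 is floor(log10) without float overflow
    print(f"{name}: about 10^{len(str(size)) - 1} possible games")
```

Tic-tac-toe sits around 10^5, Chess around 10^123, and Go around 10^359: memorizing plays stops scaling almost immediately, which is why strategy has to take over.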
Our brain’s working memory is wired to process five to nine pieces of information at any one time, which makes it easy for us to grasp the intuition behind Go concepts, while the greater variety of pieces in Chess makes this more difficult. Simply put, understanding at a glance what has happened in a game of Go is easier for us humans than doing the same for Chess.
However, for computers, which rely on generating potential outcomes to determine their moves, the two games are flipped in difficulty. Getting an AI program to tackle Go posed two major challenges: the size of the 19×19 board, and the lack of a definite end state for the game.
The size of the board poses a huge numbers challenge. Expanding from a 9×9 board to a 19×19 one exponentially increases the number of possible moves and the resulting decision trees, demanding an exceptionally large amount of computing power from the AI. This prevents the computer from looking far ahead or coming up with broad-based tactical solutions on the board. Shrink the board, however, and getting a computer to solve the game becomes entirely possible. In 2002, a computer program called MIGOS (MIni GO Solver) completely solved the game of Go for a 5×5 board. Black won, taking the whole board. Go programs have also been shown to do well on the beginner’s 9×9 board.
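The explosion is easy to see from a crude upper bound: every point on an n×n board is empty, black, or white, so there are at most 3^(n²) arrangements. Most of those are illegal or unreachable, but the growth rate is the point. A quick sketch:

```python
# Crude upper bound on Go board configurations: every point on an
# n x n board is empty, black, or white, giving at most 3 ** (n * n)
# arrangements. Most are illegal or unreachable, but the jump from
# 5x5 to 19x19 shows why brute force stops working.
for n in (5, 9, 19):
    count = 3 ** (n * n)
    # len(str(count)) - 1 is floor(log10) without float overflow
    print(f"{n}x{n}: 3^{n * n}, about 10^{len(str(count)) - 1}")
```

A 5×5 board tops out near 10^11 arrangements, within reach of exhaustive solvers like MIGOS; the full 19×19 board allows on the order of 10^172, hopelessly beyond them.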
While Chess’s additional rules and piece diversity add complexity for humans learning the game, they actually simplify it for computers, allowing them to discard illegal moves and narrow the game down into simpler, usable structures.
The endgame is perhaps the most difficult part of Go from the AI’s perspective. For humans it is simply a matter of both players recognizing the board state, but the lack of a fixed end condition makes Go a big challenge for AI. Although Go is a finite game, there are multiple possible endgame states and scenarios, and AI programs often misplay these situations: unable to pinpoint which endgame state they are playing towards, they over-optimize on one portion of the board without considering the possibility that the game is ending.
AlphaGo had a lot going for it coming into its match with Lee Sedol, having had a big boost in computing power from technological advancements over the years. It utilized Monte Carlo tree search, a means of dealing with enormous decision trees by estimating the best outcomes through random sampling (this explanation doesn’t get easier), as well as two machine learning neural networks: a policy network that suggests promising moves and a value network that evaluates board positions.
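For the curious, the bare select/expand/simulate/back-up loop of Monte Carlo tree search can be sketched on a toy game. The example below uses Nim (take 1 to 3 stones from a pile; whoever takes the last stone wins) purely for illustration, and all names in it are made up; AlphaGo’s actual search also folds in neural-network move suggestions and position evaluations:

```python
import math
import random

# Minimal Monte Carlo tree search on Nim: take 1-3 stones from a
# pile; whoever takes the last stone wins. Illustrative only.

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile, self.parent, self.move = pile, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.pile and m not in tried]

def select(node):
    # Descend the tree, at each step picking the child that maximizes
    # the UCB1 score (win rate plus an exploration bonus).
    while not node.untried_moves() and node.children:
        node = max(node.children, key=lambda c: c.wins / c.visits
                   + math.sqrt(2 * math.log(node.visits) / c.visits))
    return node

def rollout(pile):
    # Play uniformly random moves to the end of the game. Return +1 if
    # the player to move at the start of the rollout wins, else -1.
    turn = 0
    while pile > 0:
        pile -= random.choice([m for m in (1, 2, 3) if m <= pile])
        turn ^= 1
    return 1 if turn == 1 else -1  # the player who just moved won

def mcts(pile, iters=2000):
    root = Node(pile)
    for _ in range(iters):
        node = select(root)
        moves = node.untried_moves()
        if moves:                      # expand one untried move
            m = random.choice(moves)
            node.children.append(Node(node.pile - m, node, m))
            node = node.children[-1]
        result = rollout(node.pile)    # simulate from the new position
        while node:                    # back up, flipping sides per level
            node.visits += 1
            node.wins += (1 - result) / 2
            result = -result
            node = node.parent
    # Play the most-visited move at the root.
    return max(root.children, key=lambda c: c.visits).move

print(mcts(5))  # in Nim, the winning move from a pile of 5 is to take 1
```

The key property is that nothing here needs to evaluate a position directly; random playouts plus aggregation stand in for understanding, which is exactly what made the method attractive for Go.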
Deciding that the fastest way to learn would be from the best, its developers initially built the system’s neural networks from human gameplay expertise. AlphaGo was first trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves. Once it had reached a certain degree of proficiency, it was trained further by playing large numbers of games against other instances of itself, using reinforcement learning to improve its play.
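As a toy illustration of that two-phase recipe, the sketch below trains a lookup-table “policy” for Nim (take 1 to 3 stones; taking the last stone wins): a supervised phase that counts expert moves, followed by self-play reinforcement that boosts the winner’s choices. Everything here, including the game, the update rules, and the numbers, is illustrative rather than AlphaGo’s actual method:

```python
import random
from collections import defaultdict

# Toy two-phase training on Nim (take 1-3 stones; taking the last
# stone wins). Phase 1 imitates an expert by counting its moves;
# phase 2 improves the policy by self-play reinforcement.

def legal(pile):
    return [m for m in (1, 2, 3) if m <= pile]

# prefs[pile][move]: unnormalized weight for choosing `move` at `pile`
prefs = defaultdict(lambda: {1: 1.0, 2: 1.0, 3: 1.0})

def sample_move(pile):
    moves = legal(pile)
    return random.choices(moves, weights=[prefs[pile][m] for m in moves])[0]

# --- Phase 1: imitate expert games (the expert leaves a multiple of 4) ---
def expert_move(pile):
    return pile % 4 if pile % 4 in legal(pile) else random.choice(legal(pile))

for _ in range(500):
    pile = random.randint(1, 12)
    while pile:
        m = expert_move(pile)
        prefs[pile][m] += 1.0   # count expert choices, like supervised fitting
        pile -= m

# --- Phase 2: self-play reinforcement ---
for _ in range(2000):
    pile, history, player = random.randint(1, 12), [], 0
    while pile:
        m = sample_move(pile)
        history.append((player, pile, m))
        pile -= m
        player ^= 1
    winner = player ^ 1          # whoever moved last took the final stone
    for p, s, m in history:      # boost the winner's moves, damp the loser's
        prefs[s][m] *= 1.1 if p == winner else 0.95

print(max(legal(5), key=lambda m: prefs[5][m]))  # optimal from 5 is to take 1
```

The imitation phase gets the policy into a sensible region quickly, and self-play then sharpens it beyond its teacher, which is the same division of labor the article describes for AlphaGo.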
AlphaGo, play yourself! Image taken from dustmoon.com
vs. Lee Sedol
Most would have expected a highly optimized Go AI: one that tapped expert knowledge, imitated expert plays, and used its computing power to solve the tactical questions posed by complex, open-ended decision trees, at most eking out victories against its opponent through marginally better decision making.
What happened next, however, shocked the world of Go. The AlphaGo program did not simply imitate and optimize. It began to create, developing a new move that had never been used before. While the community was initially befuddled by the seemingly bizarre stone placements, the moves began to make sense in hindsight. AlphaGo was not simply winning; it was schooling one of the game’s masters.
Human genius, however, would not be so easily defeated. Lee Sedol continuously adapted to AlphaGo as the series progressed, even scoring a victory in the fourth game. Test the AI he did, and he came close to taking the fifth game as well. The victory of machine over man in this encounter was resounding, but it was not complete.
What does this mean?
With Moore’s law delivering an increase in computing power every year, the victory of machine over man in fixed games such as Go was a matter of eventuality. The amazing thing about AlphaGo is that its system was built along the lines of a general-purpose AI, unlike the specialized chess programs that have dominated that field for the past decade. Fed the right data, AlphaGo’s system could just as easily be trained to take on other games or general-purpose tasks.
Dealing with big data in all sorts of networked systems has become the big challenge of the past decade. As much as our processing power has helped us gather and translate such data, the challenge often lies in interpretation. AlphaGo might very well be one of the foundational blocks for how AI systems are designed and applied to large datasets with complex decision trees. Combining neural networks with Monte Carlo tree search, so that an AI learns to create its own guiding structures for sifting through large amounts of data, might help curb the hubris of relying on raw correlation in current big data methods. AI might soon be able to approximate causation.
For us humans, as resounding as the defeat was, the match is perhaps an indication of what lies ahead for the future of Go. AlphaGo’s ability to develop new moves, and Lee Sedol’s valiant adaptation to them, is testament that the game of Go has not been solved yet. While Chess players have long enjoyed fairly competent AI as training aids and practice, AlphaGo was a leap in power and capability that had not been seen previously. Given time for top Go players to use similar AI to develop their own game and find exploits in the way the AI works, a rematch with AlphaGo might yield a different result.
The battle of Man vs Machine is bound to continue in Go, this time pitting humans’ ability to learn and innovate against the neural networks of machine learning. While trends from Man vs Machine battles in other games suggest a bleak outcome, there is still hope for Go.
About the Author
As a competitive gamer, Jensen’s personal field is the study of winning. As a Shoutcaster for Garena League of Legends, Jensen loves to discuss the E-sports industry: how is it perceived? And how does it interact with our society? He is also a firm believer that competitive gaming will be recognized in the future. Trust him, he’s an engineer.