The future unfolds on a 2,500-year-old board

Published: Wed, 03/16/2016 - 12:20

The defeat of the world’s best player of Go at the hands of a machine is a seminal event for many reasons, but let’s not get ahead of ourselves

Dhruba Basu, Delhi 

Fascinating things have been happening in Seoul, things that thankfully do not involve Psy or its friendly neighbourhood dictator Kim Jong Un. The South Korean capital has just wrapped up a contest that is expected, by all accounts, to have significant implications for the human race. 

Facing off to decide our fate were 33-year-old Lee Sedol, the world’s top player of Go, and AlphaGo, a program designed by Google DeepMind for the express purpose of besting humans at the Chinese board game. Three games in, Sedol, who had expressed confidence about his prospects, was three games down, and the horizon of AI possibilities was believed to have opened up in a way that, as recently as 2014, was not thought imminently possible. 

Sedol came back strongly in the fourth game to secure his first victory on Sunday, March 13, but lost the final game yesterday, and with it the match, 1-4. 

The match was revelatory and hugely engrossing for the 60-million-or-so followers of what is regarded by some as possibly the world’s oldest board game. AlphaGo had already whitewashed European champion Fan Hui 5-0 in October last year, the first time a computer had beaten a professional at Go, though European players are considerably less skilled than their counterparts from China, Japan and South Korea. Against Sedol, it consistently made moves that baffled experts, commentators and of course Sedol himself, who had to completely revamp his strategy to force the program to resign for his consolation win. 

The fact of the matter is that, while Go is, like chess, a game of mental strategy between White and Black pieces, it is different from and far more complex than chess in the fundamental nature of its gameplay

Much of the intrigue is nevertheless lost on those who either do not know of or do not follow Go; for them, the event’s value lies elsewhere. 

It is reasonable to wonder why a match between man and machine should assume such importance in 2016, given that chess programs have been beating players since the 1980s and have long been playing at levels unmatched in the history of human chess, not to mention seminal developments on the Jeopardy! and video-game fronts, and that computers might very soon be doing our driving for us. The fact of the matter is that, while Go is, like chess, a game of mental strategy between White and Black pieces, it is different from and far more complex than chess in the fundamental nature of its gameplay: it is played on the intersection points of a 19x19 grid, with the aim of achieving territorial domination, and without the kinds of restrictions on piece movement that characterise chess. The oft-cited consequence is that there are more possible positions in Go than there are atoms in the observable universe. This aspect of Go is truly game-changing (pun intended), because it necessitates artificial intelligence, as distinct from, and more than, the sheer computing power that was seen, for instance, in Deep Blue. 
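For a sense of scale, a back-of-the-envelope calculation (a sketch, not a formal count) makes the point: each of the 361 intersections can be empty, black or white, so 3 to the power of 361 is a crude upper bound on board configurations, and it already dwarfs the commonly cited figure of roughly 10^80 atoms in the observable universe.

```python
# Back-of-the-envelope scale check: each of the 361 intersections on a
# 19x19 board can be empty, black or white, so 3**361 is a crude upper
# bound on configurations (the count of strictly legal positions is
# lower, but still around 10**170).
import math

go_upper_bound = 3 ** (19 * 19)   # three states per intersection
atoms_estimate = 10 ** 80         # commonly cited figure for the universe

print(f"Go upper bound ~ 10^{math.log10(go_upper_bound):.0f}")   # ~10^172
print(f"Atoms estimate ~ 10^{math.log10(atoms_estimate):.0f}")   # 10^80
```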

Let’s break this difference down. With chess programs like Deep Blue, the idea is that the computer moves by comparing all possible responses and ensuing positions, and selecting the one that yields the best result under the rules and objectives of chess. The more exhaustive the search, the better the move. In the case of Go, the search space for this kind of ‘brute force’ operation is immeasurably vaster, rendering the method inefficient and the game not merely analytical but intuitive as well. Advantage humans? 
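To see why the method collapses, consider the skeleton of such a search. The sketch below is illustrative only, not Deep Blue’s actual code, and the game interface (legal_moves, play, is_over, score) is hypothetical; the point is that the work grows as the branching factor raised to the search depth, roughly 35 per move in chess but around 250 in Go.

```python
# Skeleton of the 'brute force' idea: score every line of play to a fixed
# depth and pick the best. Illustrative only; the game interface
# (legal_moves, play, is_over, score) is hypothetical.
def minimax(state, depth, maximising=True):
    if depth == 0 or state.is_over():
        return state.score()        # static evaluation at the leaf
    values = (minimax(state.play(m), depth - 1, not maximising)
              for m in state.legal_moves())
    return max(values) if maximising else min(values)

# The cost is (branching factor) ** depth, and Go's branching factor is huge:
print(f"chess, depth 6: ~{35 ** 6:.1e} positions")    # ~1.8e9, feasible
print(f"Go,    depth 6: ~{250 ** 6:.1e} positions")   # ~2.4e14, hopeless
```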

The promise that this kind of program holds is best envisioned as a marriage between the memory and calculative powers of a computer and the intellectual responsiveness and dynamism of a human being, amounting almost to an all-purpose problem-solving mechanism

Enter DeepMind, the British AI company Google acquired in 2014. To address these seemingly intractable complexities, it developed a program that plays out the game from any given position millions of times, evaluates the resultant positions and chooses the best move. How is this different from brute-force search? In two all-important ways: firstly, the number of options that must be simulated and evaluated is limited by predictive intelligence, i.e. the computer predicts responses on the basis of its knowledge of many millions of human games; secondly, it does not merely react or mimic, but actively strategises, i.e. it devises and improves its own strategies by playing itself, and uses this knowledge to evaluate positions. 
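A toy sketch of those two ideas, emphatically not DeepMind’s actual code and resting on the same hypothetical game interface as before, might look like this: a policy prunes the candidate moves, and repeated random playouts estimate how promising each survivor is.

```python
import random

# Toy sketch (not DeepMind's code) of the two ideas above, using a
# hypothetical game interface: a policy prunes the candidates, and
# random playouts estimate each survivor's worth.
def choose_move(state, policy, n_candidates=5, n_rollouts=1000):
    # Predictive pruning: simulate only the policy's top-rated moves,
    # not all ~250 legal ones.
    candidates = sorted(state.legal_moves(), key=policy, reverse=True)[:n_candidates]

    def estimated_value(move):
        # Evaluation by simulation: play the game out at random many
        # times and take the average outcome as the move's value.
        wins = 0
        for _ in range(n_rollouts):
            s = state.play(move)
            while not s.is_over():
                s = s.play(random.choice(s.legal_moves()))
            wins += s.we_won()
        return wins / n_rollouts

    return max(candidates, key=estimated_value)
```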

These operations are performed by two deep neural networks: the policy network suggests the moves, and the value network evaluates the positions, both honed through a process of reinforcement learning. Neural networks are sophisticated systems that, to put it in a way that avoids nigh-incomprehensible jargon, are designed to function in a manner loosely modelled on the living brain. 
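In rough outline, and with interfaces that are assumptions for illustration rather than AlphaGo’s actual API, the division of labour looks like this:

```python
# Rough outline of the division of labour; the interfaces here are
# assumptions for illustration, not AlphaGo's actual API.
class PolicyNetwork:
    def move_probabilities(self, state):
        """Suggest moves: map a position to a probability per legal move."""
        raise NotImplementedError   # stands in for a trained deep network

class ValueNetwork:
    def win_probability(self, state):
        """Evaluate positions: estimate the chance of winning from here."""
        raise NotImplementedError   # stands in for a trained deep network

def promising_moves(state, policy, value, k=5):
    probs = policy.move_probabilities(state)          # the policy proposes...
    best = sorted(probs, key=probs.get, reverse=True)[:k]
    return {m: value.win_probability(state.play(m))   # ...the value net judges
            for m in best}
```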

In other words, their performance does not hinge on fixed, hand-written rules; instead, they learn by example, adapt and improve, recognising patterns that are too subtle for human cognition and providing solutions by collating and comparing data from the examples and patterns that they have had to process in the past. 
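A deliberately tiny illustration of learning by example: a single artificial neuron that adjusts its weights whenever a labelled example proves it wrong, eventually inferring the logical OR rule that nobody wrote into it. Real networks stack millions of such units; this is only a sketch of the principle.

```python
# Toy illustration of 'learning by example': a single artificial neuron
# adjusts its weights after each labelled example instead of following a
# hand-written rule. (Deliberately minimal; real networks stack millions
# of such units.)
def train(examples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred              # learn from the mistake
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learns the logical OR pattern purely from examples:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train(data))
```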

The promise that this kind of program holds is best envisioned as a marriage between the memory and calculative powers of a computer and the intellectual responsiveness and dynamism of a human being, amounting almost to an all-purpose problem-solving mechanism: artificial intelligence, no less. 

It would be a mistake to assume that networks of this nature are recent discoveries. In fact, they have been around and in action for over half a century now. The first one to achieve real-world applicability was developed by Stanford University professor Bernard Widrow in 1959. MADALINE (Multiple ADAptive LINear Elements), as it was christened, is still used to eliminate echoes on phone lines. However, this was followed by a two-decade lull due to the dominance of traditional computing architectures. Upon the field’s revival in 1982, it found itself handicapped by the limitations of processors, which compromised the pace of research. Thus, while advancements in neural networks hold the potential to automate tasks and significantly reduce errors in medicine, engineering, vehicular control, traffic control, water treatment and the like, that potential remains largely untapped. 

It must also be remembered that, for software to perform at the level of AlphaGo, the hardware it is powered by must be commensurate

And this is precisely what makes the success of AlphaGo so potentially pivotal in the long run: we now have a neural network program that can consistently outsmart the best human minds at a game that has proved to be unwinnable through conventional algorithmic analytics. 

That it constitutes a breakthrough, bringing us a step, maybe several steps, closer to a bewildering, maybe beguiling, array of benefits for mankind, is undeniable and central to the hype that has surrounded the match. At the same time, it bears keeping in mind that the relationship between what glitters and what is gold can never be taken for granted. For instance, cynics can look to South Korea and China for somewhat more jaded perspectives on AlphaGo’s triumph, or turn to the ever-growing literature that views AI askance for the tall claims and premonitions of apocalypse associated with developments in the field. Apart from the fact that it is not clear exactly how Google intends to take it forward from here, it must also be remembered that, for software to perform at the level of AlphaGo, the hardware it is powered by must be commensurate; to be precise, ‘280 GPUs and 1920 CPUs . . . significantly more computational power than any prior reported Go program used, and a lot of hardware in absolute terms.’ A lot of hardware, i.e. a lot of power. 

The obvious conclusion to draw from the above is that there is much to address before our expectations of DNNs and AI can be realised. The less obvious one is that Lee Sedol, who is human and therefore susceptible, unlike AlphaGo, to such things as nerves, fatigue and disheartenment, held his own for a game against the equivalent of more than 2,000 processing units. He, too, surely deserves a long round of applause. 
