Part 1: The Quest for Artificial Intelligence - The Deep Blue Story
In the early months of 1997, within the majestic Equitable Center building in midtown Manhattan, occupying the 32nd floor in its entirety, sat one of the most powerful computing machines ever built by human hands. It sat there quietly (unconscious, of course), whirring in its mechanical glory, cooled by a dozen fans, weighing thirty-two tons in all, and containing 32 nodes (a node being a complete machine in itself), each equipped with an integrated chip, the heart of a computer, capable of performing more than a hundred million computations per second. Working in parallel, all the chips together could churn through possibilities at a speed and scale no human could approach.
In building those chips lay the work of outstanding engineers, information scientists, software developers, student researchers and corporate sponsors. The circuitry of each chip was designed and refined over twenty years of pioneering academic work, all for one single purpose: to play chess as well as humans do, if not better.
This behemoth of a machine was christened “Deep Blue” by IBM, which sponsored the effort. The prefix “Deep” derives from “Deep Thought”, the fictional supercomputer in Douglas Adams’s popular book “The Hitchhiker’s Guide to the Galaxy”.
For a long time, the game of chess has been considered the archetypal symbol of human intelligence. The spectacle of the chessboard with its sixty-four squares in black and white; the pieces on the board metamorphosing into rival armies of kings, queens, bishops, knights, pawns and rooks; the curious rules, the strategic openings, the rigid movement of the pawns across the board, the unbelievable number of documented attacks and defenses; the subtlety of the psychological warfare between players; and, above all, the capacity of the human brain to think ahead and visualize possible future states of the board based on the current move: all of this makes chess a highly cerebral game, one that demands an extraordinarily nimble brain to store, assess, refactor and compute over long stretches of time.
To a large extent, it is all to do with memory. In general, chess is played and relished by people with an aptitude for retaining information coupled with exceptional computational skill. The picture of a man playing chess, head bowed, elbows resting on the table, chin cradled in his palms, eyes deeply focused, concentrating, oblivious of his surroundings and completely absorbed in the intellectual battle at hand, has provided an irresistible image of chess as the epitome of supreme intelligence suffused with artful maneuvers.
There was something about chess that went beyond the mathematics of it, and culturally, winning a game of chess became a measure of intelligence. Even Claude Shannon, the founding father of information theory, worked out a strategy for computers to play chess around 1950. Shannon insightfully perceived the deep connection between representing information as binary rules and the ability to apply those rules to a game such as chess. But to him, and to many who came after, such efforts were merely intellectual experiments, with no conviction of immediate consummation. Predicting the next move, or the series of moves after that, involved an algorithmic complexity that humans seem to possess effortlessly and that machines found so difficult to emulate.
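Shannon's strategy rested on scoring a position with a weighted sum of features, the heaviest of which is material. A minimal sketch of that material term in Python (the board encoding and the function name are illustrative conventions, not Shannon's notation):

```python
# Shannon-style material evaluation: sum the classic piece values for
# White, subtract the same sum for Black. Uppercase letters denote
# White pieces and lowercase Black; this encoding is an illustrative
# assumption, not part of Shannon's paper.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # Shannon's weights

def material_score(pieces):
    """Score an iterable of piece letters; positive favors White."""
    score = 0
    for p in pieces:
        value = PIECE_VALUES.get(p.upper(), 0)  # kings contribute 0 here
        score += value if p.isupper() else -value
    return score
```

In Shannon's full scheme this material term is combined with smaller positional terms such as mobility and pawn structure; Deep Blue's evaluation function, decades later, weighed thousands of such features.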
Great players “seem to know” what their opponents are thinking, and are able to devise a strategy based on the context. With computers, there is no context: every move has to be tested and evaluated from scratch, with no sense of ‘knowing’. But the disadvantage of lacking context is often offset by relentless computational power and the absence of any conscious attachment to the task at hand. In his wonderful book “Gödel, Escher, Bach”, the brilliant cognitive scientist Douglas Hofstadter hinted at the possibility that machines might someday be computationally fast and powerful enough to play, and possibly win, at chess, though he was not in favor of such success. In a poignantly prescient statement, Professor Hofstadter wrote: “What bothers me is the degree to which something incredibly simpler than our brain is starting to be able to do things that we do in surprisingly strong ways. It's taking away from the complexity of what we really are.”
Chess represented a unique capacity of the human brain, and to program machines to take over that capacity seemed dehumanizing, and in some respects demeaning too. It is fascinating that the real race between Man and Machine in chess began as a student research project in the early eighties. When Feng-hsiung Hsu arrived in the USA from Taiwan in 1982 with a Bachelor’s degree in Electrical Engineering, the only thing he was certain of was his conviction that a machine that could compute faster would have a great advantage in playing chess. His graduate work at Carnegie Mellon focused on integrated circuits, those mysterious connections that pulse electric signals through the labyrinthine circuitry of the chip (which shrinks further each year) to perform hundreds of millions of computations. It is the design of the chip and its intricate circuits that powers a computer. Moore’s law, the famous prediction that the number of transistors (the channels of communication and computation) that can be packed into a tiny microchip doubles roughly every two years, has a physical limit, but in the eighties the field was wide open and the law still held.
Hsu worked with three different teams and projects before IBM recruited him to work on Deep Blue. In 1989, shortly before Hsu joined IBM, “Deep Thought”, the machine that Hsu had helped build at Carnegie Mellon, lost to Russian Grandmaster Garry Kasparov, the reigning world champion.
Hsu learned a number of things from that loss, the foremost being that the number of strategic moves the machine could evaluate was still not equal to the task. Humans instinctively know what to play and what to avoid, but a machine has no such self-conscious knowledge. The specific algorithm designed for chess was built around an “evaluation function”, and the program applied brute force to each move: taking each move as a discrete unit and evaluating all possible continuations from that point onwards. The machine cannot reject any move based on “experience”. Its incredible efficiency lay in slogging through the millions of evaluations needed to assess the impact of a single move on the chessboard. Its strength was its relentlessness and tirelessness.
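The brute-force scheme described above, in which every legal move is expanded and every resulting position scored with no appeal to experience, is in essence minimax search. A toy sketch in Python, using a trivial subtraction game instead of chess so the whole game tree fits in a few lines (the game and all names are illustrative; Deep Blue's actual search was a massively parallel variant running partly in custom hardware):

```python
# Brute-force minimax (negamax form) on a toy game: players alternately
# take 1 or 2 stones, and whoever takes the last stone wins. As in the
# text, no move is rejected on "experience"; every option is expanded
# and evaluated from scratch.

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def negamax(stones):
    """Return +1 if the player to move can force a win, else -1."""
    if stones == 0:
        return -1  # the previous player took the last stone: we lost
    # Our best outcome is the worst outcome we can inflict on the opponent.
    return max(-negamax(stones - m) for m in legal_moves(stones))

def best_move(stones):
    """Pick the move that leaves the opponent in the worst position."""
    return max(legal_moves(stones), key=lambda m: -negamax(stones - m))
```

In this game any pile whose size is a multiple of 3 is lost for the player to move, and the search rediscovers that fact purely by exhaustive evaluation, which is exactly the relentlessness the author describes. Chess differs chiefly in scale: the tree is astronomically larger, which is why real chess programs prune it with techniques such as alpha-beta search and depth limits.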
When IBM announced the Deep Blue challenge, Hsu knew what needed to be done. He must find a way to pack more computations on a single chip, and the software (evaluation function) must be fed with sufficient training data. Both of which needed money, effort and co-ordination, and IBM was willing to put up the stakes necessary.
About the Author
Balasubramaniam N has more than 25 years of experience in IT, with special emphasis on programming languages and databases. In the last decade Bala has focused on evangelizing full-stack development along with platforms, tools and techniques for Big Data and Data Analytics. Bala is passionate about training and has supported the training needs of customers across the globe, skilling their employees to work on specific projects and demonstrating proofs of concept (POC) in transforming business processes using refined technical toolsets. At NIIT, Bala heads the Center of Excellence for Technology for the Corporate Learning business, and also leads the Tech Academy, an internal initiative to build a qualified pool of mentors across domains. He continues to teach as much as he can.