AlphaGo

The first computer program to ever beat a professional player at the game of Go.

Re-watch AlphaGo take on Lee Sedol, the world's top Go player, in the Google DeepMind Challenge Match.

Final score: AlphaGo 4 - Lee Sedol 1.

The Game of Go

The game of Go originated in China more than 2,500 years ago. The rules of the game are simple: Players take turns to place black or white stones on a board, trying to capture the opponent's stones or surround empty space to make points of territory. As simple as the rules are, Go is a game of profound complexity. There are more possible positions in Go than there are atoms in the universe. That makes Go a googol times more complex than chess.
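As a rough back-of-the-envelope check on those comparisons, the short Python sketch below estimates the orders of magnitude involved. The atom and chess figures are commonly cited approximations assumed purely for illustration, not numbers stated on this page.

    # Back-of-the-envelope comparison of the Go and chess state spaces.
    # The atom and chess figures are commonly cited order-of-magnitude
    # estimates, assumed here purely for illustration.
    BOARD_POINTS = 19 * 19              # 361 intersections on a Go board
    go_positions = 3 ** BOARD_POINTS    # each point empty, black or white
                                        # (upper bound; includes illegal positions)
    atoms_in_universe = 10 ** 80        # rough estimate, observable universe
    chess_positions = 10 ** 47          # rough upper bound for chess

    def magnitude(n):
        """Order of magnitude of a positive integer: floor(log10(n))."""
        return len(str(n)) - 1

    print(f"Go positions:  ~10^{magnitude(go_positions)}")
    print(f"Go vs. atoms:  ~10^{magnitude(go_positions // atoms_in_universe)} times larger")
    print(f"Go vs. chess:  ~10^{magnitude(go_positions // chess_positions)} times larger (a googol is 10^100)")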

Go is played primarily through intuition and feel, and because of its beauty, subtlety and intellectual depth it has captured the human imagination for centuries. AlphaGo is the first computer program to ever beat a professional human player. Read more about the game of Go and how AlphaGo uses machine learning to master this ancient game.

Match Details

In October 2015, our program AlphaGo won 5-0 in a formal match against the reigning three-times European Champion, Fan Hui, becoming the first program ever to beat a professional Go player in an even game. AlphaGo then went on to face its ultimate challenge.

In March 2016, AlphaGo won 4-1 against the legendary Lee Sedol, the top Go player in the world over the past decade. The matches were held at the Four Seasons Hotel in Seoul, South Korea, on March 9th, 10th, 12th, 13th and 15th, and were livestreamed on DeepMind's YouTube channel as well as broadcast on TV throughout Asia, including on Korea's Baduk TV and channels in China, Japan, and elsewhere.

They were played under Chinese rules with a komi of 7.5 (the compensation points the player who goes second receives at the end of the game). Each player received two hours of main time per game, with three 60-second byoyomi periods (countdown periods used once their allotted time has run out).
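To make that time control concrete, here is a minimal Python sketch of one player's clock under these rules; the class name and the simplified byoyomi accounting are illustrative assumptions, not details of the software used in the match.

    # Minimal sketch of the time control described above: two hours of main
    # time, then three 60-second byoyomi periods. The class name and the
    # simplified byoyomi accounting are assumptions for illustration only.
    class ByoyomiClock:
        def __init__(self, main_s=2 * 60 * 60, periods=3, period_s=60):
            self.main_s = main_s        # remaining main time in seconds
            self.periods = periods      # remaining byoyomi periods
            self.period_s = period_s    # length of each byoyomi period

        def spend(self, move_s):
            """Charge one move's thinking time; return False on loss by time."""
            used_main = min(self.main_s, move_s)
            self.main_s -= used_main
            move_s -= used_main
            # Once main time is gone, each full overrun of a period burns it;
            # finishing a move inside the current period leaves it intact.
            while move_s > self.period_s and self.periods > 0:
                self.periods -= 1
                move_s -= self.period_s
            return self.periods > 0 or self.main_s > 0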

Nature Paper Details

Our Nature paper, published on 28th January 2016, describes the technical details behind a new approach to computer Go that combines Monte Carlo tree search with deep neural networks that have been trained by supervised learning from human expert games and by reinforcement learning from games of self-play.

The game of Go is widely viewed as an unsolved “grand challenge” for artificial intelligence. Despite decades of work, the strongest computer Go programs still only play at the level of human amateurs. In this paper we describe our Go program, AlphaGo. This program was based on general-purpose AI methods, using deep neural networks to mimic expert players, and further improving the program by learning from games played against itself. AlphaGo won over 99% of games against the strongest other Go programs. It also defeated the human European champion by 5–0 in an official tournament match. This is the first time ever that a computer program has defeated a professional Go player, a feat previously believed to be at least a decade away.
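As a very rough illustration of how these pieces fit together, the sketch below shows a simplified Monte Carlo tree search whose expansions are guided by a policy network and whose leaf evaluations come from a value network. The names policy_net, value_net and apply_move are hypothetical stand-ins, and details such as rollouts, alternating-player value signs and the exact selection formula are omitted; this is an assumption-laden sketch, not the paper's implementation.

    import math

    # Hypothetical stand-ins (not names from the paper):
    #   policy_net(state) -> iterable of (move, prior probability)
    #   value_net(state)  -> estimated chance of winning from this state
    #   apply_move(state, move) -> next state

    class Node:
        def __init__(self, prior):
            self.prior = prior        # move probability from the policy network
            self.visits = 0
            self.value_sum = 0.0
            self.children = {}        # move -> Node

        def q(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def select_child(node, c_puct=1.0):
        # Prefer moves with high mean value, plus an exploration bonus that
        # favours high-prior, rarely visited moves.
        total = sum(c.visits for c in node.children.values())
        def score(child):
            return child.q() + c_puct * child.prior * math.sqrt(total) / (1 + child.visits)
        return max(node.children.items(), key=lambda kv: score(kv[1]))

    def simulate(state, root, policy_net, value_net, apply_move):
        """One search simulation: descend the tree, expand a leaf, back up its value."""
        node, path = root, [root]
        while node.children:
            move, node = select_child(node)
            state = apply_move(state, move)
            path.append(node)
        # Expand the leaf with the policy network's move probabilities and
        # evaluate it with the value network instead of a random rollout.
        for move, prior in policy_net(state):
            node.children[move] = Node(prior)
        value = value_net(state)
        for n in path:
            n.visits += 1
            n.value_sum += value

Running many such simulations and then choosing the most-visited move at the root captures, in miniature, how learned networks can focus and strengthen the search.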

External Links

The matches played against the reigning three-times European Go Champion, Fan Hui, are available to view.

FAQ