Update: AlphaGo won the historic match against international Go champion Lee Sedol, 4 games to 1, and took home the $1 million prize in South Korea on March 15. Sedol has publicly apologized for losing, "tapping an undeniable melancholy" within the Go community over the computer's win. But before bowing, the Go champ came back from a 3-0 deficit to win game four — suggesting that AI, like humans, can make mistakes, and that there's something to be said for human resilience.
The game of Go is considered by some to be a mystical experience. It’s said the 2,500-year-old Chinese game was brought down from the heavens by a king to help sharpen his son’s mind. Even without divine origins, the grid-lined board holds Taoist resonance — two players fill up an empty board with black and white pieces, struggling to capture territory. The game is a slow contention between power and balance: All pieces are equal in value. At any moment it’s hard to tell which player is winning.
Go was slowly embraced by Buddhist and Confucian thought as a reflection of higher powers of the universe. Today it’s also hugely popular in Japan and South Korea, played by business executives and schoolchildren as a tool for strategic thinking and spiritual practice, and covered avidly by sports media. And for good reason: Go is known as the most complex game ever developed by humans.
“If played at one move per second, the longest possible game would outlast the 100 trillion years left for the universe,” proclaims Peter Shotwell in Go! More Than a Game.
If eternity really is found in the hearts of men, it may look something like a game of Go.
And Google just won it.
The Scene In Seoul
On March 9, a new artificial-intelligence program, AlphaGo — created by DeepMind, a Google-acquired AI firm — faced off against Lee Sedol, a 32-year-old world-class South Korean Go champion and “the Roger Federer of Go,” according to DeepMind founder and engineer Demis Hassabis.
The ongoing five-game series, played in Seoul and livestreamed nightly on YouTube, is a surprisingly high-publicity contest for a computer system that wasn’t supposed to be possible for another 10 years. Sedol and the founders of DeepMind were front-page news in South Korea on game day. More than 90,000 people worldwide watched the livestream of the first match on March 9 — and at least that many saw AlphaGo win.
If it goes on to win the whole series, AlphaGo will do what many have long considered impossible: beat humans at their own best game.
Go is not chess, aficionados will tell you — it’s exponentially more complex. (Recall Russell Crowe’s John Nash playing, and losing, a Go game in the film version of A Beautiful Mind — rumor has it that a cut scene involves the defeated genius furiously declaring, “I prefer chess.”) Go is most often played on a 19-by-19 board, meaning the first move alone has 361 possible options. The branching factor is huge — within five moves, the game can have unfolded in any of nearly 6 trillion ways.
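That arithmetic is easy to check. A quick sketch — counting raw move sequences on open intersections, and deliberately ignoring captures, illegal moves, and board symmetries, so these are loose upper-bound figures:

```python
# Rough opening combinatorics for Go, assuming each move simply
# occupies one previously open intersection (no captures, no symmetry).
def opening_positions(board_size=19, depth=5):
    points = board_size * board_size  # 361 intersections on a 19x19 board
    total = 1
    for move in range(depth):
        total *= points - move  # each move removes one open point
    return total

print(opening_positions(19, 1))  # 361 options for the first move
print(opening_positions(19, 5))  # roughly 5.96 trillion five-move sequences
```

Chess, by comparison, offers only 20 possible first moves — which is why brute-force search worked there and not here.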
"Go is a game primarily about intuition and feel rather than brute calculation, which is what makes it so hard for computers to play well," said DeepMind founder Hassabis.
It’s also one of the last pastimes not dominated by machines. For now.
Changing the Game
Computer scientists have been trying to catch up to the human brain for years. The most famous breakthrough came nearly 20 years ago, when IBM’s Deep Blue supercomputer beat world chess champion Garry Kasparov. For chess and similar games, AI systems relied on brute-force search — evaluating millions of possible positions and choosing the best move.
With Go, a game with trillions of possibilities, engineers at DeepMind developed something new — a hybrid of reinforcement learning and something called “deep learning,” which uses artificial neural networks loosely modeled on pathways in the brain. After training on millions of positions from real-world human games, the machine refined its play through trial and error, playing game after game against itself.
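The shape of that two-phase pipeline — learn from human examples, then improve through self-directed trial and error — can be caricatured in a few lines. This is a toy sketch on an invented one-move “game,” not DeepMind’s method: AlphaGo uses deep neural networks and tree search, and the winning move here is an assumption made purely for illustration.

```python
import random

# Toy stand-in for a game: one of three moves wins (hypothetical).
WINNING_MOVE = 2

def supervised_phase(expert_games):
    """Phase 1 (caricature): learn move frequencies from 'expert' play."""
    counts = [1.0, 1.0, 1.0]  # small prior so no move starts at zero
    for move in expert_games:
        counts[move] += 1
    total = sum(counts)
    return [c / total for c in counts]

def reinforcement_phase(policy, rounds=5000, lr=0.01):
    """Phase 2 (caricature): play repeatedly and nudge the policy
    toward moves that won -- a crude trial-and-error update."""
    policy = policy[:]
    for _ in range(rounds):
        move = random.choices(range(3), weights=policy)[0]
        reward = 1.0 if move == WINNING_MOVE else -1.0
        policy[move] = max(1e-3, policy[move] + lr * reward)
        s = sum(policy)
        policy = [p / s for p in policy]  # keep it a probability distribution
    return policy

random.seed(0)
expert_games = [random.choice([0, 1, 2]) for _ in range(100)]  # noisy "experts"
policy = supervised_phase(expert_games)
policy = reinforcement_phase(policy)
print(max(range(3), key=lambda m: policy[m]))  # the move the system settled on
```

Even with useless "expert" data, the trial-and-error phase concentrates the policy on the winning move — which is the basic reason self-play let AlphaGo surpass the human games it started from.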
AlphaGo adopts more intuitive functions than previous AI systems: learning from experience; matching patterns; narrowing each turn to a small set of candidate moves (“only about 200,” according to Hassabis); and sketching a best decision accordingly.
“AlphaGo is doing what a human grand master would do,” said Hassabis at the American Association for the Advancement of Science annual meeting in Washington, D.C., in February.
And he’s right — this shorthand doesn’t sound like a machine. It sounds like a very smart human. Or something smarter.
“Human brains are really good at doing lots of things, but what they’re not really good at is doing what computers do. They’re not really good at simulating every possible move,” Jason E. Summers, a scientist and computational simulation researcher, told Sojourners.
In the days leading up to the match, Go champion Lee Sedol picked up another nickname, a poetic tribute to noble futility — “John Henry.”
Does AI Dream … ?
Where human brains do uniquely excel is the realm of narrative. Artificial intelligence is both feared and mythologized in the world of technological development — a function, maybe, of our deep human craving for meaning. We want reassurance that human spirit is boldly pioneering a new future, or we want proof that human hubris is sowing our own destruction. Or both at once.
Questions of faith and the soul occupy a brain space very close to questions of intuition and meaning. If an AI system can win Go, one of the most profound games humankind has ever developed, can it also master the rules and inputs of a holy text? Can AI operate on the spirit of the law, as well as the letter? Can it comprehend the mind of God, or become a new god over us? And will the results be sublime or dreadful?
“Until very recently, the concept of an intelligent consciousness without a body was relegated to the world of religion,” wrote Danny Duncan Collum, writing professor at Kentucky State University and contributing editor for Sojourners magazine.
And religious people especially tend to view AI with fear or discomfort.
“There’s some concern over the prominent humanist bias put forth by Ray Kurzweil, that we’re going to upload our brains to machines, that we’re elite machines — that the totality of humanity can be represented by a machine,” Summers said.
The scientist elaborated on those concerns in a blog post about faith and the singularity, explaining, "Christians [in particular] often reject Strong AI on the theological ground of the special anthropological status of human beings as the bearers of Imago Dei [the image of God]."
But it’s also a uniquely human tendency to extrapolate a handful of inputs into sprawling systems of meaning. What AlphaGo is doing is a breakthrough achievement, definitely — but so far, a very specific, technological one.
“The machines don’t really have goals [themselves],” Summers said. “I’d say the DeepMind system doesn’t actually have intuition. The more you know about what’s being done, the more you realize that they’re tremendously impressive systems. But the difference between them and an infant is significant.”
So why all the fanfare?
“The most important thing is how we did it,” Hassabis said at AAAS, noting that the software is not specifically tailored to Go.
Ideally, he said, DeepMind wants to achieve “out-of-the-box” thinking for any machine — one you can take out of the box, give any goal, and press “go.” The exciting implication is that with its superhuman computational powers, AI may help us solve problems we don’t even see yet.
(Religion professor and WIRED contributor Alan Levinovitz has a more succinct take: Anthropomorphized algorithms make for a better story.)
Still, can AI be built to answer a problem set of “do the right thing,” or “work for peace,” or even just “be kind”? In short, could a system teach itself what’s good and ethical, rather than just what wins?
To Summers, the answer is yes, at least in part — deep learning also relies on observation. But his reasoning points to a much less grand narrative, albeit one that echoes Hassabis — how we build our AI systems matters.
The goals of any reinforcement learning system like AlphaGo are defined by outside users, and Summers points to deep learning networks like Google search algorithms and Facebook feeds as examples of this already in play.
“Values are externally imposed by the person who builds the system,” Summers said. “No system developed is neutral. If you build a legal informatics system and it tries to learn the law, there’s an implicit value structure built into that AI system. Its goals have ethical content, whether the person thinks about it or not.”
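Summers’ point can be made literal: in a learning system, someone has to write down the objective, and the objective is where the values live. A toy sketch with two hypothetical reward functions for the same content-recommendation system — every name here is invented for illustration:

```python
# Two hypothetical objectives a builder might hand the same system.
# Neither is neutral: each one encodes a judgment about what matters.

def reward_engagement(session):
    """Optimizes for time-on-site, regardless of what the user reads."""
    return session["minutes_spent"]

def reward_wellbeing(session):
    """Optimizes for the user's own after-the-fact rating of the session."""
    return session["user_satisfaction"]

session = {"minutes_spent": 47, "user_satisfaction": 2}
print(reward_engagement(session), reward_wellbeing(session))  # prints: 47 2
```

A system trained on the first function would call this session a success; trained on the second, a failure. The ethical content is in the code before learning ever begins.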
Maybe this is the simplest answer to the spiritual implications of AI: Machines, whatever form they take, will be infused with the morals of their masters. For now anyway, it’s our values, wisdom, and ethics that count.
Back In Seoul …
So, who’s going to win? Pragmatically, at least, the stakes are high — how AlphaGo performs will tell its developers what to change; how Lee Sedol performs will determine whether he walks away $1 million richer and an even bigger international celebrity.
But “Sedol vs. AlphaGo” is also, inevitably, a contest infused with meaning — as the announcers reiterated throughout the first game night, it pits human intuition against machine-assisted human ingenuity. And that struggle seems worthy of the world’s most challenging game.
In A Beautiful Mind, John Nash’s classmate taunts him just before winning the match: “What if you never find your great idea, John? ... What if you lose?”
I’m not sure we’re ready for the answer, but we’re about to find out.