Why It's A Good Thing That A Machine Just Beat A Human At Go


Lee Sedol shakes hands with Demis Hassabis, CEO of DeepMind, after finishing the final match in Seoul.

Jeon Heon-Kyun / Getty Images

For a moment, it seemed like the machine might lose, again. Early in its fifth game of Go against top South Korean player Lee Sedol, and two days after chalking up a loss, Google’s AlphaGo made what its maker called yet another “bad mistake.”

A second loss wouldn’t have changed the fact that the artificial intelligence–powered software had already won the five-match tournament overall. But in the final showdown, the program’s misstep still made for nail-biting gameplay (very slow nail-biting; the match lasted six hours). “Maybe Alpha is really just sort of remembering what happened in game four and getting a bit upset,” commentator and American Go professional Michael Redmond joked after a series of confounding moves.

AlphaGo doesn’t have emotions, of course, so it didn’t feel joy or an adrenaline rush when it ultimately eked out a victory in what DeepMind’s founder called “the most exciting and stressful” of the five games on Tuesday. But to many artificial intelligence experts, Go players, and viewers who tuned in to the live-streamed tournament in Seoul, South Korea, it was impressive and a little awe-inducing to watch a supercomputer dominate a board game widely regarded as one of the world’s most complex.

Even if you were rooting for the machine, it was hard to miss the sadness in Lee’s words at the post-match press conference.

“Basically, I don’t necessarily think AlphaGo is superior to me, I don’t necessarily think that,” Lee told reporters. “I believe that there’s more a human could do to play against artificial intelligence — that’s why I feel a little bit [regretful].” Regarding conventional beliefs about how Go should be played, Lee said: “I have come to question them a little bit.”


On Go’s 19-by-19 grid, players claim territory by setting down black and white stones; every position offers an average of about 200 possible moves. Until this year, outside experts had thought the ancient game was at least a decade away from being mastered by a supercomputer, unlike chess, checkers, backgammon, and Jeopardy, all of which computers had already conquered.

“A lot of people think Go is the last frontier of human board games. Of all board games, I don’t know of anything that has a bigger ‘search space’ than Go, so Go has the biggest possibilities — a practically infinite number,” Fei-Fei Li, director of the Artificial Intelligence Lab at Stanford University, told BuzzFeed News, referring to the vastness of the Go grid. “That’s why it’s a hard game, and humans take years to train on it and it takes a lot of brainpower — but this is where machines are good.”
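To get a rough feel for that “search space” point, here is a back-of-the-envelope sketch. The branching factors and game lengths below (roughly 250 legal moves over about 150 turns for Go, versus about 35 moves over 80 turns for chess) are commonly cited ballpark figures rather than numbers from the article, and the calculation is purely illustrative.

```python
# Back-of-the-envelope game-tree size: branching_factor ** typical_game_length.
# The figures below are commonly cited ballpark values used only for
# illustration; real games vary widely in length and legal-move counts.
GAMES = {
    "chess": {"branching_factor": 35, "typical_length": 80},
    "go (19x19)": {"branching_factor": 250, "typical_length": 150},
}

for name, g in GAMES.items():
    tree_size = g["branching_factor"] ** g["typical_length"]
    # Report only the order of magnitude; the full integers are astronomically large.
    order = len(str(tree_size)) - 1
    print(f"{name}: roughly 10^{order} possible games")
```

Run as-is, the sketch prints on the order of 10^123 possible chess games and 10^359 possible Go games, which is the gap behind Li’s “practically infinite” description.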

AlphaGo is the creation of DeepMind Technologies, a British artificial intelligence company that Google acquired in 2014. In a January Nature paper, the DeepMind team revealed that their program had swept European champion Fan Hui in a five-game match. In short, the programmers first trained AlphaGo to predict human moves by feeding it millions of moves from games played by expert humans, then had it refine that skill by playing millions of simulated games against itself.
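As a loose illustration of that two-stage recipe (imitate human games, then improve through self-play), here is a toy sketch in Python. It is emphatically not DeepMind’s pipeline: the game is a trivial “race to 10” stand-in for Go, the policy is a lookup table of move counts rather than a neural network, and the example games are made-up placeholders for the human game records.

```python
import random
from collections import Counter, defaultdict

# Toy sketch of the two-stage recipe described above, NOT DeepMind's actual
# system: (1) learn a move-prediction policy from example games, then
# (2) refine it through self-play. The "game" is a race to 10 (players
# alternately add 1, 2, or 3; whoever first pushes the total to 10 or more
# wins), standing in for Go purely for illustration.
random.seed(0)
TARGET, MOVES = 10, (1, 2, 3)

# policy[state] counts how often each move has been chosen from that state;
# sampling in proportion to the counts gives a stochastic policy.
policy = defaultdict(Counter)


def choose(state):
    counts = policy[state]
    if not counts:                      # unseen position: play uniformly at random
        return random.choice(MOVES)
    moves, weights = zip(*counts.items())
    return random.choices(moves, weights=weights)[0]


def play_game():
    """Play one game with the current policy; return each side's moves and the winner."""
    state, player = 0, 0
    history = {0: [], 1: []}
    while state < TARGET:
        move = choose(state)
        history[player].append((state, move))
        state += move
        player ^= 1
    return history, player ^ 1          # the player who just moved pushed the total past 10 and won


# Stage 1: imitation. These hand-written games are placeholders for the
# "millions of moves from games played by humans" mentioned in the article.
example_games = [
    [(0, 2), (2, 1), (3, 3), (6, 1), (7, 3)],
    [(0, 1), (1, 2), (3, 3), (6, 2), (8, 2)],
]
for game in example_games:
    for state, move in game:
        policy[state][move] += 1

# Stage 2: self-play. Reinforce whatever moves the winning side played.
for _ in range(5000):
    history, winner = play_game()
    for state, move in history[winner]:
        policy[state][move] += 1

print("move preferences from the opening position:", dict(policy[0]))
```

Even this crude loop captures the shape of the idea: imitation gives the policy plausible moves to start from, and self-play then skews it toward moves that actually win, with no further human input.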

AlphaGo isn’t perfect, as evidenced by the apparently fatal mistake it made in game four and its stumble in the fifth game against a tesuji (a clever, unexpected move) from Lee, according to DeepMind CEO and co-founder Demis Hassabis. “It’s a little bit like a human,” Li said. “If we haven’t learned something, we sometimes don’t know how to make the right judgment.” Still, AlphaGo can learn from its mistakes — and learn much more quickly than a human can.

It also doesn’t get intimidated, tired, or distracted. “When it comes to [AlphaGo’s] skills, I don’t think AlphaGo is superior,” said Lee, the 18-time world champion, “but when it comes to psychological skills, yes, [AlphaGo] is definitely superior.”

Does this mean there is no game left for a supercomputer to conquer? Not exactly, Li said, pointing out that Go, for all its complexity, still follows “strong logic and a clear objective”: The pieces’ positions on the grid are visible to both players at all times. That’s not always the case; in mahjong, for instance, four players conceal, strategically reveal, discard, and pick up tiles, all while tracking what their opponents discard and draw. When gameplay relies on emotions — like deceiving players or intuiting what they might be thinking — computers have a long way to go, Li said.

Li emphasized that, contrary to popular portrayals of artificial intelligence, AlphaGo’s stunning success doesn’t make Terminator-like, humans-are-doomed scenarios any less far-fetched. But the fact that AlphaGo exists, and performs so well, is a human achievement of its own.

“It shows how good artificial intelligence has become, and I think there’s a lot of optimism about the power of A.I., and hopefully it will get more people to do A.I., use A.I., start companies and so on,” Li said. “It’s a great thing.”
