AI, Go, and Philosophical Argument
After more than four hours of tight play and a rapid-fire endgame, Google’s artificially intelligent Go-playing computer system has won a second contest against grandmaster Lee Sedol, taking a two-games-to-none lead in their historic best-of-five match in downtown Seoul. The surprisingly skillful Google machine, known as AlphaGo, now needs only one more win to claim victory in the match.
This article in Wired about the artificial intelligence Google is using to play Go is fascinating (via kottke). AlphaGo is coming up with “surprising” moves, and no one knows exactly what it will do next:
With its 19th move, AlphaGo made an even more surprising and forceful play, dropping a black piece into some empty space on the right-hand side of the board. Lee Sedol seemed just as surprised as anyone else. He promptly left the match table, taking an (allowed) break as his game clock continued to run. “It’s a creative move,” Redmond said of AlphaGo’s sudden change in tack. “It’s something that I don’t think I’ve seen in a top player’s game.”
The AI is first fed thousands of human moves, and then builds on that foundation through machine learning:
Hassabis and his team originally built AlphaGo using what are called deep neural networks, vast networks of hardware and software that mimic the web of neurons in the human brain. Essentially, they taught AlphaGo to play the game by feeding thousands upon thousands of human Go moves into these neural networks.
But then, using a technique called reinforcement learning, they matched AlphaGo against itself. By playing match after match on its own, the system could learn to play at an even higher level—perhaps at a level that eclipses the skills of any human. That’s why it produces such unexpected moves….
Once the system is trained using those machine learning techniques, it plays entirely on its own…
During the match, the commentators even invited DeepMind research scientist Thore Graepel onto their stage to explain the system’s rather autonomous nature. “Although we have programmed this machine to play, we have no idea what moves it will come up with,” Graepel said. “Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands—and much better than we, as Go players, could come up with.”
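The two training phases Graepel describes, supervised learning from human games followed by self-play reinforcement, can be caricatured in a few lines. Here is a toy sketch, with an entirely hypothetical three-move "game" standing in for Go; the move names, reward table, and update rule are all invented for illustration and bear no resemblance to AlphaGo's actual networks:

```python
import random
from collections import defaultdict

# Toy stand-in for AlphaGo's two training phases (hypothetical simplification):
# 1) supervised phase: count moves from "human" game records
# 2) reinforcement phase: self-play, nudging the policy toward winning moves

random.seed(0)

MOVES = ["corner", "side", "center"]

def supervised_phase(human_games):
    """Build an initial move-preference table from human game records."""
    counts = defaultdict(float)
    for game in human_games:
        for move in game:
            counts[move] += 1.0
    total = sum(counts.values())
    return {m: counts[m] / total for m in MOVES}

def play_out(policy):
    """Sample one move; the toy 'environment' rewards 'corner' most often."""
    move = random.choices(MOVES, weights=[policy[m] for m in MOVES])[0]
    win_prob = {"corner": 0.6, "side": 0.5, "center": 0.4}[move]
    return move, random.random() < win_prob

def reinforcement_phase(policy, episodes=5000, lr=0.01):
    """Self-play loop: reinforce moves that led to wins, dampen losses."""
    for _ in range(episodes):
        move, won = play_out(policy)
        policy[move] += lr * (1.0 if won else -1.0) * policy[move]
        total = sum(policy.values())
        policy = {m: p / total for m, p in policy.items()}
    return policy

human_games = [["corner", "side"], ["side", "center"], ["corner", "corner"]]
policy = supervised_phase(human_games)   # human "intuition"
policy = reinforcement_phase(policy)     # self-play improvement
```

The point of the sketch is only the shape of the pipeline: the human data sets the starting preferences, and the self-play loop then drifts away from them toward whatever the reward signal favours, which is why the resulting moves are "out of our hands."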
Where is the philosophical version of this AI, and when can we have it argue with itself? Will philosophical dialogues—this time the machine version—once again be the source of a new era of philosophy?
(Some previous discussion here.)
UPDATE: AlphaGo won for a third time.
Here are some thoughts as both a Go player (albeit not a very good one) and a philosopher, on the basis of a sketchy and probably faulty analogy: AlphaGo won't really get humans significantly closer to mastering Go, because of the game's sheer complexity. If philosophy is comparable, then AlphaPhilosophy might not amount to much more than a machine that is sharper at a certain argumentative game than we are – not especially satisfying if the goal of philosophy is truth.
To elaborate, Go is so complex that it can't possibly be solved in practice; brute force is not an option. This is of course why AlphaGo doesn't approach the game that way: it first has to be fed thousands of games by human players to give it a human 'intuition', pruning its move considerations (especially in the vast, vast opening) down to the kinds of moves humans would consider. Otherwise it simply cannot function efficiently. It combines this pruning with an amazing reading capacity, but the reading trees are also pruned along the lines of its human intuition. Even the remarkable shoulder hit in the second game is a move that professionals saw as the kind an amateur player might make (their assessment of its quality, they quickly realized, was mistaken). So if humans are in some way deeply mistaken about the 'truth' of Go, if our opening theory and heuristics, built to carve out a humanly understandable way of playing the game, point in the wrong direction, then it isn't clear that a machine like AlphaGo could ever really challenge this in any deep sense. For example, as to the question of whether it is best for black to open on the center point of the board (tengen), a machine like AlphaGo can almost certainly never provide the answer.
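The pruning worry can be made concrete with a toy sketch. Everything here is hypothetical: the "game" is just appending digits to a number, and the prior is a made-up stand-in for the human-trained policy; AlphaGo's real search is Monte Carlo tree search over Go positions. The sketch only illustrates how a learned "intuition" narrows the reading tree before any deep search happens:

```python
# Toy reading tree: a state is an integer, a "move" appends a digit, and the
# evaluation scores the resulting leaf. The prior (a stand-in for human-derived
# intuition) ranks candidate moves; the pruned reader explores only the top-k.

def legal_moves(state):
    return [1, 2, 3, 4, 5]

def apply_move(state, move):
    return state * 10 + move

def evaluate(state):
    return state % 7  # arbitrary leaf score for the toy game

def prior(move):
    """Made-up 'intuition': how strongly a human-trained policy likes a move."""
    return {1: 0.10, 2: 0.40, 3: 0.30, 4: 0.15, 5: 0.05}[move]

def pruned_read(state, depth, top_k=2):
    """Read ahead, but only through the top_k moves the prior favours."""
    if depth == 0:
        return evaluate(state)
    candidates = sorted(legal_moves(state), key=prior, reverse=True)[:top_k]
    return max(pruned_read(apply_move(state, m), depth - 1, top_k)
               for m in candidates)

def full_read(state, depth):
    """Unpruned brute-force reading of the same tree, for comparison."""
    if depth == 0:
        return evaluate(state)
    return max(full_read(apply_move(state, m), depth - 1)
               for m in legal_moves(state))
```

At depth 1 the pruned reader settles for a leaf score of 3 while the brute-force reader finds 5: the prior has ranked the objectively best move too low to be read at all. That is the inherited-bias worry in miniature, with the obvious caveat that AlphaGo's reinforcement phase can partly correct its prior in ways this toy cannot.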
Which isn't, of course, to say that AlphaGo teaches us nothing. It can certainly show us how to play better within this sub-game of Go that humans have carved out. And certain questions that are independent of the full logical space of the game, such as how to handle the endgame (and probably even the middlegame) of a given game state, it can probably solve outright. But the further question, "Is this endgame state the result of a game played out according to entirely mistaken beliefs about the best moves at the beginning?", is not one it can ever answer.
As for an AlphaPhilosophy: it would certainly be a very interesting machine, and perhaps valuable in some ways, but unless the logical space of philosophy is vastly less complex than that of Go, I wonder how much of philosophical value it could contribute. I suppose one might be more confident that, when it comes to philosophy, humans are better at getting the foundational premises right (or that settling that question is hopeless anyway); or one might think that philosophy is mainly about exploring logical space, which an AlphaPhilosophy would certainly help with.
AlphaGo's mastery of Go is apparently not yet complete – Lee Sedol just won after AlphaGo made a serious misplay and then subsequently lost the plot entirely.
Error 101: "philosophical dialogues" is not an analytic philosophy command — fail to compute: *terminate*