The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.
In 2016, Stockfish, the product of a decade's worth of iterative development by bright humans, stood at around Elo 3300, and yet AlphaZero blasted it with 28 wins, 72 draws, and zero losses out of 100 games after just 4 hours of playing itself.
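As a back-of-the-envelope check, the standard logistic Elo model (not anything from the paper itself) lets us translate that 28–72–0 score into an implied rating gap:

```python
import math

def expected_score(diff: float) -> float:
    """Standard Elo expectation: the score the stronger player is
    predicted to achieve given a rating advantage of `diff` points."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

def implied_diff(score: float) -> float:
    """Invert the Elo curve: the rating gap implied by an observed match score."""
    return 400.0 * math.log10(score / (1.0 - score))

# AlphaZero vs. Stockfish: 28 wins + 72 draws in 100 games,
# with a draw counting as half a point.
score = (28 + 0.5 * 72) / 100   # 0.64
gap = implied_diff(score)       # roughly +100 Elo
```

By this crude measure, a 64% score corresponds to roughly a 100-point edge, putting the 4-hour AlphaZero somewhere around Elo 3400 against that Stockfish baseline.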
But no surprise there: chess is a walk in the park relative to Go.
It’s plausible that AlphaZero is converging on “perfect” play in chess, which some have speculated corresponds to an Elo rating as low as 3600.
It also independently discovered the major chess openings. Curiously, towards the end of its run, its favorite openings were the A10 English Opening (c4 e5 g3 d5 cxd5 Nf6 Bg2 Nxd5 Nf3) and the D06 Queen’s Defense (d4 d5 c4 c6 Nc3 Nf6 Nf3 a6 g3 c4 a4). In contrast, it displayed scant interest in the Sicilian Defense, an opening beloved by many grandmasters, and when it did play it, it tended to perform relatively poorly. As the cold analytical eyes of AI get applied to more and more spheres, we will see the overturning of much “conventional wisdom.”