Authors: Cazenave, Tristan | Chen, Yen-Chi | Chen, Guan-Wei | Chen, Shi-Yu | Chiu, Xian-Dong | Dehos, Julien | Elsa, Maria | Gong, Qucheng | Hu, Hengyuan | Khalidov, Vasil | Li, Cheng-Ling | Lin, Hsin-I | Lin, Yu-Jin | Martinet, Xavier | Mella, Vegard | Rapin, Jeremy | Roziere, Baptiste | Synnaeve, Gabriel | Teytaud, Fabien | Teytaud, Olivier | Ye, Shi-Cheng | Ye, Yi-Jun | Yen, Shi-Jim | Zagoruyko, Sergey
Since DeepMind’s AlphaZero, Zero learning has quickly become the state-of-the-art method for many board games. It can be improved with a fully convolutional architecture (no fully connected layers); combined with global pooling, such an architecture yields bots that are independent of the board size. Training can be made more robust by keeping track of the best checkpoints and training against them. Building on these features, we release Polygames, our framework for Zero learning, together with its library of games and its checkpoints. We won against strong humans at the game of Hex in 19 × 19, including the human player with the best Elo rank on LittleGolem; we incidentally also won against another Zero implementation, which was weaker than humans: in a discussion on LittleGolem, Hex 19 had been said to be intractable for Zero learning. We also won in Havannah in size 8, beating the strongest player, namely Eobllor, with excellent opening moves. We further won several first places at the TAAI 2019 competitions and obtained positive results against strong bots in various games.
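The board-size independence described above can be illustrated with a minimal sketch in PyTorch. This is not Polygames' actual network (the class and parameter names below are invented for illustration); it only shows the mechanism: a trunk of convolutions, a policy head built from a 1×1 convolution (one logit per cell), and a value head that applies global average pooling so the only linear layer sees a fixed-size vector regardless of board size.

```python
import torch
import torch.nn as nn

class FCPolicyValueNet(nn.Module):
    """Illustrative sketch (not the Polygames architecture): a fully
    convolutional policy/value network. Because no layer depends on
    fixed spatial dimensions, the same weights apply to any board size."""

    def __init__(self, in_channels: int = 3, channels: int = 32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Policy head: a 1x1 convolution gives one logit per board cell.
        self.policy_head = nn.Conv2d(channels, 1, kernel_size=1)
        # Value head: global average pooling collapses H x W to a
        # fixed-size vector, so the linear layer is size-independent.
        self.value_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor):
        h = self.trunk(x)
        policy_logits = self.policy_head(h).flatten(1)  # (N, H*W)
        value = self.value_head(h)                      # (N, 1)
        return policy_logits, value

net = FCPolicyValueNet()
for size in (9, 19):  # same weights evaluate boards of different sizes
    p, v = net(torch.zeros(1, 3, size, size))
    assert p.shape == (1, size * size) and v.shape == (1, 1)
```

The loop at the end demonstrates the key point: a single set of weights produces a per-cell policy and a scalar value on both 9 × 9 and 19 × 19 inputs, which is what allows training on one board size and playing on another.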
Keywords: Zero learning, board games
Citation: ICGA Journal, vol. 42, no. 4, pp. 244-256, 2020