Adversarial search is a technique used when you are planning in an environment where another agent is planning against you, so the value of your moves depends on your opponent's responses. In artificial intelligence (AI), the term "search" in this context most often refers to searching through the possible moves of a game.
Adversarial search problems typically arise in two-player games where the players alternate moves; chess, checkers, and tic-tac-toe are quick examples. A chess opening tutorial, for instance, may teach three different opening strategies, but your next moves depend entirely on your opponent's replies. The other player's countermoves, in turn, depend on your opening moves, and so on.
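The classic algorithm for this kind of alternating-move reasoning is minimax: one player tries to maximize the game's value while the other tries to minimize it. Below is a minimal sketch for tic-tac-toe; the function names and board encoding are our own illustration, not taken from any particular library.

```python
# Minimax sketch for tic-tac-toe. The board is a flat list of 9 cells,
# each holding "X", "O", or None. "X" maximizes the value, "O" minimizes it.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for 'X': +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full with no winner: a draw
    values = []
    for m in moves:
        board[m] = player                      # try the move
        values.append(minimax(board, "O" if player == "X" else "X"))
        board[m] = None                        # undo it before the next try
    # The player to move picks the best value from their own perspective.
    return max(values) if player == "X" else min(values)

# With best play from both sides, tic-tac-toe is a draw:
print(minimax([None] * 9, "X"))  # -> 0
```

Exploring every line of play this way is exactly what makes chess and Go so much harder than tic-tac-toe: the same idea applies, but the tree of countermoves grows astronomically, which is why real engines add pruning and evaluation heuristics on top.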
Adversarial Search and Games
Adversarial search or the games that use it have intricate links to AI. Indeed, AI is making a difference in almost every sector, including gameplay. One can say that the other way around is also correct—games also play a significant role in AI research.
Pioneer AI researchers, for one, used chess to test the intelligence of their creations. The premise is that when a machine beats a human being at a game like chess, that machine has human-like intelligence.
In 1997, IBM’s supercomputer dubbed “Deep Blue” did just that. And it didn’t beat just any person, but the world chess champion Garry Kasparov. In an essay, Kasparov wrote about his first game against Deep Blue in 1996. He said that although he had played against computers numerous times, his match against Deep Blue felt different: he sensed a “new kind of intelligence across the table.” In a rematch in 1997, the chess master lost to the supercomputer. In that match, at least, Deep Blue proved better at adversarial search.
Deep Blue certainly had enormous computational power: it could evaluate around 200 million positions per second and drew on an opening book of about 4,000 positions.
In 2006, another world champion, Vladimir Kramnik, was defeated by a machine. This time, it was by Deep Fritz, a German chess computer program.
Checkers is another two-player game well suited to adversarial search. The computer program “Chinook” was developed specifically to compete for the World Checkers Championship, and in 1990 it reached its goal, becoming the first computer program to earn the right to play for the title.
In contrast to Deep Blue and Deep Fritz, however, Chinook’s knowledge of the game was not learned; its developers programmed everything by hand. Still, we can’t discount the fact that it is a powerful application: checkers has a search space of 5×10²⁰, or 500,000,000,000,000,000,000 possible positions (that’s 500 quintillion, in case you’re wondering).
AlphaGo is a computer program designed to learn and master the 3,000-year-old board game “Go.” It uses machine learning (ML) and deep neural networks and has indeed mastered the game, beating “Go” champions such as Lee Se-dol and Fan Hui.
Se-dol retired in 2019, declaring that “Even if I become number one, there is an entity that cannot be defeated.” He was referring to AI-powered “Go” opponents such as AlphaGo, which has an enormous search space.
Two-player games such as chess, checkers, and “Go” have come a long way with the help of adversarial search methods and other technologies. The rules of these games haven’t changed, yet it’s awe-inspiring that intelligent machines can now play them against humans, and people can hone their skills by practicing against those machines.