Google AI robot plays table tennis with humans in new research venture: Check what happens next

Google's AI company DeepMind has produced a robot capable of playing table tennis at an amateur human level. The result emerged from a recent research project that saw the robot take on several skilled human opponents, beating many of them before going down to the more advanced players.

The DeepMind robot played competitive matches against table tennis players of varying skill levels (beginner, intermediate, advanced, and advanced+), as determined by a professional table tennis coach. Standard table tennis rules were followed with some modifications (because the robot is physically unable to serve), and the humans each played three games against the machine. This incidentally marks the first instance of a learned robot agent reaching amateur human-level performance in competitive table tennis.

“The robot won 45% of matches and 46% of games. Broken down by skill level, we see the robot won all matches against beginners, lost all matches against the advanced and advanced+ players, and won 55% of matches against intermediate players,” the Google DeepMind website explained. 

According to the participants in the DeepMind study, the robot was a ‘fun’ and ‘engaging’ opponent that they would like to face in future games. Advanced players were, however, able to exploit weaknesses in its playing style, with some noting that the robot struggled to handle underspin. It is pertinent to note that table tennis is a physically demanding sport that requires human players to undergo years of training to achieve an advanced level of proficiency.

“Table Tennis requires years of training for humans to master due to its complex low level skills and strategic gameplay. A strategically suboptimal – but confidently executable – low level skill might be a better choice. This sets Table Tennis apart from purely strategic games such as Chess or Go,” DeepMind adds.

The development comes mere days after the company published results indicating that its AI models under development solved four out of six problems at the 2024 International Mathematical Olympiad. AlphaProof and AlphaGeometry 2 solved one problem within minutes but took up to three days for the rest, longer than the competition's time limit.
