Game, set, match: Mr Robot

Monday, Oct 19, 2015, 02:19 AM | Source: Pursuit

Denny Oetomo

Robots are leaping into the sporting arena. Social media feeds are teeming with soccer-playing robots, running, rolling and stumbling across the pitch to kick goals.

Search for “dancing robots” on YouTube and you’ll soon meet a talented troupe of androids, twirling alongside their human creators or performing their own rendition of Gangnam Style.

Cool, calm and collected: these robots are much more graceful than us. Source: YouTube/TED

Despite their dancing and soccer prowess (or lack thereof), there are many sporting pursuits our robotic friends are yet to conquer.

Inspired by his passion for tennis, Master of Engineering (Mechatronics) student Robert Chin set out to build his own tennis-playing robot as part of his final-year engineering design project.

“My ambition is to one day build a robot that could deliver a legal serve in tennis,” he says.

“With this in mind, I focussed my final year project on the crucial first step in this process: developing the algorithms that a robot could use to ‘see’ where the ball is and guess where it is going next.”

Mr Chin says that to help a robot see a ball, you would first need to think about the ball as an object that flies through the air. Robots can sense moving objects that are restricted to a single plane, such as the ground. However, complexity increases when the object moves in three-dimensional space. For example, it is easier to teach a robot to catch a rolling ball than a thrown ball.

The motion of a thrown ball is more complex than a rolling ball

“Complexity increases for thrown balls because a whole new dimension is added – height above ground. There are a greater number of possibilities of where the ball could be,” says Mr Chin.

“But fortunately, we know a lot about how balls should behave in flight. When players impart spin on the ball, it creates a characteristic curvature in the trajectory that can be modelled by equations.”

We can use equations to model the trajectory of a serve
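The flight model Mr Chin refers to can be illustrated with a short sketch. The code below is not from the project; it is a minimal, simplified example of the idea that a ball's trajectory can be computed from equations of motion. It models a spinless ball under gravity alone (a real model would also include air drag and the Magnus force that spin produces).

```python
# Minimal sketch: projectile motion of a spinless ball under gravity.
# (Illustrative only; a full model adds drag and the spin-induced
# Magnus force that curves a real tennis serve.)

def position(t, v0, g=9.81):
    """Position (x, y, z) of a ball launched from the origin at time t.

    v0: initial velocity (vx, vy, vz) in m/s; z is height above ground.
    """
    vx, vy, vz = v0
    x = vx * t                      # constant horizontal velocity
    y = vy * t
    z = vz * t - 0.5 * g * t ** 2   # gravity pulls the ball down
    return (x, y, z)

# A ball hit horizontally at 50 m/s: after 0.3 s it has travelled
# 15 m forward and fallen roughly 0.44 m.
x, y, z = position(0.3, (50.0, 0.0, 0.0))
```

Given equations like these, a robot that has estimated the ball's current position and velocity can extrapolate where the ball will be a fraction of a second later.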

Watching tennis courtside or on television, we humans use player position and bounce location as cues to guess where the ball is headed next. Robots can use these cues as well.

But with players serving at up to 263 km/h, tennis is a fast-paced game. And our vision, which helps us play the game, is a sense that is already saturated with information. A tennis-playing robot would have a glut of information to work through in a short amount of time.

“Compare this to a simple ruler. It measures one thing: length. It is a single number that a computer can process easily,” explains Dr Denny Oetomo, Mr Chin’s supervisor.

“Now let’s think of a camera frame, which consists of thousands of pixels. A video clip from a cheap webcam consists of around 30 frames per second, but a high speed camera, like one that could capture a high-speed tennis service, could have up to 5000 frames per second.”

“To detect a round object of a certain colour, the tennis ball, computational procedures extract information by scanning through thousands of pixels in one frame.”

“Compare that to measuring a single variable, such as length on a ruler, and you can see why vision is so computationally expensive.”
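Dr Oetomo's comparison can be made concrete with some back-of-envelope arithmetic. The resolution below is an assumed figure for a cheap webcam; the frame rates are those quoted above.

```python
# Rough comparison of data rates: camera-based vision vs a single measurement.
# Resolution of 640x480 is an assumed figure for a cheap webcam.

width, height = 640, 480
webcam_fps = 30        # cheap webcam, per the article
highspeed_fps = 5000   # high-speed camera, per the article

pixels_per_frame = width * height                  # 307,200 pixels
webcam_rate = pixels_per_frame * webcam_fps        # ~9.2 million values/s
highspeed_rate = pixels_per_frame * highspeed_fps  # ~1.5 billion values/s

ruler_rate = 1  # a ruler yields one number per measurement
```

Even the cheap webcam produces millions of pixel values per second that must be scanned to find the ball, against the single number a ruler provides.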

With this in mind, the age of tennis-playing robots skilled enough to challenge Roger Federer or Serena Williams is still some years away.

The human visual system is remarkable at interpreting and processing the world around us, and it may be some time before robots can match the reflexes and precision of professional athletes.

“The human brain has an extraordinary capability to learn and generalise, and we can form an internal model of events through learning and practice. The ‘practice makes perfect’ exercise allows trained athletes to react to a high-speed serve without needing to observe the trajectory of the ball. This is because they have learnt how the motions of their opponent connect to the trajectory of their serve,” explains Dr Oetomo.

To give this human ability to robots, Mr Chin employed an artificial intelligence technique called Particle Filtering.

“At every instant the robot guesses a location for where the ball might be, almost at random. It generates so many guesses that some are bound to fall close to the ball’s true position. We then measure how close each of those guesses is,” says Mr Chin.

The closer a guess is to the tennis ball’s actual location, the greater the weight that guess is given. Guesses with heavier weights are kept for the next round of guessing, filtering out the guesses, or particles, that are unlikely to represent the location of the ball. While humans do not learn tennis this way, the process helps a robot build an approximate representation of the ball’s location.

The robot guesses where the ball could be
The closest guesses are kept to create an approximate representation of the ball’s position
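The guess-weigh-resample loop described above can be sketched in a few lines. This is an illustrative one-dimensional particle filter, not Mr Chin's code: particles are random guesses at the ball's position, each is weighted by how well it agrees with a noisy measurement, and heavy guesses are kept for the next round.

```python
# Minimal 1-D particle-filter sketch (illustrative, not the project's code).
import math
import random

def particle_filter_step(particles, measurement, noise=0.5):
    # Weigh each guess: the closer to the measurement, the larger the weight.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * noise ** 2))
               for p in particles]
    # Resample: keep guesses in proportion to their weights, so unlikely
    # particles are filtered out.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
true_pos = 3.0  # the ball's actual (hidden) position
particles = [random.uniform(0.0, 10.0) for _ in range(1000)]  # random guesses

for _ in range(5):
    measurement = true_pos + random.gauss(0.0, 0.2)  # noisy camera reading
    particles = particle_filter_step(particles, measurement)
    # Small jitter so the surviving guesses spread out again between rounds.
    particles = [p + random.gauss(0.0, 0.05) for p in particles]

estimate = sum(particles) / len(particles)  # clusters near the true position
```

After a few rounds the surviving particles cluster around the ball's true position, giving the robot an approximate estimate of where the ball is.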

“My project forms just one small part of the larger picture for tennis-playing robots, which is the ‘vision’ aspect. Using a single camera, I hope that in the future this research could be fully implemented with hardware and artificial intelligence to create a robot that can strike a ball.”

Aside from creating tennis-acing robots, Mr Chin hopes his design could be applied to other sports and be used as the basis for a coaching tool.

“We could quantify how much speed and spin each tennis stroke produces, which would assist coaches in adjusting and improving their player’s techniques.”

Mr Chin’s project was featured at Endeavour, the University of Melbourne’s annual showcase of ingenious and inventive designs created by engineering and IT students.
