The dances bees use to tell each other where nearby flowers are have inspired a motion-based language that robots can use to communicate with one another in areas without reliable network connections, according to a new study published Thursday in the peer-reviewed journal Frontiers in Robotics and AI.
When bees need to tell other members of their hive where to find food, they communicate the location by wiggling their backside. The direction of the wiggle indicates the direction of the food source relative to the hive and the sun, and the duration of the dance indicates how far away it is.
The researchers decided to take this method of communicating locations and apply it to robotics. While robots usually communicate with each other using digital networks, a reliable network connection is not always possible to provide.
The researchers decided to overcome the need for a digital network by designing a visual communication system for robots using onboard cameras and algorithms that allow the robots to interpret what they see.
To test the new method, the scientists had a messenger robot supervise and instruct a handling robot to move a package in a warehouse. A human relays instructions to the messenger robot using gestures, such as a raised hand with a closed fist.
The messenger robot then interprets the gestures using cameras and skeletal tracking algorithms. The robot then conveys this information to the handling robot by positioning itself in front of the handling robot and tracing a specific shape on the ground. The orientation of the shape indicates the direction of travel and the time it takes to trace the shape indicates the distance.
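The encoding described above, with a shape's orientation giving the heading and the tracing time giving the distance, can be sketched in a few lines. This is a minimal illustration, not the study's implementation; the function name and the speed constant that converts tracing time to distance are assumptions made for the example.

```python
import math

# Assumed scaling: how many metres of travel each second of tracing encodes.
# This constant is hypothetical, not taken from the study.
METERS_PER_SECOND = 0.5

def decode_trace(orientation_deg: float, duration_s: float) -> tuple[float, float]:
    """Convert a traced shape's orientation (degrees) and tracing duration
    (seconds) into an (dx, dy) travel vector in metres, analogous to how
    a waggle dance encodes direction and distance."""
    distance = duration_s * METERS_PER_SECOND
    theta = math.radians(orientation_deg)
    return (distance * math.cos(theta), distance * math.sin(theta))
```

For example, a shape oriented at 0 degrees traced over four seconds would decode to a two-metre move straight ahead under the assumed scaling.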
In computer simulations, the robots interpreted the gestures correctly 90% of the time, while in real-life tests with real robots and human volunteers, the robots got it right 93.3% of the time.
“This technique could be useful in places where communication network coverage is insufficient and intermittent, such as robot search-and-rescue operations in disaster zones or in robots that undertake spacewalks,” said Prof. Abhra Roy Chowdhury of the Indian Institute of Science, senior author of the study, in a blog article on Frontiers.
“This method depends on robot vision through a simple camera, and therefore it is compatible with robots of various sizes and configurations and is scalable,” added Kaustubh Joshi of the University of Maryland, the first author of the study.
"The applications of this framework in the real world are limitless."
Kaustubh Joshi and Abhra Roy Chowdhury
In the study, the scientists added that future research could examine having the robots recognize the intensity of human gestures and train them to better estimate the trajectory they need to take. "The applications of this framework in the real world are limitless," the scientists wrote.