*Batty togetherness*

Experts on bats have been wondering why the flying mammals hunt in groups.

Israel bat colonies (photo credit: EZRA HADAD)
Experts on bats have been wondering why the flying mammals hunt in groups. Now, recordings by Tel Aviv University researchers of bat calls during more than 1,100 interactions among them have provided the answer. They reported their findings in a recent issue of Current Biology.
When a bat seeks food at night, it depends on its sonar echoes to find and catch its prey. As Dr. Yossi Yovel and student Noam Cvikel of the zoology department put it, “It’s the ‘Bamba effect.’ In a dark theater, when somebody starts eating the peanut snack, everybody knows somebody is eating it and more or less where he is. Bats work in a similar way.”
When a bat locates a swarm of insects, so do other bats near it. Bats can use their active sonar to identify an insect from less than 10 meters away, but can hear when another bat identifies an insect from a distance of 100 meters, reducing the time they need to spend hunting.
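A rough way to see why this helps (a back-of-the-envelope illustration, not a figure from the study, and assuming roughly circular detection zones): the area a bat can monitor grows with the square of its detection range, so extending that range tenfold by listening to other bats expands the searched area roughly a hundredfold.

$$\frac{A_{\text{eavesdropping}}}{A_{\text{own sonar}}} \approx \frac{\pi\,(100\ \text{m})^2}{\pi\,(10\ \text{m})^2} = 100$$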
The TAU researchers attached tiny sensors to the backs of bats and recorded the sounds the bats encountered. The exercise wasn’t easy: the devices fell off within a week, and the researchers eventually recovered only 40 percent of them.
Yovel said he could tell from the recordings when a bat was attacking prey and when it was “connected” with another bat instead. The team concluded that bats “connect” to one another’s sonar calls to increase their chances of finding food. But there is a trade-off: while a bat is attending to others, it cannot concentrate as much on insects flying past it.
ROBOTS COULD LEARN FROM YOUTUBE
How would you like your breakfast omelet to be prepared perfectly by a robot that learned how to do it by “watching” YouTube videos? It might sound like science fiction, but a University of Maryland team has just made a significant breakthrough that brings this scenario one step closer to reality.
Researchers at its Institute for Advanced Computer Studies partnered with a scientist at Australia’s National Information Communications Technology Research Center of Excellence to develop robotic systems able to teach themselves. Specifically, these robots are able to learn the intricate grasping and manipulation movements required for cooking by watching online cooking videos. The key breakthrough is that the robots can “think” for themselves, determining the best combination of observed motions that would allow them to efficiently accomplish a given task. The work was presented at the recent Association for the Advancement of Artificial Intelligence Conference in Austin, Texas.
The researchers achieved this milestone by combining approaches from three distinct research areas: artificial intelligence (the design of computers that can make their own decisions); computer vision (the engineering of systems that can accurately identify shapes and movements); and natural language processing (the development of robust systems that can understand spoken commands).
Although the underlying work is complex, the team wanted the results to reflect something practical and relatable to people’s daily lives.

“We chose cooking videos because everyone has done it and understands it,” said computer science Prof. Yiannis Aloimonos. “But cooking is complex in terms of manipulation, the steps involved and the tools you use. If you want to cut a cucumber, for example, you need to grab the knife, move it into place, make the cut and observe the results to make sure you did it properly.”
One key challenge was devising a way for the robots to analyze individual steps appropriately, while gathering information from videos that varied in quality and consistency.
The robots needed to be able to recognize each distinct step, assign it to a “rule” that dictates a certain behavior and then string together these behaviors in the proper order.
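To make that idea concrete, here is a toy sketch (a hypothetical illustration, not the Maryland team’s actual system): recognized steps from a video are looked up in a small “vocabulary” of rules and strung together, in order, into a behavior plan. All names here, including ACTION_RULES, build_plan and the grasp and motion labels, are invented for illustration.

```python
# Toy sketch, not the Maryland team's actual system: map recognized video
# "steps" to action rules and string them into an ordered behavior plan.

# Hypothetical action vocabulary: each recognized step maps to a rule
# dictating a grasp type and a motion primitive.
ACTION_RULES = {
    "grab_knife": {"grasp": "power", "motion": "reach_and_close"},
    "position":   {"grasp": "power", "motion": "move_above_target"},
    "cut":        {"grasp": "power", "motion": "slice_downward"},
    "observe":    {"grasp": None,    "motion": "look_at_target"},
}

def build_plan(recognized_steps):
    """String recognized steps into an ordered plan, skipping any step
    that has no known rule (e.g. one misread from a noisy video)."""
    return [(step, ACTION_RULES[step])
            for step in recognized_steps
            if step in ACTION_RULES]

# Example: steps a vision system might extract from a cucumber-cutting clip.
steps_from_video = ["grab_knife", "position", "cut", "observe"]
for name, rule in build_plan(steps_from_video):
    print(f"{name}: grasp={rule['grasp']}, motion={rule['motion']}")
```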
“We are trying to create a technology so that robots eventually can interact with humans,” said team member Cornelia Fermüller. “They need to understand what humans are doing. For that, we need tools so that the robots can pick up a human’s actions and track them in real time.”
Aloimonos and Fermüller compare these individual actions to words in a sentence. Once a robot has learned a “vocabulary” of actions, it can string them together in a way that achieves a given goal. In fact, this is precisely what distinguishes the team’s work from previous efforts.
“Others have tried to copy the movements. Instead, we try to copy the goals. This is the breakthrough,” Aloimonos explained. This approach allows the robots to decide for themselves how best to combine various actions, rather than reproducing a predetermined series of actions.
While robots have for decades been used to carry out complicated tasks such as vehicle assembly, these must be carefully programmed and calibrated by human technicians.
Self-learning robots could gather the necessary information by watching others, which is the same way humans learn.
“By having flexible robots, we’re contributing to the next phase of automation. This will be the next industrial revolution,” Fermüller said. “We’ll have smart manufacturing environments and completely automated warehouses. It would be great to use autonomous robots for dangerous work, such as to defuse bombs and clean up nuclear disasters. We have demonstrated that it is possible for humanoid robots to do our human jobs.”