The shortage of caregivers for seriously disabled people is a major problem in Israel and around the world. Robots have been suggested to help, but until now only one-armed machines have been developed in the lab as assistive-dressing robots, and research has shown that these can be uncomfortable or impractical for the person being cared for.
The shortage of caregivers is a pressing challenge worldwide because of falling birth rates combined with rising life expectancy. According to the World Health Organization, by 2030 one in six people in the world will be aged 60 years or over, and two decades later the number of people aged 60 and over will double.
Scientists in the UK have developed a scheme that can mimic the two-handed movements of caregivers as they dress an individual. Prof. Jihong Zhu, a robotics researcher at the University of York’s Institute for Safe Autonomy, has designed a two-armed assistive-dressing model, an approach that had not been attempted in previous research. He said that specific actions are required to reduce discomfort and distress for the individual in care.
It is thought that this technology could be significant in the social-care system, allowing caregivers to spend less time on practical tasks and more on the health and mental well-being of the individuals in their care.
Zhu gathered detailed information on how caregivers moved during a dressing exercise, allowing a robot to observe and learn from human movements and then, through artificial intelligence, to generate a model that mimics how human helpers perform the task. This gave him and his team enough data to show that two hands are needed for dressing, not one, as well as information on the angles the arms make and the need for a human to intervene to stop or alter certain movements.
The team members published their findings in the journal IEEE Transactions on Robotics under the title “Do You Need a Hand? – A Bimanual Robotic Dressing Assistance Scheme.”
“We know that practical tasks, such as getting dressed, can be done by a robot, freeing up a care worker to concentrate more on providing companionship and observing the general well-being of the individual in their care,” said Zhu. “It has been tested in the lab, but for this to work outside of the lab we really needed to understand how caregivers did this task in real time.”
They used a method called learning from demonstration, which means that an expert is not needed to program the robot; someone simply demonstrates the required motion, and the robot learns that action. It was clear that caregivers needed two arms to properly attend to the needs of individuals with different abilities.
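As a rough illustration of the idea (not the team’s actual pipeline), learning from demonstration can be as simple as recording several human demonstrations of the same motion and combining them into one trajectory the robot replays. The function name and toy waypoints below are hypothetical:

```python
# Minimal sketch of learning from demonstration: several caregiver
# demonstrations of the same dressing motion are recorded as (x, y, z)
# waypoint lists, and the robot "learns" the motion by averaging the
# corresponding waypoints. Real systems use richer models, but the
# principle - demonstrate, then generalize - is the same.

def learn_from_demonstrations(demos):
    """Average several demonstrated trajectories into one learned motion.

    demos: list of trajectories; each trajectory is a list of (x, y, z)
    waypoints sampled at the same time steps.
    """
    n_steps = len(demos[0])
    learned = []
    for t in range(n_steps):
        # Mean position across all demonstrations at time step t
        points = [demo[t] for demo in demos]
        mean = tuple(sum(axis) / len(points) for axis in zip(*points))
        learned.append(mean)
    return learned

# Two slightly different demonstrations of a sleeve-guiding motion
demo_a = [(0.0, 0.0, 0.5), (0.1, 0.00, 0.5), (0.2, 0.10, 0.5)]
demo_b = [(0.0, 0.0, 0.5), (0.1, 0.02, 0.5), (0.2, 0.08, 0.5)]

learned_motion = learn_from_demonstrations([demo_a, demo_b])
```

The appeal of this approach, as the article notes, is that no programming expertise is required of the caregiver: the demonstrations themselves are the specification.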
They identified a key feature – the angle of the elbow, which affects the dressing action – and proposed an optimal strategy for the interactive robot based on that feature.
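For illustration, an elbow angle like the one the team uses as a feature can be computed from tracked shoulder, elbow, and wrist positions. The function and coordinates below are hypothetical, not taken from the paper:

```python
# Sketch: the interior elbow angle from three tracked joint positions.
# A dressing strategy could use this angle to decide how to approach
# the arm (e.g. a straight arm versus a bent one).
import math

def elbow_angle(shoulder, elbow, wrist):
    """Interior angle at the elbow, in degrees."""
    u = [s - e for s, e in zip(shoulder, elbow)]   # elbow -> shoulder
    v = [w - e for w, e in zip(wrist, elbow)]      # elbow -> wrist
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    # Clamp to [-1, 1] to guard against floating-point drift
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# A fully extended arm: shoulder, elbow and wrist in a straight line
straight = elbow_angle((0, 0, 0), (0.3, 0, 0), (0.6, 0, 0))
# A right-angle bend at the elbow
bent = elbow_angle((0, 0, 0), (0.3, 0, 0), (0.3, 0.3, 0))
```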
“One hand holds the individual’s hand to guide them comfortably through the arm of a shirt, for example, while at the same time, the other hand moves the garment up and around or over. With the current one-armed machine model, a patient is required to do too much work for a robot to help them, moving their arm up in the air or bending it in ways that they might not be able to do.”
The team also built algorithms that made the robotic arm flexible enough to perform the pulling and lifting actions while still allowing a human to stop an action with a gentle touch, or to guide the robot’s hand left or right, up or down, without the robot resisting.
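A minimal sketch of this kind of compliant behaviour, assuming a wrist force sensor and purely illustrative thresholds and gains (none of these values come from the paper):

```python
# Hypothetical compliance sketch: the arm follows its planned motion,
# a push from a human hand displaces the commanded position in the
# direction of the push, and a sufficiently firm touch halts the
# motion entirely.

STOP_FORCE = 5.0     # N: a touch firmer than this pauses the robot
COMPLIANCE = 0.01    # m per N: how far the arm yields per unit of force

def next_command(planned_pos, external_force):
    """Return (position_command, moving) for one control step.

    planned_pos:    (x, y, z) the trajectory wants to reach next
    external_force: (fx, fy, fz) measured at the wrist sensor
    """
    magnitude = sum(f * f for f in external_force) ** 0.5
    if magnitude > STOP_FORCE:
        return planned_pos, False  # firm touch: halt the action
    # Otherwise yield in the direction the human pushes
    adjusted = tuple(p + COMPLIANCE * f
                     for p, f in zip(planned_pos, external_force))
    return adjusted, True

# No touch: follow the plan unchanged
pos1, moving1 = next_command((0.2, 0.1, 0.5), (0.0, 0.0, 0.0))
# A light sideways nudge shifts the commanded position
pos2, moving2 = next_command((0.2, 0.1, 0.5), (2.0, 0.0, 0.0))
# A firm touch stops the motion
pos3, moving3 = next_command((0.2, 0.1, 0.5), (8.0, 0.0, 0.0))
```

The key design point the article describes is exactly this split: light forces redirect the robot, firmer contact stops it outright.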
Zhu said: “Human modeling can really help with efficient and safe human-robot interaction, but it is important not only to ensure it performs the task, but also that it can be halted or changed mid-action should that be needed. Trust is a significant part of this process, and the next step in this research is testing the robot’s safety limitations and whether it will be accepted by those who need it most.”