Over one million American adults use wheelchairs fitted with robot arms to help them perform everyday tasks. These assistive robotic arms help people with day-to-day activities such as dressing, brushing their teeth, and eating. However, the devices currently on the market can be hard to control: activities that require fine manipulation, like removing a food container or feeding a person, can take a long time.
A team of Stanford researchers has developed a novel robot controller that blends two artificial intelligence algorithms, making control of assistive robotic arms more intuitive and faster than existing approaches. In experiments, it performed delicate tasks such as cutting tofu and placing it on a plate, and stabbing a marshmallow, scooping it in icing, and dipping it in sprinkles.
About the algorithms
A typical assistive robotic arm has six to seven joints, which are difficult to control with a standard two-axis joystick because the user must repeatedly switch between control modes. The first of the two algorithms removes this mode switching through dimensionality reduction: it learns a mapping from the two joystick dimensions to full arm motions, using contextual cues to determine where the user is reaching. The second algorithm enables more precise movements once the arm nears its destination, through shared control between the human and the robot: the person gives 2-D instructions on the joystick, and the robot reproduces the more sophisticated, context-dependent actions it has been trained to perform.
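The dimensionality-reduction idea can be sketched as a learned decoder that maps the two joystick axes, together with the robot's current state, to a full seven-joint command. The sketch below is an illustration only: the network shape, dimensions, and random stand-in weights are assumptions, not the authors' trained model.

```python
import numpy as np

# Hypothetical sketch of a latent-action decoder: a 2-D joystick input z,
# conditioned on the robot's current state s, is decoded into a 7-DoF
# joint-velocity action. The weights below are random stand-ins for
# parameters that would be learned from expert demonstrations.

rng = np.random.default_rng(0)

STATE_DIM = 7    # joint positions of a 7-DoF arm (assumed)
LATENT_DIM = 2   # the two joystick axes
ACTION_DIM = 7   # one velocity command per joint

# Tiny two-layer MLP decoder with placeholder "learned" weights.
W1 = rng.standard_normal((32, STATE_DIM + LATENT_DIM)) * 0.1
b1 = np.zeros(32)
W2 = rng.standard_normal((ACTION_DIM, 32)) * 0.1
b2 = np.zeros(ACTION_DIM)

def decode(z, state):
    """Map a 2-D joystick input plus robot state to a 7-D joint action."""
    x = np.concatenate([state, z])
    h = np.tanh(W1 @ x + b1)   # hidden layer
    return W2 @ h + b2         # joint-velocity command

# The user pushes the joystick along one axis; the decoder produces a
# coordinated motion across all seven joints, so no mode switching is needed.
z = np.array([1.0, 0.0])
state = np.zeros(STATE_DIM)
action = decode(z, state)
print(action.shape)  # (7,)
```

The key design point is that one low-dimensional input drives all joints at once, with the state conditioning letting the same joystick motion mean different things in different contexts.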
The latent-action algorithm is then blended with shared autonomy. In shared autonomy, the robot maintains a “set of beliefs” about what the user is trying to do and, based on the joystick input, grows more confident about the goal. For example, suppose there are two cups of water and the robot must pick up one of them: if the joystick nudges it slightly toward one cup, the robot's confidence in that goal increases. This shared control is what provides the precision.
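The belief update in the two-cups example can be illustrated with a small Bayesian-style sketch. This is an assumption-level illustration of the idea, not the authors' exact formulation: joystick inputs aligned with the direction of a goal make that goal more likely, and the robot's assistance scales with its confidence.

```python
import numpy as np

# Sketch of a shared-autonomy belief update (hypothetical positions and
# likelihood model). The robot tracks a belief over two candidate goals
# and blends its own command with the user's, weighted by confidence.

goals = np.array([[0.5, 1.0], [-0.5, 1.0]])  # positions of the two cups
belief = np.array([0.5, 0.5])                # initially uncertain

def update_belief(belief, user_input, robot_pos):
    """Raise the probability of goals the joystick input points toward."""
    directions = goals - robot_pos
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    # Likelihood grows with alignment between the input and each goal direction.
    alignment = directions @ (user_input / (np.linalg.norm(user_input) + 1e-8))
    likelihood = np.exp(5.0 * alignment)
    posterior = belief * likelihood
    return posterior / posterior.sum()

robot_pos = np.zeros(2)
user_input = np.array([0.3, 1.0])  # joystick nudged slightly toward cup 1
belief = update_belief(belief, user_input, robot_pos)

# Shared control: the more confident the robot, the more it assists
# toward the most likely goal.
confidence = belief.max()
autonomous = goals[belief.argmax()] - robot_pos
autonomous /= np.linalg.norm(autonomous)
blended = confidence * autonomous + (1 - confidence) * user_input
```

A small nudge toward one cup shifts the belief toward it, and the blended command increasingly takes over the fine positioning as confidence rises.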
These modifications make the robot more intuitive, easier to use, and faster. Still, there is much work to do before such systems can impact the lives of people with disabilities.
Source: https://hai.stanford.edu/blog/assistive-feeding-ai-improves-control-robot-arms
Related Paper: https://arxiv.org/abs/2005.03210
Shilpi is a contributor to Marktechpost.com. She is currently pursuing the third year of a B.Tech in computer science and engineering at IIT Bhubaneswar. She has a keen interest in exploring the latest technologies and likes to write about different domains and learn about their real-life applications.