Researchers at Universidade Federal de Pernambuco in Brazil Introduce a New Deep Learning-Based Framework for Robotic Arm Pose Estimation and Movement Prediction

As robots are increasingly integrated into real-world settings, developers and roboticists need to ensure that they can operate safely alongside humans. In recent years, several methods have been developed for estimating robots' poses and predicting their movements in real time.

Researchers at the Universidade Federal de Pernambuco in Brazil recently developed a novel deep learning framework to estimate the pose of robotic arms and forecast their movements. The framework was designed primarily to make robots safer when working with or around people.

To keep people working near robots safe, potential mishaps during human-robot interaction must be anticipated, and pose detection is a crucial part of that solution. To achieve this, the researchers propose a new pose detection architecture built on Self-Calibrated Convolutions (SCConv) and an Extreme Learning Machine (ELM).

Estimating a robot's current pose and accurately forecasting its future motions and intentions reduces the risk of it colliding with nearby objects. Two essential elements, an SCConv model and an ELM model, make up Sadok and his colleagues' pose estimation and movement prediction method.

The SCConv component enhances the model's overall spatial and channel dependencies. The ELM, on the other hand, is a well-established, efficient method for classifying data.
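What makes an ELM fast is that only its output layer is trained, in closed form. As a rough illustration (not the paper's implementation; the class name, hidden size, and toy data below are all assumptions), a minimal NumPy ELM classifier might look like this:

```python
import numpy as np

class ELMClassifier:
    """Minimal Extreme Learning Machine sketch: random hidden layer,
    output weights solved in closed form via the pseudoinverse."""

    def __init__(self, n_hidden=128, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        n_classes = int(y.max()) + 1
        # Hidden-layer weights are drawn at random and never trained.
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # random nonlinear feature map
        T = np.eye(n_classes)[y]           # one-hot targets
        # Only the output weights are learned, in a single least-squares solve.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# Toy usage: two well-separated 4-D blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 4)), rng.normal(2, 0.5, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
clf = ELMClassifier(n_hidden=64).fit(X, y)
acc = (clf.predict(X) == y).mean()
```

Because training is a single pseudoinverse solve rather than iterative gradient descent, an ELM head like this can be fit very quickly on top of features extracted by a convolutional backbone.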

The researchers noted that no prior studies had combined these two techniques for this application, so they decided to test whether the combination would improve their results. They also added movement prediction on top of pose detection, using recurrent neural networks to strengthen the framework.

Scenario sample: Human and Robot. Credit: Rodrigues et al.

The researchers first assembled a custom dataset of photographs of scenarios in which a robotic arm interacts with a nearby human user. They mainly used Universal Robots' UR-5 robotic arm to produce these shots.

The researchers annotated these photographs, particularly the frames containing the robotic arm. They could then use the new dataset to train SCNet, the SCConv-based component of their approach.

The researchers' main objective was to reduce the observed error compared to other well-known architectures such as VGG and ResNet. They used SCNet to extract features and applied the ELM at the end of the network. They then predicted movement using Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) models, which they view as a novel strategy for solving the problem.
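To give a flavor of the recurrent part of the pipeline, the sketch below implements a single GRU cell in NumPy and rolls it over a short sequence of pose vectors, with an untrained linear head producing a next-pose guess. The weight shapes, the head, and the 6-D "joint angle" inputs are illustrative assumptions, not the paper's trained network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state h_tilde."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        shape = (hidden_dim, input_dim + hidden_dim)
        self.Wz = rng.normal(0, 0.1, shape); self.bz = np.zeros(hidden_dim)
        self.Wr = rng.normal(0, 0.1, shape); self.br = np.zeros(hidden_dim)
        self.Wh = rng.normal(0, 0.1, shape); self.bh = np.zeros(hidden_dim)

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh + self.bz)   # update gate
        r = sigmoid(self.Wr @ xh + self.br)   # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]) + self.bh)
        return (1 - z) * h + z * h_tilde      # blend old state and candidate

def encode_sequence(cell, sequence, hidden_dim):
    h = np.zeros(hidden_dim)
    for x in sequence:
        h = cell.step(x, h)
    return h

# Toy usage: encode 5 timesteps of 6-D "joint angle" vectors.
pose_dim, hidden_dim = 6, 16
cell = GRUCell(pose_dim, hidden_dim)
seq = [np.sin(np.linspace(0, 1, pose_dim) + t) for t in range(5)]
h = encode_sequence(cell, seq, hidden_dim)
# Untrained linear head mapping the hidden state to a next-pose guess.
W_out = np.random.default_rng(1).normal(0, 0.1, (pose_dim, hidden_dim))
next_pose_pred = W_out @ h
```

In practice a framework like the one described would learn these weights from annotated motion sequences; the sketch only shows how a GRU carries pose history forward through its hidden state.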

In a series of preliminary tests, the researchers assessed the effectiveness of their framework by estimating the pose and forecasting the future motions of a UR-5 arm assisting a human user with maintenance-related chores. They found that it produced highly encouraging results, accurately identifying the pose of the robotic arm and forecasting its upcoming actions.

They consider their essential accomplishment to be a framework capable of recognizing a robotic arm's pose and movements, which will increase the arm's safety. Additionally, they confirmed the capabilities of SCConv and ELM when combined and broadened their usefulness.

Future work

The framework created by this research team can be used to increase the safety of both current and future robotic systems. The SCConv and ELM components could also be adapted and applied to other tasks, such as human pose estimation, object detection, and object classification.

The researchers hope to broaden their approach by applying the proposed models to risk assessment in a carefully controlled environment. In addition to employing deep learning forecasting models that analyze the extracted key points to detect potential risk or collision situations in human-robot collaboration, they want to use the models for joint human and robot pose estimation. To create a more reliable and realistic system for collision detection, they plan to extend the mechanism to operate in more complex environments with more humans, robots, and interactions. These additions should enable them to map potential collision scenarios even before they arise.
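One hedged way such a keypoint-based risk check might look, purely as an illustration of the idea (the keypoint coordinates and distance thresholds below are assumptions, not values from the paper):

```python
import numpy as np

def min_pairwise_distance(robot_pts, human_pts):
    """Minimum Euclidean distance between any robot keypoint and any
    human keypoint. robot_pts: (R, 3), human_pts: (H, 3)."""
    diff = robot_pts[:, None, :] - human_pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min()

def risk_level(robot_pts, human_pts, warn=0.5, stop=0.2):
    """Map the closest robot-human keypoint distance (in meters,
    illustrative thresholds) to a coarse risk label."""
    d = min_pairwise_distance(robot_pts, human_pts)
    if d < stop:
        return "stop"
    if d < warn:
        return "warn"
    return "safe"

# Toy usage: two robot keypoints and two human keypoints in 3-D.
robot = np.array([[0.0, 0.0, 0.5], [0.3, 0.0, 0.8]])
human = np.array([[1.0, 0.0, 0.5], [0.9, 0.1, 1.0]])
level = risk_level(robot, human)
```

A deployed system would feed estimated (and forecast) keypoints from both the robot and the human into a check like this every frame, escalating from monitoring to slowing to stopping as the predicted distance shrinks.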

The framework could also be extended to include human pose detection, so that human and robot poses are estimated jointly. By merging the two sets of information, the researchers could better categorize the level of risk and work on jointly forecasting both movements, preventing additional dangers in interactive settings such as an industrial plant.

This article is written as a summary by Marktechpost Staff based on the paper 'A framework for robotic arm pose estimation and movement prediction based on deep and extreme learning models'. All credit for this research goes to the researchers on this project. Check out the paper and ref post.
