A brain-computer interface (BCI) is a system that collects, analyzes, and translates brain signals into commands sent to output devices to perform desired tasks. BCIs do not use the normal neuromuscular output channels. Instead, they rely on implanted or external sensors, such as EEG electrodes, to measure cerebral activity. The idea is to turn this sensor data into a signal that can serve as computer input, allowing users to operate a computer or robot to perform tasks they cannot perform physically. This frequently requires the user to imagine performing a physical action, which produces cerebral activity that the BCI can detect and translate into computer input.
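The loop described above can be sketched in a few lines. This is a deliberately toy illustration, not the decoding used in any real BCI: the feature, threshold, and command names are all assumptions made for clarity.

```python
# Toy sketch of the basic BCI loop: a window of sensor samples is turned
# into a feature, classified, and mapped to a device command. All names
# and thresholds here are illustrative, not taken from the study.

def band_power(window):
    """Mean squared amplitude of one EEG window (a crude power feature)."""
    return sum(s * s for s in window) / len(window)

def decode(window, threshold=0.5):
    """Map a window of EEG samples to a device command."""
    return "MOVE" if band_power(window) > threshold else "REST"

# Imagined movement typically changes activity in motor-related bands,
# so a high-power window is read here as a movement command.
print(decode([0.9, -1.1, 1.0, -0.8]))  # high power -> "MOVE"
print(decode([0.1, -0.1, 0.2, 0.0]))   # low power  -> "REST"
```

Real systems replace the single threshold with a trained classifier over many channels and frequency bands, but the pipeline shape (sense, featurize, classify, command) is the same.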
The primary purpose of a BCI is to help people with neuromuscular disorders or injuries such as cerebral palsy, stroke, or spinal cord damage.
New research from the University of Texas at Austin (UT) and Switzerland’s École Polytechnique Fédérale de Lausanne (EPFL) introduces a brain-computer interface (BCI) that lets users modify the motion trajectories of a robot manipulator. The study aimed to produce assistive robots that paralyzed patients could operate through a BCI.
The team used inverse reinforcement learning (IRL) to learn a user’s preferences in as few as five demonstrations. IRL adjusted the routine’s parameters based on error-related potentials (ErrPs) decoded from EEG and EOG signals, allowing the robot to learn how near it could get to a fragile object without making the user uncomfortable.
Billard’s team investigated how a BCI could be used to modify the behavior of a semi-autonomous robot manipulator, since controlling a manipulator directly with a BCI can be time-consuming and exhausting because of the sustained concentration it demands of the user. A semi-autonomous obstacle avoidance routine was implemented in the robot’s software. To train it, the team captured users’ electroencephalogram (EEG) and electrooculogram (EOG) signals while they controlled the manipulator with a joystick. The system modifies the robot’s obstacle avoidance algorithm in response to the user’s error-related potentials (ErrPs): EEG signals detected and decoded when the robot approaches an obstacle more closely than the user expected.
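One simple way to picture the ErrP-driven adjustment is a safety margin that widens each time an error potential is decoded. The class below is a hedged sketch under assumed names, units, and step sizes; it is not the routine from the paper.

```python
# Illustrative sketch (names and values assumed, not from the study):
# the robot keeps a minimum clearance around the obstacle, and each
# decoded error-related potential (ErrP) is treated as "too close",
# widening that clearance.

class AvoidanceRoutine:
    def __init__(self, margin=0.05, step=0.02):
        self.margin = margin  # minimum clearance, in metres (assumed)
        self.step = step      # how much one ErrP widens the margin

    def on_errp(self):
        """The user's brain signalled an error: widen the safety margin."""
        self.margin += self.step

    def clamp(self, distance_to_obstacle):
        """Stop the approach once the robot reaches the margin."""
        return max(distance_to_obstacle, self.margin)

routine = AvoidanceRoutine()
routine.on_errp()  # decoded ErrP: robot came closer than the user expected
routine.on_errp()
print(round(routine.margin, 2))  # 0.09
```

The actual study learns the parameters through IRL rather than a fixed increment, but the interface is the same: decoded ErrPs drive the update, and the avoidance routine consumes the result.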
The researchers used an IRL training method to make this change. Unlike traditional RL algorithms, in which a fixed reward function scores a learning agent’s behavior, IRL infers the reward function itself, along with the policy that optimizes it. This was demonstrated with users who moved a robot manipulator left or right with a joystick in a workspace containing a fragile obstacle. In the demonstrations, the robot would attempt to avoid the obstacle as the manipulator approached it. When the robot’s behavior did not match the user’s expectation of avoiding the obstacle, the decoded ErrP signal updated the reward function and the obstacle avoidance parameters.
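A minimal sketch of the idea: represent the reward as a weighted combination of features and treat each decoded ErrP as evidence that the current weights under-penalize proximity. The feature names, weights, and learning rate below are assumptions for illustration; the algorithm in the study is considerably more involved.

```python
# Hedged sketch of the IRL-style update described above (simplified;
# not the study's actual algorithm). The reward trades off progress
# toward the goal against proximity to the fragile obstacle, and each
# decoded ErrP nudges the proximity penalty upward.

def reward(weights, features):
    """Linear reward: progress toward the goal minus a proximity penalty."""
    return (weights["progress"] * features["progress"]
            - weights["proximity"] * features["proximity"])

def update_on_errp(weights, lr=0.5):
    """One ErrP observation: raise the penalty on getting close."""
    weights["proximity"] += lr
    return weights

weights = {"progress": 1.0, "proximity": 0.2}
state = {"progress": 0.8, "proximity": 0.6}
print(round(reward(weights, state), 2))  # 0.68: proximity barely penalized
update_on_errp(weights)                  # user's ErrP: robot got too close
print(round(reward(weights, state), 2))  # 0.38: same state now scores worse
```

After the update, states near the obstacle score lower, so a planner maximizing this reward keeps a wider berth, which is the behavioral change the users’ ErrPs were asking for.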
In a series of studies, the researchers found that their system could recover a user’s reward function in as few as three demonstrations. The findings suggest their method is “resistant to the ErrP decoder’s natural variability and sub-optimal performance,” a beneficial attribute given that EEG sensing can be noisy.