A brain-machine interface (BMI) is a challenging engineering and neuroscience technology that aims to provide a channel from the brain to the outside world by mapping, aiding, augmenting, or repairing human cognitive or sensory-motor processes. Many computational intelligence approaches, such as deep learning and transfer learning, have emerged in recent years. Deep learning has achieved considerable success in image and video analysis, natural language processing, and speech recognition, and it is now beginning to appear in BMI research.
Although computational intelligence methods for studying brain-machine interfaces have grown in popularity, many fundamental problems remain unsolved: learning deep representations of EEG-based BMI data from various modalities, mapping data from one modality to another to accomplish cross-source BMI data analysis, and identifying and exploiting relations between elements drawn from multiple sources.
Recent research led by researchers at Georgia Tech's Center for Human-Centric Interfaces and Engineering has produced a BMI system that uses a neural network to analyze electroencephalography (EEG) data and can operate a robotic arm, a wheelchair, or a video game: users control the device merely by visualizing a movement, with a sensing system more comfortable to wear than previous ones. The system is a soft wireless scalp-mounted device that reads neural impulses from the human brain via EEG and converts them into action. The team calls it Soft Scalp Electronics (SSE), a wearable wireless EEG device for reading human brain waves.
Soft Scalp Electronics (SSE) consists of three primary components, all portable and wireless:
- A low-profile, flexible circuit.
- Laser-machined stretchable and flexible interconnects.
- Numerous flexible microneedle electrodes for mounting on the hairy scalp.
Unlike traditional EEG devices, which use rigid electrodes attached to the skin with gel or paste, the SSE employs multiple microneedle electrodes set in a flexible headband, making it simple to set up and comfortable to use for long periods. The device's electronics, including a wireless link to a Bluetooth controller, are built on flexible substrates. The device records EEG signals generated during motor imagery (MI), in which the user imagines moving their hands or feet; the data is processed by a convolutional neural network (CNN) and used to operate a virtual reality (VR) video game.
Compared with conventional electrodes, the SSE sensors use microneedles that penetrate the scalp's dry, dead outer skin cells, together with a flexible construction that conforms to the wearer's head, reducing relative motion and achieving a higher signal-to-noise ratio (SNR). The microneedles are arranged in six groups, each roughly a 6 mm × 6 mm square, giving them higher spatial resolution than traditional electrodes.
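To make the SNR comparison concrete, the snippet below computes SNR in decibels for a synthetic 10 Hz EEG-like signal under two noise levels. The signal, the noise amplitudes, and the `snr_db` helper are illustrative assumptions for this sketch, not measurements from the study.

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    p_signal = np.mean(np.square(signal))
    p_noise = np.mean(np.square(noise))
    return 10.0 * np.log10(p_signal / p_noise)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
eeg = np.sin(2 * np.pi * 10 * t)                    # idealized 10 Hz alpha-band component
motion_noise = 0.5 * rng.standard_normal(t.size)    # larger motion artifact (assumed level)
reduced_noise = 0.1 * rng.standard_normal(t.size)   # reduced relative motion (assumed level)

print(f"SNR with motion artifact:  {snr_db(eeg, motion_noise):.1f} dB")
print(f"SNR with reduced motion:   {snr_db(eeg, reduced_noise):.1f} dB")
```

Halving noise amplitude quarters noise power, so each factor-of-five reduction in artifact amplitude adds about 14 dB of SNR, which is why mechanically stabilizing the electrode contact pays off so directly.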
The SSE does have a significant limitation, however: because it measures signals through the skin, the skull, and intervening tissue, researchers must continually improve the device's quality to obtain better readings, while also refining the data analysis to improve accuracy.
To use the SSE device as a controller, the researchers trained a CNN model to classify the signals collected by the sensors while users imagined performing an action, such as opening and closing a hand or moving a foot. The model achieved 93 percent accuracy on test data, outperforming similar systems reported in earlier studies. The team used the system as a controller for a "rhythm-type video game," in which players must complete specific tasks within a set amount of time to earn points; test users achieved nearly the highest possible score with minimal mental effort, "only missing a few points per 5 minute game session."
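The article does not describe the network's architecture, but a compact 1D CNN of the kind commonly used for motor-imagery EEG classification might look like the following PyTorch sketch. The channel count (matching the six electrode groups), window length, class count, and layer sizes are all assumptions, not the study's actual design.

```python
import torch
import torch.nn as nn

class MotorImageryCNN(nn.Module):
    """Hypothetical compact CNN for classifying motor-imagery EEG windows."""

    def __init__(self, n_channels=6, n_samples=250, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, padding=12),  # temporal filtering
            nn.BatchNorm1d(16),
            nn.ELU(),
            nn.AvgPool1d(4),                                        # downsample in time
            nn.Conv1d(16, 32, kernel_size=11, padding=5),           # higher-level features
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.AvgPool1d(4),
        )
        # Two 4x pools shrink the time axis by 16 overall.
        self.classifier = nn.Linear(32 * (n_samples // 16), n_classes)

    def forward(self, x):
        # x: (batch, channels, samples) -> (batch, n_classes) logits
        f = self.features(x)
        return self.classifier(f.flatten(1))

model = MotorImageryCNN()
logits = model(torch.randn(8, 6, 250))  # batch of 8 one-second windows at 250 Hz (assumed rate)
print(logits.shape)  # torch.Size([8, 4])
```

In practice such a model would be trained with cross-entropy loss on labeled imagery trials; the per-class logits would then drive the game controls in real time.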
Source: https://www.infoq.com/news/2021/09/georgia-tech-brain-interface/