Researchers at UCSF and UC Berkeley Develop a Brain-Computer Interface (BCI) That Enables a Woman with Severe Paralysis from a Brainstem Stroke to Speak Through a Digital Avatar

Artificial intelligence now plays a central role in speech and facial recognition. In this line of work, brain signals are recorded and decoded into speech and facial movements, and modern AI models can translate those signals into text at increasingly high speeds. Decoding speech directly from the brain is still a young problem, yet AI has already delivered impressive results, drawing on techniques from the broader field of natural language processing. Researchers have long sought AI systems that can restore speech and facial expression to people living with paralysis.

Researchers at UC San Francisco and UC Berkeley have now developed such a Brain-Computer Interface (BCI): a direct communication pathway between the brain's electrical impulses and an external device, here speech-decoding software driving a digital avatar. With it, a woman left severely paralyzed by a brainstem stroke can once again speak through the avatar. The researchers' aim is to restore full, embodied communication, the most natural way for people to interact, and in doing so offer a solution for patients who have lost the ability to speak. The team implanted a thin, rectangular array of electrodes on the surface of the woman's brain, over regions critical for speech. The electrodes intercept the brain signals that would otherwise travel to her speech muscles. To train the system, she repeated phrases from a conversational vocabulary over and over while the researchers refined the AI algorithms that recognize her brain-activity patterns, steadily improving the system's accuracy. Crucially, they also trained the model to decode the signals into phonemes, the sound sub-units from which words are built, rather than into whole words. This further raised the system's accuracy and made it considerably faster than before.
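To see why phoneme-level decoding scales better than whole-word classification, consider that a decoder only has to distinguish a few dozen phonemes, after which a pronunciation lexicon can assemble them into any word in the vocabulary. The toy lexicon, ARPAbet-style phoneme labels, and greedy matching below are illustrative assumptions for this sketch, not the study's actual method:

```python
# Hypothetical sketch: a phoneme classifier's output stream is turned
# into words by matching against a pronunciation lexicon. Growing the
# vocabulary only grows this table, not the classifier itself.

# Toy pronunciation lexicon (assumption; a real system would use a much
# larger vocabulary plus a language model).
LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("HH", "AW"): "how",
    ("AA", "R"): "are",
    ("Y", "UW"): "you",
}

def decode_words(phoneme_stream):
    """Greedily match the longest phoneme sequence found in the lexicon."""
    words, i = [], 0
    while i < len(phoneme_stream):
        match = None
        # Try the longest remaining candidate first.
        for length in range(len(phoneme_stream) - i, 0, -1):
            candidate = tuple(phoneme_stream[i:i + length])
            if candidate in LEXICON:
                match = candidate
                break
        if match is None:
            i += 1  # skip an undecodable phoneme
        else:
            words.append(LEXICON[match])
            i += len(match)
    return words

phonemes = ["HH", "AH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]
print(decode_words(phonemes))  # ['hello', 'how', 'are', 'you']
```

A whole-word classifier would need one output class per word; the phoneme route keeps the neural-decoding problem small and fixed while the vocabulary grows.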

To give the system her own voice, the researchers developed an algorithm that synthesizes speech from a recording of her voice made at her wedding. The early output had audible defects, but the team has since worked on improving the quality of the generated voice. They also built a digital avatar to give her a face: a machine learning model merges the avatar with her brain signals, translating them into movements of the avatar's jaw, lips, tongue, and the rest of its face.
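Driving an avatar's face from decoded signals amounts to mapping each frame of articulatory features onto the avatar's control channels, typically with some smoothing so the face does not jitter. The channel names, clamping range, and exponential smoothing below are illustrative assumptions for a minimal sketch, not the published pipeline:

```python
# Hypothetical sketch: each decoded frame of articulatory features
# (jaw, lips, tongue) is clamped to a valid range and smoothed over
# time before it drives the avatar. All names and values are made up.

ARTICULATORS = ["jaw_open", "lip_round", "lip_spread", "tongue_height"]

def clamp(x, lo=0.0, hi=1.0):
    return min(hi, max(lo, x))

def animate(frames, alpha=0.5):
    """Turn a stream of decoded feature frames into smoothed avatar
    control values via exponential smoothing with factor `alpha`."""
    state = {name: 0.0 for name in ARTICULATORS}
    out = []
    for frame in frames:
        for name, value in zip(ARTICULATORS, frame):
            state[name] = (1 - alpha) * state[name] + alpha * clamp(value)
        out.append(dict(state))
    return out

# Two decoded frames; out-of-range values are clamped before smoothing.
frames = [[0.8, -0.1, 0.4, 1.3],
          [0.6,  0.2, 0.4, 0.9]]
for controls in animate(frames):
    print(controls)
```

The smoothing factor trades responsiveness against visual stability; a real system would tune it against latency requirements.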

The researchers are now working toward a wireless connection between the implant and the software, which would be the next version of this system. Its main advantage would be freeing the user from a wired link to external hardware. Various deep learning architectures are being explored for this wireless model, and hyperparameter testing is carried out to improve the decoder's efficiency.
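Hyperparameter testing in its simplest form is an exhaustive sweep over candidate settings, keeping the configuration with the best validation score. The grid, parameter names, and scoring function below are stand-ins for this sketch; a real run would train a decoder on recorded neural data at each setting:

```python
# Hypothetical sketch of hyperparameter testing for a decoder:
# grid search over a few candidate settings, keeping the best one.
import itertools

GRID = {
    "learning_rate": [1e-3, 1e-4],
    "hidden_units": [128, 256],
    "window_ms": [200, 400],
}

def validate(config):
    # Stand-in for training a decoder and measuring validation
    # accuracy; here it just favors lr=1e-4 and larger networks.
    return -abs(config["learning_rate"] - 1e-4) + config["hidden_units"] / 1000

def grid_search(grid):
    """Evaluate every combination in the grid; return the best config."""
    keys = list(grid)
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = validate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg

print(grid_search(GRID))
```

Exhaustive grids grow multiplicatively with each added parameter, which is why larger searches usually switch to random or Bayesian strategies.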

Check out Paper 1, Paper 2, and the Reference Article. All credit for this research goes to the researchers on this project.

Bhoumik Mhatre is a third-year undergraduate at IIT Kharagpur, pursuing a B.Tech + M.Tech program in Mining Engineering with a minor in Economics. A data enthusiast, he currently holds a research internship at the National University of Singapore and is a partner at Digiaxx Company. 'I am fascinated by the recent developments in the field of Data Science and would like to research them.'
