Asymptomatic people infected with Covid-19 do not exhibit the disease’s visible physical symptoms. They are therefore less likely to get tested for the virus and may unknowingly spread the infection to those around them.
Recently, researchers at MIT discovered that asymptomatic people differ from healthy people in the way they cough. Although the differences are not decipherable to the human ear, artificial intelligence can be employed to detect them. The research was supported by Takeda Pharmaceutical Company Limited.
Volunteers submitted forced-cough recordings, and the researchers at MIT trained their model on a large number of these cough and spoken-word samples. When a new cough sample is fed in, the AI model distinguishes asymptomatic infected people from healthy individuals. The team is now incorporating the model into a user-friendly app that would serve as a free, easy, and non-invasive pre-screening tool to identify people likely to be asymptomatically infected with Covid-19. If FDA-approved, it could be adopted on a large scale. A user would then log in to the app daily, cough into their phone, and immediately learn whether they are likely infected.
Before the pandemic’s onset, research groups had already been training algorithms on cell phone recordings of coughs to accurately diagnose conditions such as pneumonia and asthma. Similarly, the MIT team had been developing AI models that analyze forced-cough recordings to detect signs of Alzheimer’s disease, which is associated with neuromuscular degradation, such as weakened vocal cords, along with memory decline.
First, a neural network known as ResNet50 was trained to discriminate sounds associated with different degrees of vocal cord strength. The research showed that the quality of the sound “mmmm” can indicate how weak or strong a person’s vocal cords are. The researchers then developed a speech-sentiment classifier trained on a large dataset of actors intonating emotional states such as neutral, calm, sad, and happy. A third neural network was trained on a database of coughs to discern changes in lung and respiratory performance. Lastly, all three models were combined, with an algorithm overlaid to detect muscular degradation.
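The three-branch design described above can be sketched as an ensemble in which each pretrained network is reduced to a feature vector and a final layer combines them. This is a minimal illustration, not MIT’s actual system: the embedding functions, feature choices, and weights below are hypothetical stand-ins for the real trained networks.

```python
# Hypothetical sketch of a three-branch ensemble with an overlay classifier.
# Each "embedding" function stands in for one of the pretrained networks
# described in the article; a real system would use learned models.
import numpy as np

RNG = np.random.default_rng(0)

def vocal_cord_embedding(audio: np.ndarray) -> np.ndarray:
    # Stand-in for the ResNet50 branch trained on vocal cord strength
    # ("mmmm" sounds); returns a fixed-length feature vector.
    return np.array([audio.std(), np.abs(audio).mean()])

def sentiment_embedding(audio: np.ndarray) -> np.ndarray:
    # Stand-in for the speech-sentiment classifier branch.
    return np.array([audio.max() - audio.min()])

def respiratory_embedding(audio: np.ndarray) -> np.ndarray:
    # Stand-in for the lung/respiratory-performance branch trained on coughs.
    return np.array([np.mean(audio ** 2)])

def combined_score(audio: np.ndarray, weights: np.ndarray, bias: float) -> float:
    # Overlay step: concatenate the three branch embeddings and apply a
    # simple logistic layer to produce a probability-like risk score.
    features = np.concatenate([
        vocal_cord_embedding(audio),
        sentiment_embedding(audio),
        respiratory_embedding(audio),
    ])
    return float(1.0 / (1.0 + np.exp(-(features @ weights + bias))))

audio = RNG.standard_normal(16000)       # one second of fake 16 kHz audio
weights = RNG.standard_normal(4) * 0.1   # untrained, illustrative weights
score = combined_score(audio, weights, bias=0.0)
print(round(score, 3))                   # score always lies in (0, 1)
```

The point of the sketch is the structure: each branch contributes evidence independently, and only the small overlay layer needs to be tuned for the final detection task.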
A remarkable relationship
The team found growing evidence that patients infected with the coronavirus experience neurological symptoms similar to those of Alzheimer’s patients, such as temporary neuromuscular impairment. So they asked whether their AI framework for Alzheimer’s could work for diagnosing Covid-19 as well.
The sounds of talking and coughing are both shaped by the vocal cords and the organs surrounding them. When an individual talks, part of their speech resembles a cough, and vice versa. AI can pick up from a cough the same cues we derive from speech, such as a person’s gender, mother tongue, age, or even emotional state; the team says there is sentiment embedded in how an individual coughs. Given this similarity, they tested the Alzheimer’s biomarkers on Covid-19 data and confirmed that they carried over.
The team discovered that the AI framework originally meant for Alzheimer’s detected patterns in the four biomarkers (vocal cord strength, lung and respiratory performance, sentiment, and muscular degradation) that are specific to Covid-19. The team stated that the model accurately detected 98.5 percent of coughs from people confirmed to have Covid-19, including coughs from asymptomatic individuals.
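To make a detection rate like the reported 98.5 percent concrete, here is how such screening metrics are computed from confusion counts. The counts below are invented for illustration and are not the study’s actual numbers.

```python
# Screening metrics from a confusion matrix; all counts here are made up.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    # Fraction of truly infected coughs that the model flags as positive.
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    # Fraction of healthy coughs that the model correctly leaves unflagged.
    return true_negatives / (true_negatives + false_positives)

# With these hypothetical counts, 197 of 200 infected coughs are caught:
print(f"{sensitivity(197, 3):.1%}")   # -> 98.5%
print(f"{specificity(940, 60):.1%}")  # -> 94.0%
```

For a pre-screening tool, high sensitivity is the priority: a missed infection is costlier than a false alarm, since flagged users can follow up with a definitive lab test.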
The AI model is not intended to determine whether a symptomatic person’s symptoms are due to Covid-19 or other conditions such as the flu or asthma. Its strength lies in its ability to distinguish asymptomatic coughs from healthy coughs.
The team is now working with a company to develop a free pre-screening app based on their AI model. They are also partnering with several hospitals worldwide to collect a larger and more diverse set of cough recordings, which will improve training and strengthen the model’s accuracy.
The team says the pandemic could become a thing of the past if pre-screening tools were in constant, widespread use. They also suggest that such AI models could be incorporated into smart speakers and other listening devices so that people could quickly get an initial assessment of their disease risk.
Shilpi is a Contributor to Marktechpost.com. She is currently in her third year of a B.Tech in computer science and engineering at IIT Bhubaneswar. She has a keen interest in exploring the latest technologies, and she likes to write about different domains and learn about their real-life applications.