Federated learning is an emerging way to train artificial intelligence models on data from multiple sources while keeping that data private at each source. This removes many barriers to data sharing and opens the door to broader collaboration in machine learning research.
Results recently published in Nature Medicine show that federated learning can build powerful AI models that generalize across healthcare institutions. While the findings focus on healthcare, they suggest the approach could eventually play a significant role in energy, financial services, and manufacturing as well. Spurred by the pandemic, healthcare institutions took matters into their own hands and worked together, demonstrating that institutions in any industry can collaboratively develop predictive AI models, and that such collaboration can set new standards for both accuracy and generalizability, two qualities that rarely go hand in hand.
AI models face many data-related limitations. Data can be biased, and small organizations often lack enough examples or resources to build a representative training set. Even large datasets may not give the complete picture if they come from sources with differing demographics.
Building a robust, generalizable model requires enough training examples. In many cases, however, privacy regulations prevent organizations from sharing patient medical records or pooling datasets on shared supercomputers and cloud servers.
Federated learning addresses these problems. In a study dubbed EXAM (for EMR CXR AI Model), published in Nature Medicine by NVIDIA and Mass General Brigham, researchers collected data from 20 hospitals across five continents. The data was used to train a neural network to predict how much supplemental oxygen a patient arriving at the emergency room with COVID-19 symptoms would need within 24 to 72 hours. According to the research team, this is one of the most extensive federated learning studies to date.
With federated learning, the EXAM collaborators created a model that learned from every participating hospital's chest X-ray images and lab values without ever seeing the private patient data stored at each location. A copy of the neural network was trained on local GPUs at each hospital; the resulting weight updates were periodically sent back, aggregated into a single global version, and redistributed to all collaborating hospitals. It was like producing an exam answer key without sharing any of the study material used to develop the solutions.
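The EXAM paper does not reproduce its aggregation code here, but the periodic weight averaging described above is commonly implemented as federated averaging (FedAvg), where each site's parameters contribute in proportion to its local dataset size. A minimal sketch with toy weight vectors and hypothetical hospital sites (not the actual EXAM implementation):

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """Aggregate locally trained model weights into one global model.

    Each site trains on its own data; only the weights (never the
    patient records) are sent for aggregation. Sites contribute in
    proportion to their number of local training examples.
    """
    total = sum(sample_counts)
    # Weighted sum of each site's parameters.
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Toy example: three hypothetical hospitals, each with a locally
# trained 4-parameter model and a different amount of training data.
site_weights = [
    np.array([0.1, 0.2, 0.3, 0.4]),  # hospital A, 100 samples
    np.array([0.3, 0.1, 0.5, 0.2]),  # hospital B, 300 samples
    np.array([0.2, 0.2, 0.2, 0.2]),  # hospital C, 100 samples
]
site_counts = [100, 300, 100]

global_model = federated_average(site_weights, site_counts)
print(global_model)
```

In a full system this averaging step would run once per communication round, with the updated global model sent back to every site for further local training.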
The EXAM global model, shared with all participating sites, improved the AI model's average performance by 16% and showed 38% greater generalizability compared with models trained at any single site.
Related Paper: https://www.nature.com/articles/s41746-020-00323-1