UT Austin Researchers Demonstrate a Deep Learning Technique That Achieves High-Quality Image Reconstructions Based on MRI Datasets

During a magnetic resonance imaging (MRI) scan, time seems to stand still for many patients. Anyone who has experienced one understands the difficulty of holding perfectly still inside a buzzing, banging scanner for anywhere from a few minutes to more than an hour.

Amazon research award winner and UT Austin researcher Jonathan (Jon) Tamir is working on machine learning methods to shorten exam durations and extract more information from this necessary, but frequently unpleasant, imaging procedure.

Source: https://www.amazon.science/research-awards/success-stories/how-new-machine-learning-techniques-could-improve-mri-machine-images

MRI machines produce images of our insides by measuring the body’s response to powerful magnetic fields and radiofrequency waves, which helps clinicians detect disease and monitor therapies. Like any digital image, an MRI scan begins as raw data. Researchers aim to improve how that data is acquired so that better images can be produced more quickly.

MRI data that lacks a ‘ground truth’

Contrary to how patients inside them may feel, MRI machines move extremely quickly, taking thousands of measurements at intervals of tens or hundreds of milliseconds. The readings depend on the order and frequency with which magnetic fields and radiofrequency pulses are applied to the area being imaged, so clinicians use customized MRI sequences tailored to the body part and the clinical goal.

To achieve the best possible image quality, an MRI technologist would have to collect every feasible measurement, working from low to high spatial frequencies.

Each layer of data adds clarity and detail to the image, but gathering that much data takes far too long. Because speed matters, only a portion of the data can be acquired.
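A small NumPy sketch (using a synthetic square phantom, not real scanner data) illustrates the trade-off: keeping only the low-frequency center of k-space yields a coarse image, and widening the frequency coverage reduces the reconstruction error.

```python
import numpy as np

# Toy phantom standing in for an anatomical image.
image = np.zeros((128, 128))
image[40:88, 40:88] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(image))   # center of the array = low frequencies

def lowpass_recon(kspace, radius):
    """Reconstruct using only frequencies within `radius` of the k-space center."""
    n = kspace.shape[0]
    y, x = np.ogrid[:n, :n]
    mask = (x - n // 2) ** 2 + (y - n // 2) ** 2 <= radius ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))

# Wider frequency coverage -> smaller reconstruction error (more detail).
err_small = np.linalg.norm(lowpass_recon(kspace, 8) - image)
err_large = np.linalg.norm(lowpass_recon(kspace, 48) - image)
print(err_large < err_small)  # True
```

This is why collecting "all feasible measurements" gives the best quality, and why truncating the acquisition costs detail.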

Researchers are therefore trying to improve both the methods for acquiring scans and the image reconstruction algorithms that interpret the raw data. A significant obstacle is the scarcity of “ground-truth” data: as Tamir notes, “Compared to the rest of the machine learning field, that’s a pretty huge issue in medical imaging.”

An MRI’s final image is post-processed down to a few megabytes. The raw measurements, on the other hand, can run to hundreds of megabytes or even gigabytes, and the scanner does not save them. Various research groups have put considerable effort into building high-quality datasets of ground-truth raw data that academics can use to train algorithms, but these datasets remain extremely limited.


Another problem is that many MRIs aren’t static images. They’re videos showing biological processes like the beating of a heart. In some instances, an MRI scanner is not fast enough to obtain fully sampled data.

Sampling at random

Machine learning algorithms are being developed that can learn from limited data and fill in the gaps in images. One possibility is to randomly collect only around 25% of the possible measurements from a scan and train a neural network to reconstruct the full image from the under-sampled data.
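To make the idea concrete, here is a minimal NumPy sketch (not the paper's code) of random 25% k-space undersampling and the naive "zero-filled" reconstruction that a trained network would improve upon. The phantom and the uniform random mask are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.zeros((128, 128))
image[48:80, 48:80] = 1.0              # toy phantom
kspace = np.fft.fft2(image)            # simulated fully sampled measurements

# Keep a random ~25% of k-space samples; zero out the rest.
mask = rng.random(kspace.shape) < 0.25
undersampled = kspace * mask

# Naive zero-filled reconstruction: the baseline a neural network improves on.
zero_filled = np.abs(np.fft.ifft2(undersampled))
full = np.abs(np.fft.ifft2(kspace))

# Undersampling introduces artifacts that a learned model must remove.
print(np.linalg.norm(zero_filled - full) > 0)  # True
```

In practice the network is trained on datasets such as fastMRI so it can infer the missing measurements rather than simply zero-filling them.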

Another option is to utilize machine learning to optimize the sample trajectory.

Random sampling is simple, but machine learning can determine the ideal sampling trajectory by identifying which measurement locations are the most informative.

A deep learning technique that attains high-quality image reconstructions based on under-sampled scans from the fastMRI dataset and the MRIData.org dataset was presented at the Neural Information Processing Systems (NeurIPS) 2021 conference. Both datasets are publicly available for research and education.

Other techniques for image reconstruction have relied on end-to-end supervised learning, which works well when trained on a particular anatomy and measurement model but fails when confronted with the variations that occur in clinical practice.

Instead, the researchers employed a technique known as distribution learning, in which a probabilistic model learns to approximate the distribution of images independently of the measurement process. As a result, the model can still be used when the measurement procedure changes (for example, a different sampling trajectory) or when the imaged anatomy changes (for example, moving from brain scans to knee scans the model has never seen).
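The released code (csgm-mri-langevin) performs posterior sampling with Langevin dynamics driven by a learned score model. The toy sketch below swaps the trained network for the analytic score of a Gaussian prior and uses an identity measurement operator, so it only illustrates the update rule, not the actual method; every quantity here is a made-up stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: true "image" x*, prior N(0, I), measurements y = x* + noise.
x_true = np.array([2.0, -1.0, 0.5])
y = x_true + 0.05 * rng.normal(size=3)
sigma_y = 0.05

def score_prior(x):
    # Analytic score of N(0, I); in the real method this is a trained network.
    return -x

def score_likelihood(x):
    # Gradient of log p(y | x) for an identity "measurement operator".
    return (y - x) / sigma_y**2

# Langevin dynamics: follow the posterior score plus injected noise.
x = rng.normal(size=3)
step = 1e-4
for _ in range(20000):
    grad = score_prior(x) + score_likelihood(x)
    x = x + step * grad + np.sqrt(2 * step) * rng.normal(size=3)

# The chain settles near the posterior mode, close to the measurements.
print(np.allclose(x, y, atol=0.3))
```

Because the prior is learned separately from the measurements, swapping `score_likelihood` for a different measurement operator changes nothing about the prior, which is the source of the method's robustness to new sampling trajectories.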

Related projects focus on representing data using hyperbolic geometry and on unrolled alternating optimization to speed up MRI reconstruction. An open-source MRI simulator, distributed across GPUs, identifies the best scan parameters for a given reconstruction.

A traditional MRI assembles the image using computations based on the fast Fourier transform, a foundational algorithm for decomposing a signal into its frequency components. An inverse fast Fourier transform turns the raw data into an image, and this can happen in a matter of milliseconds.
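For a fully sampled scan, this classical pipeline can be sketched in a few lines of NumPy (a synthetic square phantom stands in for real raw data):

```python
import numpy as np

# Simulated fully sampled k-space: the 2-D Fourier transform of a test image.
image = np.zeros((128, 128))
image[32:96, 32:96] = 1.0          # simple square "phantom"
kspace = np.fft.fft2(image)        # raw MRI measurements live in this frequency domain

# Classical reconstruction: one inverse fast Fourier transform.
reconstruction = np.fft.ifft2(kspace)

# With full sampling, the reconstruction matches the original (up to float error).
print(np.allclose(np.abs(reconstruction), image))  # True
```

The simplicity of this single inverse transform is what makes conventional reconstruction essentially instantaneous.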

In the machine learning work, however, the Fourier transform is applied hundreds or thousands of times, with additional kinds of computation layered on top. These calculations are carried out in the cloud on Amazon Web Services. From both a scientific and a clinical perspective, the ability to do this as rapidly as possible is critical: even if the raw measurement approach speeds up the MRI, the clinician must still inspect the image quality while the patient is present.

AWS Lambda was used to break the image reconstruction down pixel by pixel, delivering small pieces of data to multiple Lambda nodes, performing the computation on each, and then gathering the results.
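The fan-out/gather pattern can be sketched locally with a thread pool standing in for Lambda nodes. Chunking by rows rather than individual pixels, and the trivial per-chunk computation, are illustrative assumptions, not the article's actual partitioning.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def reconstruct_chunk(args):
    """Hypothetical stand-in for a Lambda handler: process one chunk of rows."""
    chunk, row_start = args
    # Each worker runs the per-chunk computation; here, a trivial magnitude op.
    return row_start, np.abs(chunk)

# Simulated complex-valued intermediate data (ones, round-tripped through the FFT).
data = np.fft.ifft2(np.fft.fft2(np.ones((64, 64))))
chunks = [(data[i:i + 16], i) for i in range(0, 64, 16)]

# Fan out chunks to workers (stand-in for Lambda nodes), then gather results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(reconstruct_chunk, chunks))

# Reassemble the chunks in row order into the final array.
reassembled = np.vstack([results[i] for i in sorted(results)])
print(reassembled.shape)  # (64, 64)
```

In the cloud version, each worker would be a Lambda invocation instead of a thread, but the partition-compute-gather structure is the same.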

Paper: https://arxiv.org/pdf/2108.01368.pdf

Github: https://github.com/utcsilab/csgm-mri-langevin

Reference: https://www.amazon.science/research-awards/success-stories/how-new-machine-learning-techniques-could-improve-mri-machine-images