UCLA and University of Houston (UH) researchers used deep learning to train a neural network that reconstructs OCT images from undersampled spectral data. The method achieved high image quality, free of spatial artifacts, while using significantly less input data than standard approaches. With conventional reconstruction methods, undersampled spectral data typically produces severe spatial artifacts.
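To see why naive reconstruction of undersampled spectra fails, consider a toy sketch (not the authors' method): in Fourier-domain OCT, an A-line is obtained by Fourier-transforming the spectral interferogram, and discarding every other spectral point halves the unambiguous depth range, so deep reflectors alias to wrong positions. The sample count and depth value below are illustrative assumptions.

```python
import numpy as np

# Illustrative toy model of one OCT A-line, not the paper's pipeline.
N = 1024          # spectral samples per A-line (assumed value)
true_depth = 700  # reflector depth in FFT bins, beyond N/2
k = np.arange(N)

# Complex analytic interferogram of a single reflector (simplified model).
spectrum = np.exp(2j * np.pi * k * true_depth / N)

# Full-spectrum reconstruction: the FFT peak lands at the true depth bin.
peak_full = int(np.argmax(np.abs(np.fft.fft(spectrum))))

# 2x undersampling with the same standard FFT reconstruction: the peak
# aliases to true_depth mod (N//2), i.e. a spurious reflector position.
peak_under = int(np.argmax(np.abs(np.fft.fft(spectrum[::2]))))

print(peak_full, peak_under)  # 700 188
```

The aliased peak at bin 188 (instead of 700) is the kind of spatial artifact the trained network is designed to suppress.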
The researchers first demonstrated the efficacy of their deep-learning framework by training a neural network on mouse embryo samples imaged with swept-source OCT. They then trained a single image reconstruction network and tested it on several human tissue types, reserving one sample of each type for blind testing. In both phases, the network consistently achieved high-quality reconstructions, demonstrating its effectiveness across sample types.
Running on multiple GPUs, the neural network reconstructs 512 A-lines in just 0.59 ms from twofold undersampled spectral data (640 points per A-line). The trained network removed the spatial artifacts caused by undersampling, including those arising from missing pixels and missing lines of the spectrum.
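The reported figures imply a substantial A-line throughput; the arithmetic below simply works out the rate from the numbers quoted above.

```python
# Implied reconstruction throughput from the reported figures.
alines = 512        # A-lines reconstructed per batch
t_seconds = 0.59e-3 # reported reconstruction time (0.59 ms)

rate = alines / t_seconds  # A-lines per second
print(f"{rate:,.0f} A-lines/s")  # roughly 868,000 A-lines/s
```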
The images produced by the trained network closely matched those reconstructed from the full spectral OCT data.
The method can also process 3× undersampled spectral data per A-line, with some degradation in reconstructed image quality compared to the 2× case. This shows that the approach can be extended to trade reconstruction quality for speed, further accelerating imaging by using even fewer spectral points per line.
This deep-learning-based image reconstruction method is an exciting advance for OCT imaging. The framework requires no hardware changes to the optical setup and can be integrated with existing OCT systems to shorten image acquisition times.