Researchers Introduce OncoPetNet: A Deep Learning Based AI System For Mitotic Figure Counting in a Veterinary Diagnostic Lab

Source: https://arxiv.org/pdf/2108.07856.pdf

Artificial intelligence (AI) has transformed industries all over the world, including the healthcare sector. From fitness bracelets to glucose monitoring, modern technology allows anyone to accurately and effectively monitor their health status at any time. AI’s promising advances in healthcare delivery, early detection, and diagnostics give everyone, including our pets, a better chance of receiving earlier and more effective treatment.

Studies suggest that one in four dogs and one in five cats may develop cancer at some point in their lives. Cancer strikes dogs at roughly the same rate as humans, but there is less data on cancer incidence in cats.

According to pathologists, the rate of cell duplication in a tumor indicates its severity. Mitotic count (MC), the count of cells that are currently dividing, is an essential quantitative metric and indicator in the diagnostic workup of a pet with cancer.

Pathology samples can be enormous and require multi-site sampling, resulting in up to 20 slides for a single tumor. Furthermore, the selection and quantitative assessment of mitotic figures by human experts is time-consuming and subjective.

Recent developments in deep learning (DL) and the availability of large datasets have enabled algorithms to match professional performance on various medical imaging tasks, including skin cancer classification. Previous studies have explored using DL on whole-slide pathology images (WSIs) to automate mitotic figure counting. However, due to limited datasets and a lack of practice-based evidence from clinical translation, little is known about the technology’s relevance in real clinical settings.

A new study conducted by researchers at Mars Digital Technologies, Antech Diagnostics, and Stanford University presents OncoPetNet, a deep learning (DL)-based system that automates the detection and quantification of mitotic figures on H&E-stained whole-slide images (WSIs) without excluding any tumor types. The work also examines how the model was deployed, and its subsequent impact, in a high-throughput environment processing thousands of slides.

Data Collection

The team organized their data collection to support two distinct sub-tasks that the mitotic counting AI system needed to complete to achieve their goals. The data collection process was done independently for each sub-task:

  • Slide classification to determine if mitotic counting should occur: Labels came from a pathologist’s workflow labeling tool used for biopsy slide classification.
  • Counting mitotic figures: Board-certified veterinary anatomic pathologists examined both snippets of differing sizes and WSIs of hematoxylin and eosin (H&E)-stained slides to generate training sets of mitotic figures. Typically, such training sets are labeled only for location, represented as one or a few pixels at the mitotic figure’s center. The researchers instead annotated the full pixel masks of mitotic cells. Although more time-consuming, this labeling strategy was simple to implement and allowed more information to flow during training.

The dataset includes images at two resolutions: 150×150 pixels and 600×600 pixels. Because of how mitotic figures are distributed, smaller images are often relatively easy for domain experts to label, while larger images take more work. Using larger images at inference time, however, can offer speed and accuracy benefits. All networks were therefore trained by drawing batches from both subsets and backpropagating the combined loss.
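A minimal sketch of this dual-resolution training loop, assuming a fully convolutional model that accepts both crop sizes (the datasets, batch sizes, and the placeholder model below are illustrative stand-ins, not the paper's actual configuration):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins for the two labeled subsets (150x150 and 600x600 crops),
# each paired with a binary per-pixel mitotic-figure mask.
small = TensorDataset(torch.randn(8, 3, 150, 150),
                      torch.randint(0, 2, (8, 1, 150, 150)).float())
large = TensorDataset(torch.randn(8, 3, 600, 600),
                      torch.randint(0, 2, (8, 1, 600, 600)).float())

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # placeholder segmentation head
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for (xs, ys), (xl, yl) in zip(DataLoader(small, batch_size=4),
                              DataLoader(large, batch_size=2)):
    # One batch from each resolution subset per step; a fully convolutional
    # model handles both sizes, and the two losses are summed before backprop.
    loss = loss_fn(model(xs), ys) + loss_fn(model(xl), yl)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Summing the two losses lets a single backward pass update the shared weights with gradient signal from both resolutions.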

Model Architecture

They trained their biopsy slide classification model as a convolutional neural network. An ImageNet-pretrained residual neural network (ResNet) served as the backbone for feature extraction, followed by a fully connected classification layer. Because inference time was critical, a ResNet-18 (18 layers deep) was chosen as the best compromise between computational cost and performance.

Furthermore, they framed the identification of mitotic figures as a segmentation problem, combining a DL model with deterministic post-processing. This approach enabled fast inference and highly adaptable model training.

Their deep learning model was built in PyTorch and comprised three convolutional neural networks based on an encoder-decoder architecture with skip connections (U-Net). They used EfficientNet-b5, EfficientNet-b3, and SE-ResNeXt as encoders, all pre-trained on ImageNet.
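The encoder-decoder-with-skip-connections shape can be illustrated with a toy U-Net. This stand-in is untrained and far smaller than the paper's EfficientNet/SE-ResNeXt encoders; it only shows how a skip connection joins high-resolution encoder features to the decoder:

```python
import torch
from torch import nn

class MiniUNet(nn.Module):
    """Toy encoder-decoder with one skip connection (U-Net shape).
    Illustrative only: OncoPetNet uses ImageNet-pretrained encoders."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Conv2d(32, 1, 3, padding=1)  # 32 = 16 upsampled + 16 skip

    def forward(self, x):
        s = self.enc1(x)                       # high-resolution features (skip)
        y = self.enc2(self.down(s))            # bottleneck features
        y = torch.cat([self.up(y), s], dim=1)  # skip connection
        return self.dec(y)                     # per-pixel mitotic-figure logits

mask_logits = MiniUNet()(torch.randn(1, 3, 64, 64))  # same spatial size out
```

The output is a per-pixel logit map at the input resolution, which downstream deterministic post-processing can threshold into mitotic-figure masks.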

OncoPetNet is designed to process every scan as it is uploaded, in real time. Because pathology images are very large, the team built a cluster of GPU-backed computers physically co-located with the digital scanners. When a new image is scanned, the system is notified and the image is downloaded to one of the cluster’s available servers.

The system performs the following multi-step process once resources in the inference cluster are available and the slide has been downloaded:

  1. The WSI is downloaded to the local machine, and 224×224 thumbnails are extracted.
  2. Biopsy slide classification: if the classifier returns ‘no-count,’ the process terminates and the database is updated with a no-count status. If it returns ‘count,’ the process moves to the next phase.
  3. Tissue detection: a non-model-based algorithm detects the foreground (tissue). The resolution is reduced until each pixel represents one detector sliding window; thresholding is then applied to each pixel, and the coordinates of tissue-containing pixels are returned.
  4. The mitotic figure detector model is run on all of those coordinates.
  5. Once the pixel mask has been predicted, morphological filters are applied to improve predictive performance. The refined pixel masks are then used to compute mitotic figure coordinates with a non-model-based approach.
  6. The 10 high-power fields (10 HPF) containing the most mitotic figures are found using a k-d tree-based approach.
  7. Mitotic figure and 10-HPF coordinates are written to an XML file, and the results are reported to a database.
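Steps 3 and 6 above can be sketched with NumPy and SciPy. All inputs, thresholds, and the fixed-radius neighborhood below are hypothetical stand-ins (the true 10-HPF geometry and thresholds are not given in this article):

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_tissue(gray_lowres, threshold=0.8):
    """Non-model tissue detection: return (row, col) coordinates of
    foreground pixels in a heavily downsampled grayscale thumbnail,
    where each pixel stands for one detector sliding window."""
    return np.argwhere(gray_lowres < threshold)  # tissue is darker than glass

def densest_region(figure_xy, radius=500.0):
    """Return the mitotic-figure coordinate whose fixed-radius neighborhood
    holds the most figures, using a k-d tree for fast radius queries."""
    tree = cKDTree(figure_xy)
    counts = [len(tree.query_ball_point(p, r=radius)) for p in figure_xy]
    return figure_xy[int(np.argmax(counts))]

gray = np.ones((50, 50)); gray[10:30, 10:30] = 0.3     # fake slide thumbnail
tissue = detect_tissue(gray)                           # windows to run on
figs = np.array([[0, 0], [10, 10], [12, 11], [11, 13]])  # fake detections
center = densest_region(figs, radius=5.0)              # densest cluster
```

A per-query radius search is simple but O(n) queries; for very large slides the same k-d tree supports batched queries over all points at once.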

Results 

The researchers used a dataset of 41 canine and feline cancer cases to assess the accuracy of the AI-determined mitotic count. The cases span 14 different cancer types, with mitotic counts ranging from very low to very high.

The original manual mitotic counts (non-AI-assisted MC) were available for each slide, performed by two pathologists under conventional diagnostic pathology workflow conditions. To obtain the AI-only MC, the model performed mitotic figure annotation and counting on all slides. The results show that the AI-only MC was higher than the non-AI-assisted MC across all tumor types.


According to the authors, this is the first time a DL system has been successfully deployed at scale for real-time, expert-level performance on an essential histopathology task in a high-volume clinical practice.


Paper: https://arxiv.org/pdf/2108.07856.pdf

Source: https://medium.com/pytorch/how-ai-is-helping-vets-to-help-our-pets-e6e3d58c052e
