Breast cancer is one of the leading causes of cancer death among women worldwide. Owing to its adaptability, safety, and high sensitivity, ultrasound imaging is currently the most widely used and most effective method for detecting breast cancer. Computer-aided diagnosis systems typically treat breast lesion detection as a crucial step in helping radiologists diagnose breast cancer with ultrasonography.
However, accurate breast lesion detection in ultrasound videos is difficult because of hazy lesion boundaries, uneven intensity distributions, and lesion sizes and positions that vary across frames of a dynamic recording.
To detect breast lesions in ultrasound videos, existing approaches typically either perform breast lesion segmentation or detection on 2D ultrasound images or fuse unlabeled videos with labeled 2D images. Since convolutional neural networks have dominated results in medical imaging, it is highly desirable to move breast lesion detection from the image level to the video level, because videos can exploit temporal consistency to resolve many in-frame ambiguities.
The main barrier to this extension has been the lack of an ultrasound video dataset with breast lesion annotations, both of which are necessary for training deep models for breast lesion segmentation in ultrasound videos.
Researchers from several universities in China and Singapore recently produced an annotated video dataset for breast lesion diagnosis in ultrasonography. Building on it, they proposed a novel network that improves breast lesion detection in ultrasound videos by combining video-level classification features with clip-level temporal features, which include local temporal features from the input video frames and global temporal information from shuffled video frames.
An intra-video fusion module was created to fuse temporal features encoded among adjacent video frames, while an inter-video fusion module was designed to carefully combine local features from the original video frames with global features from the shuffled video frames. Experimental results on the annotated dataset show that the network achieves new state-of-the-art performance in detecting breast lesions in ultrasound videos.
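To make the two fusion stages concrete, here is a minimal NumPy sketch of how clip-level features from an ordered clip and a shuffled clip might be fused. All function names, the attention-style intra-video pooling, and the sigmoid-gated inter-video blend are illustrative assumptions, not the paper's actual modules:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_video_fusion(clip_feats):
    """Fuse per-frame features of one clip into a single clip-level feature.
    clip_feats: (T, C) array of features for T consecutive frames.
    Frames more similar to the clip mean get higher attention weight
    (an illustrative stand-in for the paper's intra-video fusion module)."""
    mean = clip_feats.mean(axis=0, keepdims=True)       # (1, C)
    weights = softmax(clip_feats @ mean.T, axis=0)      # (T, 1) attention
    return (weights * clip_feats).sum(axis=0)           # (C,)

def inter_video_fusion(local_feat, global_feat):
    """Blend the local feature (ordered clip) with the global feature
    (shuffled clip) via an element-wise sigmoid gate (again illustrative)."""
    gate = 1.0 / (1.0 + np.exp(-(local_feat - global_feat)))
    return gate * local_feat + (1.0 - gate) * global_feat

rng = np.random.default_rng(0)
ordered = rng.standard_normal((8, 16))   # 8 frames, 16-dim features per frame
shuffled = ordered[rng.permutation(8)]   # same frames in shuffled order

local_feat = intra_video_fusion(ordered)     # local temporal feature
global_feat = intra_video_fusion(shuffled)   # global temporal feature
fused = inter_video_fusion(local_feat, global_feat)
print(fused.shape)  # (16,)
```

The point of the sketch is only the data flow: per-frame features are first pooled within each clip (intra-video), then the ordered-clip and shuffled-clip representations are combined (inter-video) before any detection head.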
In summary, the researchers gathered and annotated 188 videos, creating the first ultrasound video dataset for breast lesion diagnosis, and presented a feature aggregation network that enhances breast lesion detection in ultrasound videos at the clip and video levels. The network's main goal was to merge clip-level features with video-level features for detecting breast tumors in videos; clip-level features were learned by applying intra-video and inter-video fusion modules to the input ordered video and a shuffled video. According to experimental results on the annotated dataset, the network outperformed state-of-the-art approaches in breast lesion detection. Future work may involve gathering more video data, investigating a more systematic strategy, or performing more challenging video shuffle operations.
This article is written as a summary article by Marktechpost Staff based on the research paper 'A New Dataset and A Baseline Model for Breast Lesion Detection in Ultrasound Videos'. All credit for this research goes to the researchers on this project. Check out the paper and GitHub repository.
Nitish is a computer science undergraduate with a keen interest in deep learning. He has worked on various deep learning projects and closely follows new advancements in the field.