A DEEP HYBRID LEARNING FRAMEWORK FOR RELIABLE BREAST CANCER DIAGNOSIS USING ULTRASOUND IMAGING
Keywords:
Breast Cancer Diagnosis, Deep Learning, Ultrasound Imaging, ResNet-50, Vision Transformer, Hybrid Model, Feature Fusion, Transfer Learning

Abstract
Breast cancer is one of the leading causes of death among women worldwide, which underscores the importance of early and precise diagnosis. Conventional interpretation of ultrasound images depends heavily on the radiologist's skill and is therefore subject to diagnostic subjectivity and variability. This paper examines how transfer learning can be applied with the ResNet-50 architecture and a Vision Transformer model (ViT-B_16) to detect breast cancer in ultrasound images. We also propose a new hybrid model that integrates the feature representations of the two networks at the feature level, leveraging the local detail perception of CNNs alongside the broader contextual modeling of Transformers. The hybrid model substantially outperformed the individual networks, achieving an accuracy of 98.67%, with precision, recall, and F1-score each at approximately 99%. These findings demonstrate the potential of our hybrid deep learning strategy as a robust, real-time, and clinically applicable approach to automated breast cancer detection from ultrasound imaging.