CLASSIFICATION OF CHEST RADIOGRAPHS USING NOVEL ANOMALOUS SALIENCY MAP AND DEEP CONVOLUTIONAL NEURAL NETWORK
DOI: https://doi.org/10.31436/iiumej.v22i2.1752
Keywords: Saliency mapping, Chest Radiograph, Convolutional Neural Network
Abstract
The rapid advancement of pattern recognition via deep learning has made it possible to develop autonomous medical image classification systems. Such systems have proven robust and accurate in classifying most pathological features found in medical images, such as airspace opacity, masses, and broken bones. Conventionally, these systems take routine medical images with minimal pre-processing as the model's input; in this research, we investigate whether saliency maps can serve as an alternative model input. Recent research has shown that applying saliency maps increases deep learning model performance in image classification, object localization, and segmentation. However, conventional bottom-up saliency map algorithms regularly fail to localize salient or pathological anomalies in medical images, because most medical images are homogeneous and lack color and contrast variation. We therefore introduce the Xenafas algorithm, which creates two new kinds of anomalous saliency map: the Intensity Probability Mapping and the Weighted Intensity Probability Mapping. We tested the proposed saliency maps on five deep learning models based on common convolutional neural network architectures. The results show that using the proposed saliency maps instead of plain chest radiograph images increases the sensitivity of most models in identifying images with airspace opacities. Using the Grad-CAM algorithm, we show how the proposed saliency maps shift the models' attention to the relevant regions of chest radiograph images. In the qualitative study, the proposed saliency maps regularly highlighted anomalous features, including foreign objects and cardiomegaly, but were inconsistent in highlighting masses and nodules.
ABSTRAK: The rapid development of pattern recognition systems using deep learning methods has enabled the creation of automatic medical image classification systems. These systems can accurately assess whether pathological signs, such as airspace opacity, masses, and broken bones, are present in a medical image. Conventionally, such a system takes medical images with minimal pre-processing as input. This study examines the potential of saliency maps as an alternative model input, since recent work has shown that saliency maps can improve deep learning model performance in image classification, object detection, and image segmentation. However, conventional bottom-up saliency map algorithms typically fail to detect salient or pathological anomalies in medical images, owing to the homogeneous nature of these images and their lack of color and contrast variation. This study therefore introduces the Xenafas algorithm, which produces two kinds of anomalous saliency maps, namely the Intensity Probability Mapping and the Weighted Intensity Probability Mapping. The proposed saliency maps were evaluated on five deep learning models based on common convolutional neural network architectures. The findings show that using the proposed saliency maps over chest radiograph images improves the sensitivity of most models in identifying images with airspace opacity. Using the Grad-CAM algorithm, the proposed saliency maps were shown to shift the model's focus to the relevant regions of the chest radiograph. Meanwhile, the qualitative study also shows that the proposed algorithm consistently highlights anomalous features, including foreign objects and cardiomegaly, although it is inconsistent in highlighting masses and nodules.
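The abstract names the Intensity Probability Mapping and Weighted Intensity Probability Mapping but does not define them. As a loose illustration only, the NumPy sketch below (all function names and the weighting scheme are assumptions, not the paper's Xenafas algorithm) replaces each pixel with the empirical probability of its gray level across the image, so that rare intensities, which in an otherwise homogeneous radiograph often correspond to anomalies, can be made to stand out:

```python
import numpy as np

def intensity_probability_map(image: np.ndarray, bins: int = 256) -> np.ndarray:
    """Hypothetical sketch: map each pixel to the empirical probability of its
    gray level over the whole image. NOT the paper's exact algorithm, which is
    not specified in the abstract."""
    flat = image.ravel()
    hist, _ = np.histogram(flat, bins=bins, range=(0, bins))
    prob = hist / flat.size                        # empirical intensity distribution
    idx = np.clip(flat.astype(int), 0, bins - 1)   # each pixel indexes its own bin
    return prob[idx].reshape(image.shape)

def weighted_intensity_probability_map(image: np.ndarray) -> np.ndarray:
    """Hypothetical weighting: invert the probability map so improbable (rare)
    intensities score high, then scale by normalized pixel brightness."""
    p = intensity_probability_map(image)
    return (1.0 - p / p.max()) * (image / 255.0)
```

Under this reading, a mostly dark radiograph with one bright foreign object would yield a map that is near zero over the background and high at the object, which matches the abstract's claim that the maps highlight anomalous features such as foreign bodies.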
References
Itti L, Koch C, Niebur E. (1998) A model of saliency-based visual attention for rapid scene analysis. IEEE Trans Pattern Anal Mach Intell. 20: 1254-1259. DOI: https://doi.org/10.1109/34.730558
Liu N. and Han J. (2016) Dhsnet: Deep hierarchical saliency network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; pp 678-686. DOI: https://doi.org/10.1109/CVPR.2016.80
Rahtu E, Kannala J, Salo M, Heikkilä J. (2010) Segmenting salient objects from images and videos. In European Conference on Computer Vision; pp 366-379. DOI: https://doi.org/10.1007/978-3-642-15555-0_27
Itti L, Koch C. (2001) Computational modelling of visual attention. Nature Reviews Neuroscience, 2: 194-203. DOI: https://doi.org/10.1038/35058500
Cheng M-M, Zhang Z, Lin W-Y, Torr P. (2014) BING: Binarized normed gradients for objectness estimation at 300fps. In Proceedings of the IEEE conference on computer vision and pattern recognition; pp 3286-3293. DOI: https://doi.org/10.1109/CVPR.2014.414
Montabone S, Soto A. (2010) Human detection using a mobile platform and novel features derived from a visual saliency mechanism. Image Vis Comput., 28: 391-402 DOI: https://doi.org/10.1016/j.imavis.2009.06.006
Hou X, Zhang L. (2007) Saliency detection: A spectral residual approach. In IEEE Conference on Computer vision and Pattern Recognition; pp 1-8. DOI: https://doi.org/10.1109/CVPR.2007.383267
Borji A, Cheng M-M, Hou Q, Jiang H, Li J. (2019) Salient object detection: A survey. Comput Vis Media; pp 1-34 DOI: https://doi.org/10.1007/s41095-019-0149-9
Castillo JC, Tong Y, Zhao J, Zhu F. RSNA bone-age detection using transfer learning and attention mapping. [http://noiselab.ucsd.edu/ECE228_2018/Reports/Report6.pdf]
Rajpurkar P, Irvin J, Ball RL, Zhu K, Yang B, Mehta H, Duan T, Ding D, Bagul A, Langlotz CP, others. (2018) Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med., 15: e1002686 DOI: https://doi.org/10.1371/journal.pmed.1002686
Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A. (2016) Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; pp 2921-2929. DOI: https://doi.org/10.1109/CVPR.2016.319
Shen L, Margolies LR, Rothstein JH, Fluder E, McBride R, Sieh W. (2019) Deep learning to improve breast cancer detection on screening mammography. Sci. Rep., 9: 12495. DOI: https://doi.org/10.1038/s41598-019-48995-4
Springenberg JT, Dosovitskiy A, Brox T, Riedmiller M. (2015) Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806
Ding Y, Sohn JH, Kawczynski MG, Trivedi H, Harnish R, Jenkins NW, Lituiev D, Copeland TP, Aboian MS, Mari Aparici C, others. (2019) A deep learning model to predict a diagnosis of Alzheimer disease by using 18F-FDG PET of the brain. Radiology, 290: 456-464 DOI: https://doi.org/10.1148/radiol.2018180958
Norman B, Pedoia V, Noworolski A, Link TM, Majumdar S. (2019) Applying densely connected convolutional neural networks for staging osteoarthritis severity from plain radiographs. J. Digit. Imaging, 32: 471-477 DOI: https://doi.org/10.1007/s10278-018-0098-3
Oh K, Kim W, Shen G, Piao Y, Kang N-I, Oh I-S, Chung YC. (2019) Classification of schizophrenia and normal controls using 3D convolutional neural network and outcome visualization. Schizophr Res., 212: 186-195 DOI: https://doi.org/10.1016/j.schres.2019.07.034
Arun NT, Gaw N, Singh P, Chang K, Hoebel KV, Patel J, Gidwani M, Kalpathy-Cramer J. (2020) Assessing the validity of saliency maps for abnormality localization in medical imaging. arXiv preprint arXiv:2006.00063
Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. (2017) Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision; pp 618-626. DOI: https://doi.org/10.1109/ICCV.2017.74
Schlemper J, Oktay O, Schaap M, Heinrich M, Kainz B, Glocker B, Rueckert D. (2019) Attention gated networks: Learning to leverage salient regions in medical images. arXiv preprint arXiv:1808.08114 DOI: https://doi.org/10.1016/j.media.2019.01.012
Pesce E, Withey SJ, Ypsilantis P-P, Bakewell R, Goh V, Montana G. (2019) Learning to detect chest radiographs containing pulmonary lesions using visual attention networks. Med Image Anal, 53: 26-38 DOI: https://doi.org/10.1016/j.media.2018.12.007
Deeba F, Bui FM, Wahid KA. (2020) Computer-aided polyp detection based on image enhancement and saliency-based selection. Biomed Signal Process Control, 55: 101530. DOI: https://doi.org/10.1016/j.bspc.2019.04.007
Fan H, Xie F, Li Y, Jiang Z, Liu J. (2017) Automatic segmentation of dermoscopy images using saliency combined with Otsu threshold. Comput Biol Med, 85: 75-85 DOI: https://doi.org/10.1016/j.compbiomed.2017.03.025
Khan MA, Akram T, Sharif M, Saba T, Javed K, Lali IU, Tanik UJ, Rehman A. (2019) Construction of saliency map and hybrid set of features for efficient segmentation and classification of skin lesion. Microsc Res Tech., 82: 741-763. DOI: https://doi.org/10.1002/jemt.23220
Rahmat T, Ismail A, Aliman S. (2018) Chest x-rays image classification in medical image analysis. Appl Med Inform., 40: 63-73.
Otsu N. (1979) A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern, 9: 62-66. DOI: https://doi.org/10.1109/TSMC.1979.4310076
Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H. (2017) Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861
Huang G, Liu Z, Weinberger KQ. (2017) Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; pp 4700-4708 DOI: https://doi.org/10.1109/CVPR.2017.243
He K, Zhang X, Ren S, Sun J. (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; pp 770-778 DOI: https://doi.org/10.1109/CVPR.2016.90
Simonyan K, Zisserman A. (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556
Chollet F. (2017) Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; pp 1251-1258 DOI: https://doi.org/10.1109/CVPR.2017.195
Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L. (2009) Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition; pp 248-255. DOI: https://doi.org/10.1109/CVPR.2009.5206848
Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers RM. (2017) Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; pp 2097-2106 DOI: https://doi.org/10.1109/CVPR.2017.369
Majkowska A, Mittal S, Steiner DF, Reicher JJ, McKinney SM, Duggan GE, Eswaran K, Cameron Chen P-H, Liu Y, Kalidindi SR, et al. (2020) Chest radiograph interpretation with deep learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation. Radiology, 294: 421-431 DOI: https://doi.org/10.1148/radiol.2019191293
License
Copyright (c) 2021 IIUM Press

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.