An interpretable deep learning framework for medical diagnosis using spectrogram analysis

Shagufta Henna, Juan Miguel Lopez Alcaraz, Upaka Rathnayake, Mohamed Amjath

Research output: Contribution to journal › Article › peer-review

Abstract

Convolutional Neural Networks (CNNs) are widely utilized for their robust feature extraction capabilities, particularly in medical classification tasks. However, their opaque decision-making process presents challenges in clinical settings, where interpretability and trust are paramount. This study investigates the explainability of a custom CNN model developed for Covid-19 and non-Covid-19 classification using dry cough spectrograms, with a focus on interpreting filter-level representations and decision pathways. To improve model transparency, we apply a suite of explainable artificial intelligence (XAI) techniques, including feature visualizations, SmoothGrad, Grad-CAM, and LIME, which highlight the relevance of spectro-temporal features in the classification process. Furthermore, we conduct a comparative analysis with a pre-trained MobileNetV2 model using Guided Grad-CAM and Integrated Gradients. The results indicate that while MobileNetV2 yields some degree of visual attribution, its explanations, particularly for Covid-19 predictions, are diffuse and inconsistent, limiting their interpretability. In contrast, the custom CNN model exhibits more coherent and class-specific activation patterns, offering improved localization of diagnostically relevant features.
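The abstract does not expose implementation details, but the following minimal sketch illustrates how a Grad-CAM heatmap of the kind described above can be produced for a spectrogram classifier in PyTorch. The SpectrogramCNN architecture, the grad_cam helper, and the 128×128 input size are illustrative assumptions for this sketch, not the authors' actual model or code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectrogramCNN(nn.Module):
    """Hypothetical small CNN for two-class spectrogram classification (not the paper's model)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        fmap = self.features(x)                          # last conv feature maps
        pooled = F.adaptive_avg_pool2d(fmap, 1).flatten(1)
        return self.classifier(pooled), fmap


def grad_cam(model, spectrogram, target_class):
    """Compute a Grad-CAM heatmap for one spectrogram tensor of shape (1, H, W)."""
    model.eval()
    x = spectrogram.unsqueeze(0)                         # add batch dimension
    logits, fmap = model(x)
    fmap.retain_grad()                                   # keep gradients w.r.t. the feature maps
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients per channel
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=spectrogram.shape[-2:],
                        mode="bilinear", align_corners=False)
    cam = cam / (cam.max() + 1e-8)                       # normalise to [0, 1]
    return cam.squeeze().detach()


if __name__ == "__main__":
    model = SpectrogramCNN()
    dummy_spec = torch.randn(1, 128, 128)                # placeholder mel-spectrogram
    heatmap = grad_cam(model, dummy_spec, target_class=1)
    print(heatmap.shape)                                 # torch.Size([128, 128])
```

In use, the resulting heatmap would be overlaid on the input spectrogram so that high-activation regions indicate which time-frequency areas most influenced the predicted class, which is how class-specific localization of diagnostically relevant features is typically assessed.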

Original language: English
Article number: 100408
Journal: Healthcare Analytics
Volume: 8
DOIs
Publication status: Published - Dec 2025

Keywords

  • Deep learning
  • Feature interpretation
  • Healthcare classification
  • Medical prediction
  • Neural network analysis
  • Pattern recognition
