A novel machine learning approach for diagnosing diabetes with a self-explainable interface

Gangani Dharmarathne, Thilini N. Jayasinghe, Madhusha Bogahawaththa, D. P.P. Meddage, Upaka Rathnayake

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

This study introduces the first self-explainable interface for diagnosing diabetes patients using machine learning. We propose four classification models (Decision Tree (DT), K-Nearest Neighbor (KNN), Support Vector Classification (SVC), and Extreme Gradient Boosting (XGB)) based on the publicly available diabetes dataset. To elucidate the inner workings of these models, we employed the machine learning interpretation method known as Shapley Additive Explanations (SHAP). All the models exhibited commendable accuracy in diagnosing patients with diabetes, with the XGB model showing a slight edge over the others. Utilising SHAP, we delved into the XGB model, providing in-depth insights into the reasoning behind its predictions at a granular level. Subsequently, we integrated the XGB model and SHAP's local explanations into an interface to predict diabetes in patients. This interface serves a critical role: it diagnoses patients and offers transparent explanations for the decisions made, giving users a heightened awareness of their current health conditions. Given the high-stakes nature of the medical field, the developed interface can be further enhanced by including more extensive clinical data, ultimately aiding medical professionals in their decision-making processes.
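The pipeline described in the abstract (train a gradient-boosted classifier, then attach per-prediction Shapley explanations) can be sketched as follows. This is a minimal illustration, not the paper's code: the paper uses XGBoost and the SHAP package on the public diabetes dataset, whereas this sketch substitutes a scikit-learn gradient-boosted model, a synthetic 8-feature dataset, and a hand-rolled Monte Carlo Shapley estimator, since the exact data preparation and implementation are not given in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for the diabetes dataset (8 features, as in the Pima dataset)
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=0)

# Stand-in for the paper's XGB classifier
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def monte_carlo_shapley(predict, x, background, n_perm=300, seed=0):
    """Estimate Shapley values for one prediction by sampling feature
    permutations: each feature's value is credited with the change in model
    output when it is switched from a random reference row to the instance."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    phi = np.zeros(n)
    for _ in range(n_perm):
        order = rng.permutation(n)
        z = background[rng.integers(len(background))].copy()  # reference row
        prev = predict(z)
        for j in order:
            z[j] = x[j]                 # reveal feature j's true value
            cur = predict(z)
            phi[j] += cur - prev        # marginal contribution of feature j
            prev = cur
    return phi / n_perm

predict = lambda row: model.predict_proba(row[None])[0, 1]
x = X[0]                                # one patient to explain
phi = monte_carlo_shapley(predict, x, X)
base = model.predict_proba(X)[:, 1].mean()   # expected model output
# Shapley additivity (approximately): base + phi.sum() == predict(x),
# so phi gives a per-feature breakdown of this patient's predicted risk.
```

In the paper's interface, the analogous local explanation would come from `shap.TreeExplainer` on the trained XGB model; the additivity property shown in the last lines is what lets the interface present each feature's signed contribution to an individual diagnosis.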

Original language: English
Article number: 100301
Journal: Healthcare Analytics
Volume: 5
DOIs
Publication status: Published - Jun 2024

Keywords

  • Diabetes
  • Diagnosis
  • Healthcare
  • Machine learning
  • Predictive analytics
  • Self-explainable interface

