Interpretable AI for Stroke Prediction: A Structured Approach Using Explainable AI Techniques

Authors

  • Lazeena Ranak Department of Computer Science, International Islamic University Malaysia, Kuala Lumpur, Malaysia
  • Sharyar Wani Department of Computer Science, International Islamic University Malaysia, Kuala Lumpur, Malaysia

Keywords:

explainable artificial intelligence (XAI), stroke prediction, interpretable machine learning, SCI-XAI pipeline, clinical decision-making

Abstract

The lack of interpretability in AI-driven healthcare diagnostics poses a significant challenge to clinical adoption. This study explores the methodological integration of explainable artificial intelligence (XAI) tools within the SCI-XAI pipeline, using an open clinical prediction dataset for stroke risk prediction. We apply multiple machine learning models, ranging from white-box approaches (Logistic Regression, Decision Tree, Explainable Boosting Machine) to black-box models (Random Forest, XGBoost, LightGBM, and Multi-Layer Perceptron), and evaluate their trade-offs between predictive accuracy and explainability using techniques such as SHAP, LIME, and ELI5. The study uses a systematic approach involving pre-modeling, modeling, and post-modeling phases, aiming to improve model interpretability for potential use in clinical decision-support contexts. The experimental results show that ensemble models achieve superior accuracy, while traditional models provide inherent transparency. However, the SCI-XAI framework demonstrated that post-hoc explainability tools can extend such transparency to complex models. SHAP-based feature importance analysis identifies age, glucose levels, and BMI as the most influential predictors of stroke. The integration of structured explainability into AI-based diagnostics helps bridge the gap between algorithmic prediction and clinical interpretability, offering a methodological foundation for more transparent decision-support systems.
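The post-modeling phase described above can be sketched in a few lines of Python. This is an illustrative example, not the authors' code: it trains a black-box Random Forest on synthetic data shaped like the stroke dataset and ranks features with scikit-learn's permutation importance, a dependency-light, model-agnostic stand-in for the SHAP attribution used in the study. The feature names are hypothetical stand-ins for the dataset's attributes.

```python
# Sketch of a post-hoc attribution step: fit a black-box model, then rank
# features by how much shuffling each one degrades held-out accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the stroke dataset's clinical attributes.
feature_names = ["age", "avg_glucose_level", "bmi",
                 "hypertension", "heart_disease", "smoking_status"]

# Synthetic binary-outcome data in place of the real clinical records.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: mean accuracy drop over 10 shuffles per feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranking = sorted(zip(feature_names, result.importances_mean),
                 key=lambda pair: pair[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

SHAP would additionally provide per-patient attributions (why *this* individual was flagged), which is what makes it suitable for the clinical decision-support setting the abstract describes; the global ranking above only answers which features matter on average.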

References

H. O’Brien Quinn, M. Sedky, J. Francis, and M. Streeton, “Literature Review of Explainable Tabular Data Analysis,” Electronics (Basel), vol. 13, no. 19, p. 3806, Sep. 2024, doi: 10.3390/electronics13193806.

A. M. Antoniadi et al., “Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review,” Applied Sciences, vol. 11, no. 11, 2021, doi: 10.3390/app11115088.

P. A. Moreno-Sanchez, “An automated feature selection and classification pipeline to improve explainability of clinical prediction models,” in Proceedings - 2021 IEEE 9th International Conference on Healthcare Informatics, ISCHI 2021, Institute of Electrical and Electronics Engineers Inc., Aug. 2021, pp. 527–534. doi: 10.1109/ICHI52183.2021.00100.

U. Pawar, D. O’shea, S. Rea, and R. O’reilly, “Incorporating Explainable Artificial Intelligence (XAI) to aid the Understanding of Machine Learning in the Healthcare Domain.”

J. Ospel, N. Singh, A. Ganesh, and M. Goyal, “Sex and Gender Differences in Stroke and Their Practical Implications in Acute Care,” Jan. 01, 2023, Korean Stroke Society. doi: 10.5853/jos.2022.04077.

K. M. Rexrode, T. E. Madsen, A. Y. X. Yu, C. Carcel, J. H. Lichtman, and E. C. Miller, “The Impact of Sex and Gender on Stroke,” Circ Res, vol. 130, no. 4, pp. 512–528, Feb. 2022, doi: 10.1161/CIRCRESAHA.121.319915.

M. Wajngarten and G. Sampaio Silva, “Hypertension and stroke: Update on treatment,” European Cardiology Review, vol. 14, no. 2, pp. 111–115, 2019, doi: 10.15420/ecr.2019.11.1.

W. Kim and E. J. Kim, “Heart failure as a risk factor for stroke,” Jan. 01, 2018, Korean Stroke Society. doi: 10.5853/jos.2017.02810.

C. Zhu et al., “The association of marital/partner status with patient-reported health outcomes following acute myocardial infarction or stroke: Protocol for a systematic review and meta-analysis,” Nov. 01, 2022, Public Library of Science. doi: 10.1371/journal.pone.0267771.

E. S. Eshak et al., “Changes in the Employment Status and Risk of Stroke and Stroke Types,” Stroke, vol. 48, no. 5, pp. 1176–1182, May 2017, doi: 10.1161/STROKEAHA.117.016967.

O. Grimaud et al., “Stroke incidence and case fatality according to rural or urban residence results from the French Brest Stroke Registry,” Stroke, vol. 50, no. 10, pp. 2661–2667, Oct. 2019, doi: 10.1161/STROKEAHA.118.024695.

X. Peng et al., “Longitudinal Average Glucose Levels and Variance and Risk of Stroke: A Chinese Cohort Study,” Int J Hypertens, vol. 2020, 2020, doi: 10.1155/2020/8953058.

K. Miwa et al., “Clinical impact of body mass index on outcomes of ischemic and hemorrhagic strokes,” International Journal of Stroke, Oct. 2024, doi: 10.1177/17474930241249370.

J. Chen et al., “Impact of Smoking Status on Stroke Recurrence,” J Am Heart Assoc, vol. 8, no. 8, Apr. 2019, doi: 10.1161/JAHA.118.011696.

M. S. Islam, I. Hussain, M. M. Rahman, S. J. Park, and M. A. Hossain, “Explainable Artificial Intelligence Model for Stroke Prediction Using EEG Signal,” Sensors, vol. 22, no. 24, Dec. 2022, doi: 10.3390/s22249859.

A. Laios et al., “Factors Predicting Surgical Effort Using Explainable Artificial Intelligence in Advanced Stage Epithelial Ovarian Cancer,” Cancers (Basel), vol. 14, no. 14, Jul. 2022, doi: 10.3390/cancers14143447.

S. K. Mandala, “XAI Renaissance: Redefining Interpretability in Medical Diagnostic Models,” Jun. 2023, [Online]. Available: http://arxiv.org/abs/2306.01668

V. Petrauskas et al., “XAI-based Medical Decision Support System Model,” International Journal of Scientific and Research Publications (IJSRP), vol. 10, no. 12, pp. 598–607, Dec. 2020, doi: 10.29322/ijsrp.10.12.2020.p10869.

T. A. J. Schoonderwoerd, W. Jorritsma, M. A. Neerincx, and K. van den Bosch, “Human-centered XAI: Developing design patterns for explanations of clinical decision support systems,” International Journal of Human Computer Studies, vol. 154, Oct. 2021, doi: 10.1016/j.ijhcs.2021.102684.

J. Stodt, M. Madan, C. Reich, L. Filipovic, and T. Ilijas, “A Study on the Reliability of Visual XAI Methods for X-Ray Images,” in Studies in Health Technology and Informatics, IOS Press BV, Jun. 2023, pp. 32–35. doi: 10.3233/SHTI230416.

S. Alkhalaf et al., “Adaptive Aquila Optimizer with Explainable Artificial Intelligence-Enabled Cancer Diagnosis on Medical Imaging,” Cancers (Basel), vol. 15, no. 5, Mar. 2023, doi: 10.3390/cancers15051492.

R. El Shawi, Y. Sherif, M. Al-Mallah, and S. Sakr, “Interpretability in HealthCare: A Comparative Study of Local Machine Learning Interpretability Techniques,” in 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS), IEEE, Jun. 2019, pp. 275–280. doi: 10.1109/CBMS.2019.00065.

S. S, K. Chadaga, N. Sampathila, S. Prabhu, R. Chadaga, and S. K. S, “Multiple Explainable Approaches to Predict the Risk of Stroke Using Artificial Intelligence,” Information, vol. 14, no. 8, p. 435, Aug. 2023, doi: 10.3390/info14080435.

B. S. D. Darshan et al., “Differential diagnosis of iron deficiency anemia from aplastic anemia using machine learning and explainable Artificial Intelligence utilizing blood attributes,” Sci Rep, vol. 15, no. 1, p. 505, Jan. 2025, doi: 10.1038/s41598-024-84120-w.

B. Khokhar, V. Pentangelo, F. Palomba, and C. Gravino, “Towards Transparent and Accurate Diabetes Prediction Using Machine Learning and Explainable Artificial Intelligence.” [Online]. Available: https://www.kaggle.com/datasets/

S. Lolak, J. Attia, G. J. McKay, and A. Thakkinstian, “Comparing Explainable Machine Learning Approaches With Traditional Statistical Methods for Evaluating Stroke Risk Models: Retrospective Cohort Study,” JMIR Cardio, vol. 7, 2023, doi: 10.2196/47736.

S. Kruschel, N. Hambauer, S. Weinzierl, S. Zilker, M. Kraus, and P. Zschech, “Challenging the Performance-Interpretability Trade-Off: An Evaluation of Interpretable Machine Learning Models,” Business and Information Systems Engineering, 2025, doi: 10.1007/s12599-024-00922-2.

Published

30-01-2026

How to Cite

Ranak, L., & Wani, S. (2026). Interpretable AI for Stroke Prediction: A Structured Approach Using Explainable AI Techniques. International Journal on Perceptive and Cognitive Computing, 12(1), 102–118. Retrieved from https://journals.iium.edu.my/kict/index.php/IJPCC/article/view/636