ENHANCING TRUST AND EFFICACY IN HEALTHCARE AI: A SYSTEMATIC REVIEW OF MODEL PERFORMANCE AND INTERPRETABILITY WITH HUMAN-COMPUTER INTERACTION AND EXPLAINABLE AI

Authors

  • Mohanarangan Veerappermal Devarajan

Keywords:

Human-Computer Interaction (HCI), Explainable Artificial Intelligence (XAI), Healthcare, AI Model Interpretability, Support Vector Classifier (SVC), Model-Agnostic Explanation Techniques

Abstract

For AI solutions to become more reliable and effective in the healthcare industry, Human-Computer Interaction (HCI) and Explainable Artificial Intelligence (XAI) must come together. This work thoroughly reviews several AI models, including the Support Vector Classifier (SVC), Sequential models, the Random Forest Classifier, GaussianNB, and K-Nearest Neighbours (KNN), with an emphasis on their interpretability and performance. SVC achieved the highest accuracy (0.8771), followed by Sequential models (0.8559), Random Forest (0.8517), GaussianNB (0.8474), and KNN (0.7881). The study highlights the importance of transparent and intuitive AI systems in promoting adoption by medical professionals. Model-agnostic explanation methods such as SHAP and LIME are recognised as key instruments for enhancing interpretability. The review emphasises the need to strike a balance between explainability and AI model performance in order to guarantee trustworthy and understandable AI applications in healthcare contexts.
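The model comparison described in the abstract can be sketched with scikit-learn. Note this is a minimal illustrative sketch: the synthetic dataset, default hyperparameters, and the omission of the Sequential (neural-network) model are assumptions for brevity, not the review's actual experimental setup, so the accuracies it prints will differ from those reported above.

```python
# Illustrative comparison of the classifier families discussed in the review,
# evaluated by accuracy on a held-out split of a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a clinical dataset (assumption for illustration).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVC": SVC(),
    "KNN": KNeighborsClassifier(),
    "GaussianNB": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.4f}")
```

On top of any of these fitted models, model-agnostic tools such as SHAP or LIME can then be applied to explain individual predictions, which is the interpretability layer the review pairs with raw performance.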

Published

12-12-2023

How to Cite

ENHANCING TRUST AND EFFICACY IN HEALTHCARE AI: A SYSTEMATIC REVIEW OF MODEL PERFORMANCE AND INTERPRETABILITY WITH HUMAN-COMPUTER INTERACTION AND EXPLAINABLE AI. (2023). International Journal of Engineering Research and Science & Technology, 19(4), 9-31. https://ijerst.org/index.php/ijerst/article/view/197