ENHANCING TRUST AND EFFICACY IN HEALTHCARE AI: A SYSTEMATIC REVIEW OF MODEL PERFORMANCE AND INTERPRETABILITY WITH HUMAN-COMPUTER INTERACTION AND EXPLAINABLE AI
Keywords:
Human-Computer Interaction (HCI), Explainable Artificial Intelligence (XAI), Healthcare, AI Model Interpretability, Support Vector Classifier (SVC), Model-Agnostic Explanation Techniques

Abstract
For AI solutions in healthcare to become more reliable and effective, Human-Computer Interaction (HCI) and Explainable Artificial Intelligence (XAI) must converge. This work thoroughly reviews several AI models, including the Support Vector Classifier (SVC), Sequential neural networks, the Random Forest Classifier, GaussianNB, and K-Nearest Neighbours (KNN), with an emphasis on their interpretability and performance. SVC achieved the highest accuracy (0.8771), followed by Sequential models (0.8559), Random Forest (0.8517), GaussianNB (0.8474), and KNN (0.7881). The study highlights the significance of transparent, intuitive AI systems in promoting their adoption by medical professionals. Model-agnostic explanation methods such as SHAP and LIME are recognised as key instruments for enhancing interpretability; illustrative sketches of the benchmark and the explanation workflow follow this abstract. The review emphasises the need to balance explainability against AI model performance in order to guarantee trustworthy and understandable AI applications in healthcare contexts.
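
To make the comparison concrete, the following is a minimal scikit-learn sketch of the kind of accuracy benchmark the abstract reports. The dataset here is a synthetic placeholder (make_classification), not the study's clinical data, so the published accuracies will not be reproduced; the Keras Sequential model is omitted to keep the snippet dependency-free.

```python
# Minimal sketch of the accuracy comparison described in the abstract.
# The healthcare dataset is not available here, so synthetic data stands
# in as a placeholder; results will differ from the reported figures.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Placeholder data: replace with the clinical feature matrix and labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

models = {
    "SVC": SVC(probability=True, random_state=42),
    "KNN": KNeighborsClassifier(),
    "GaussianNB": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.4f}")
```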
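Likewise, a hedged sketch of how SHAP and LIME can be applied model-agnostically to one of the fitted classifiers above. Variable names (models, X_train, X_test) carry over from the previous snippet, and both packages are assumptions for illustration, not part of the reviewed study's published code.

```python
# Model-agnostic explanation sketches for the fitted SVC, assuming the
# `shap` and `lime` packages are installed and the variables from the
# previous snippet are in scope. Both tools need only a prediction
# function, so any of the models could be explained the same way.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer

svc = models["SVC"]

# SHAP: KernelExplainer estimates per-feature attributions against a
# small background sample to keep the kernel estimation tractable.
background = shap.sample(X_train, 50, random_state=42)
shap_explainer = shap.KernelExplainer(svc.predict_proba, background)
shap_values = shap_explainer.shap_values(X_test[:5])
print("SHAP attributions shape:", np.shape(shap_values))

# LIME: fits a local surrogate model around one patient's record and
# reports the most influential features for that single prediction.
lime_explainer = LimeTabularExplainer(X_train, mode="classification")
explanation = lime_explainer.explain_instance(
    X_test[0], svc.predict_proba, num_features=5
)
print("LIME explanation:", explanation.as_list())
```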
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.