Explainable Artificial Intelligence (XAI)-Based Intrusion Detection System
DOI: https://doi.org/10.62643/

Keywords: Intrusion Detection, LSTM, Explainable AI, SHAP, LIME, Network Security, CIC-IDS2017

Abstract
Intrusion Detection Systems (IDS) are critical for protecting computer networks from cyber threats. Traditional signature-based detection systems are inadequate against novel and evolving attacks. This paper proposes an Explainable AI-based IDS using a Long Short-Term Memory (LSTM) neural network trained on the CIC-IDS2017 dataset. The model classifies network traffic as normal or malicious with high accuracy. To enhance transparency, SHAP and LIME explainability techniques are integrated, providing both global feature importance and local prediction explanations. The system is deployed using Django, where users input network features and receive real-time predictions with confidence scores and visual explanation graphs. Experimental results demonstrate 96.8% detection accuracy with an AUC of 0.98, while SHAP analysis reveals that flow duration, packet length, and flag counts are the most influential features for intrusion detection.
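The abstract describes an LSTM network that consumes network-flow features sequentially. As a rough illustration of the recurrence involved, the following is a minimal single-cell LSTM forward pass in NumPy with random weights; the dimensions (8 flow features, hidden size 4) and the classifier head are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. x is the current input vector; (h_prev, c_prev)
    are the previous hidden and cell states. W, U, b hold the stacked
    input/forget/candidate/output gate parameters."""
    z = W @ x + U @ h_prev + b            # stacked pre-activations, shape (4H,)
    H = h_prev.shape[0]
    i = 1 / (1 + np.exp(-z[0:H]))         # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))       # forget gate
    g = np.tanh(z[2*H:3*H])               # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:4*H]))     # output gate
    c = f * c_prev + i * g                # new cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 8, 4                               # illustrative: 8 flow features, hidden size 4
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for _ in range(5):                        # a 5-step flow sequence
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)

# The final hidden state would be fed to a sigmoid head for the
# normal-vs-malicious decision described in the abstract.
score = 1 / (1 + np.exp(-h.sum()))
print(h.shape, 0.0 <= score <= 1.0)
```

In the reported system this recurrence is trained on CIC-IDS2017 flows, and SHAP/LIME are applied on top of the trained model to attribute each prediction to input features such as flow duration and packet length.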
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.