APPLICATIONS OF EXPLAINABLE ARTIFICIAL INTELLIGENCE IN PHARMACOVIGILANCE: ADVANCING PATIENT SAFETY
DOI: https://doi.org/10.62643/ijerst.2025.v21.i2.pp654-665

Abstract
Recently, a number of sectors have emphasised the need for Explainable AI (XAI), an approach that opens up the "black box" of artificial intelligence. The aim of this study is to identify research in the field of pharmacovigilance that makes use of XAI. Although several screening passes were carried out, only 25 of the 781 papers that were carefully reviewed met the selection criteria. This article provides an intuitive overview of the potential of XAI technologies in pharmacovigilance. The included studies examined drug treatment, adverse effects, and drug-drug interactions using tree models, neural network models, and graph models applied to clinical data, registry data, and knowledge data. Ultimately, several significant obstacles were identified across the research questions concerning the use of XAI in pharmacovigilance. Although artificial intelligence (AI) is actively used in patient safety and drug monitoring, including collecting adverse drug reaction data, extracting drug-drug interactions, and forecasting effects, XAI itself is not yet widely employed. Thus, both the potential difficulties of applying it and the opportunities it offers for the future warrant ongoing discussion.
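To make the kind of workflow surveyed here more concrete, the following is a minimal, hypothetical sketch of explaining a tree-based adverse drug reaction (ADR) classifier with SHAP. The feature names, synthetic data, and model choice are illustrative assumptions, not taken from any study included in the review.

```python
# Hypothetical sketch: SHAP explanations for a tree-based ADR classifier.
# All feature names and data below are synthetic and for illustration only.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic patient-level features (stand-ins for clinical/registry variables).
feature_names = ["age", "dose_mg", "n_concomitant_drugs", "renal_function"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic ADR label, loosely driven by dose and number of concomitant drugs.
y = ((0.8 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0).astype(int)

# Tree model, one of the model families discussed in the reviewed studies.
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to per-feature SHAP contributions,
# turning the black-box score into an additive, per-patient explanation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: mean absolute SHAP contribution per feature.
for name, importance in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {importance:.3f}")
```

The same pattern extends to neural network or graph models by swapping in a model-agnostic explainer, which is the general approach the reviewed studies take.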
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.