REAL-TIME LIVE EVENT DETECTION FOR PUBLIC SAFETY USING DEEP LEARNING AND NLP
DOI: https://doi.org/10.62643/ijerst.2025.v21.i2.pp1016-1025

Abstract
Owing to growing crowd sizes and the hazards associated with them, maintaining public safety during live events has become a critical concern in recent years. Using auditory data and the LightGBM classifier, this work proposes a novel method for live event detection aimed at public safety. The system analyses real-time audio feeds to detect anomalies that may indicate potential safety risks, such as loud noises, explosions, or unusual crowd behaviour. Audio features are extracted using signal-processing techniques including Mel-frequency cepstral coefficients (MFCCs), chroma features, and spectral contrast. These features are fed into a LightGBM classifier, which provides reliable and efficient classification of event types and potential hazards in real time. To ensure a thorough understanding of both normal and abnormal patterns, the proposed technique is evaluated on diverse datasets containing audio samples from live events such as concerts, sporting events, and emergency situations. The LightGBM model's high precision, low latency, and scalability make it well suited to real-time applications. The system also incorporates a feedback loop for ongoing model refinement based on new audio input. The results demonstrate that the system can improve situational awareness and proactively alert authorities to potential threats, enabling prompt intervention. This approach represents a significant step towards using acoustic analytics and machine learning to improve public safety at live events.
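The abstract outlines a pipeline of audio feature extraction (MFCCs, chroma, spectral contrast) followed by LightGBM classification. The sketch below illustrates one plausible way to implement that pipeline; it assumes the librosa and lightgbm Python packages, and all function names, parameter values, and class labels are illustrative rather than taken from the paper.

```python
# Minimal sketch of the audio-classification pipeline described in the abstract.
# Assumes librosa for feature extraction and lightgbm for the classifier;
# names, hyperparameters, and labels are hypothetical, not the paper's.
import numpy as np
import librosa
import lightgbm as lgb


def extract_features(audio_path, sr=22050):
    """Extract MFCC, chroma, and spectral-contrast features from one clip."""
    y, sr = librosa.load(audio_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    # Summarise frame-level features by mean and standard deviation
    # to obtain a fixed-length vector per clip.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        chroma.mean(axis=1), chroma.std(axis=1),
        contrast.mean(axis=1), contrast.std(axis=1),
    ])


def train_classifier(X_train, y_train):
    """Fit a LightGBM classifier on pre-extracted feature vectors.
    Example (hypothetical) labels: 0 = normal crowd noise, 1 = explosion, 2 = scream.
    """
    clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05, num_leaves=31)
    clf.fit(X_train, y_train)
    return clf


def classify_clip(clf, audio_path):
    """Return the predicted event class and class probabilities for a clip."""
    feats = extract_features(audio_path).reshape(1, -1)
    probs = clf.predict_proba(feats)[0]
    return int(np.argmax(probs)), probs
```

In a real-time deployment, the same feature extraction would be applied to short sliding windows of the incoming audio stream, with high-risk predictions forwarded to an alerting component and misclassified clips fed back into the training set, consistent with the feedback loop the abstract describes.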
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.