Smart Traffic Signal Control Using Deep Reinforcement Learning and Real-Time Video Analytics
DOI:
https://doi.org/10.62643/ijerst.2026.v22.n1(2).pp214-218

Keywords:
Adaptive Traffic Signal Control; YOLOv8; Deep Reinforcement Learning; Deep Q-Network; Dual Targeting Algorithm; Polygon Zone Detection; Indian Traffic Dataset

Abstract
Conventional traffic signal systems rely on preprogrammed fixed-timer cycles that assign green phases regardless of actual road congestion, causing measurable urban delay. This paper presents an adaptive traffic signal control system combining real-time YOLOv8 vehicle detection with a Deep Q-Network (DQN) agent trained using the Dual Targeting Algorithm (DTA). The detector is trained on a custom Indian traffic dataset with three vehicle classes (bike, bus, car), achieving a mAP@0.5 of 0.87. Four directional polygon zones (North, South, East, West) map the physical junction to software zones and compute normalised congestion levels (Dt) every five seconds. A 12-dimensional state vector encodes congestion levels, congestion trends, and active phase information. The DRL agent learns to assign green time to the most congested lane using a shaped reward that penalises unnecessary phase switching. Experiments on a 138-second real aerial traffic video (27 decision steps) show the DTA agent achieves an average congestion of 0.187, compared to 0.210 for a greedy baseline and 0.280 for fixed-timer control, representing a 33.2% improvement over the conventional approach. Results are consistent with benchmarks reported by Kodama et al. (IEEE Access, 2022).
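The state encoding and shaped reward summarised above can be sketched as follows. This is a minimal illustration assuming a per-zone capacity normalisation for Dt, a difference-based trend, a one-hot active phase, and a fixed switch-penalty weight; the paper's exact definitions, capacities, and weights are not given in the abstract.

```python
# Hypothetical sketch of the 12-dim state and shaped reward described in the
# abstract. Zone capacities, trend definition, and the switch-penalty weight
# are illustrative assumptions, not values from the paper.

def congestion_level(vehicle_count: int, zone_capacity: int) -> float:
    """Normalised congestion Dt in [0, 1] for one directional polygon zone."""
    return min(vehicle_count / zone_capacity, 1.0)

def build_state(levels, prev_levels, active_phase):
    """12-dim state: 4 congestion levels + 4 trends + 4-way one-hot phase."""
    trends = [cur - prev for cur, prev in zip(levels, prev_levels)]
    phase_onehot = [1.0 if i == active_phase else 0.0 for i in range(4)]
    return list(levels) + trends + phase_onehot

def shaped_reward(levels, switched: bool, switch_penalty: float = 0.1) -> float:
    """Negative mean congestion, minus a penalty when the phase is switched,
    discouraging unnecessary switching."""
    return -sum(levels) / len(levels) - (switch_penalty if switched else 0.0)
```

For example, with zone levels [0.5, 0.2, 0.8, 0.1] and East (index 2) active, `build_state` yields a 12-element vector the DQN can consume directly; keeping the phase on the most congested zone avoids the switch penalty while the negative mean congestion drives the agent to clear queues.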
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.