SIGN LANGUAGE RECOGNITION USING CNN AND HAND GESTURES TRACKING

Authors

  • K.MADHURIMA
  • M.MANEESHA

DOI:

https://doi.org/10.62643/

Keywords:

Sign Language Recognition, Hand Gesture Recognition, 3D Convolutional Neural Network (3D CNN), Spatio-Temporal Feature Extraction, Computer Vision, Depth Sensing, Human-Computer Interaction.

Abstract

Automatic hand gesture recognition from camera images is a compelling research area for developing intelligent vision systems. Sign language serves as the primary communication medium for individuals who are unable to speak or hear, enabling them to express their thoughts and emotions effectively. In this work, we propose a novel scheme for sign language recognition aimed at accurately identifying hand gestures. Leveraging computer vision and neural networks, our system detects gestures and converts them into corresponding text output. To tackle this problem, we introduce a 3D convolutional neural network (3D CNN) that automatically extracts discriminative spatio-temporal features from raw video streams without requiring handcrafted feature design. To enhance performance, multi-channel video streams—including color information, depth cues, and body joint positions—are used as input to the 3D CNN, allowing the integration of color, depth, and trajectory information for robust gesture recognition.
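The abstract describes a 3D CNN that consumes multi-channel video clips (color, depth, and joint-position streams) and learns spatio-temporal features end to end. As a rough illustration of this kind of model, the PyTorch sketch below stacks the extra modalities along the channel axis and applies 3D convolutions over frames. The channel count, layer sizes, clip length, and number of gesture classes are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class Gesture3DCNN(nn.Module):
    """Minimal 3D CNN over multi-channel gesture clips.

    Expected input shape: (batch, channels, frames, height, width),
    where the channel axis stacks e.g. RGB, a depth map, and a
    joint-position heat map (assumed layout, for illustration only).
    """

    def __init__(self, in_channels: int = 5, num_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool spatially, keep all frames
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),           # pool over time and space
            nn.AdaptiveAvgPool3d(1),               # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        x = self.features(clip)
        return self.classifier(x.flatten(1))


# Example: a batch of 4 clips, 16 frames of 112x112 pixels, with 5 stacked
# channels (RGB + depth + joint map); class count of 26 is a placeholder.
model = Gesture3DCNN(in_channels=5, num_classes=26)
logits = model(torch.randn(4, 5, 16, 112, 112))
print(logits.shape)  # torch.Size([4, 26])
```

Stacking modalities along the channel axis is only one way to fuse color, depth, and trajectory information; the paper may instead process each stream in a separate branch before fusion.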

Published

29-10-2025

How to Cite

SIGN LANGUAGE RECOGNITION USING CNN AND HAND GESTURES TRACKING. (2025). International Journal of Engineering Research and Science & Technology, 21(4), 341-345. https://doi.org/10.62643/