SIGN LANGUAGE RECOGNITION USING CNN AND HAND GESTURE TRACKING
DOI: https://doi.org/10.62643/

Keywords:
Sign Language Recognition, Hand Gesture Recognition, 3D Convolutional Neural Network (3D CNN), Spatio-Temporal Feature Extraction, Computer Vision, Depth Sensing, Human-Computer Interaction.

Abstract
Automatic hand gesture recognition from camera images is a compelling research area for developing intelligent vision systems. Sign language serves as the primary communication medium for individuals who are unable to speak or hear, enabling physically challenged people to express their thoughts and emotions effectively. In this work, we propose a novel scheme for sign language recognition aimed at accurately identifying hand gestures. Leveraging computer vision and neural networks, our system can detect gestures and convert them into corresponding text output. To tackle this problem, we introduce a 3D convolutional neural network (CNN) that automatically extracts discriminative spatio-temporal features from raw video streams without requiring handcrafted feature design. To enhance performance, multi-channel video streams—including color information, depth cues, and body joint positions—are used as input to the 3D CNN, allowing the integration of color, depth, and trajectory information for robust gesture recognition.
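The core operation the abstract describes is a 3D convolution that slides a kernel over time as well as space, so a single filter responds to motion patterns across frames. As a minimal illustration (not the authors' implementation), the sketch below applies valid-mode 3D convolution in NumPy to a multi-channel video tensor, where the channels stand in for the color, depth, and joint-position streams mentioned above; all names and shapes are assumptions for the example.

```python
import numpy as np

def conv3d(video, kernels):
    """Valid-mode 3D convolution over a multi-channel video.

    video:   (C, T, H, W)  — e.g. C=3 for color, depth, joint-map channels
    kernels: (K, C, kt, kh, kw) — K spatio-temporal filters
    returns: (K, T-kt+1, H-kh+1, W-kw+1) feature maps
    """
    C, T, H, W = video.shape
    K, Ck, kt, kh, kw = kernels.shape
    assert C == Ck, "kernel channel count must match the video"
    To, Ho, Wo = T - kt + 1, H - kh + 1, W - kw + 1
    out = np.zeros((K, To, Ho, Wo))
    for k in range(K):
        for t in range(To):
            for h in range(Ho):
                for w in range(Wo):
                    # A filter sees a small block spanning several frames,
                    # so it can capture motion, not just static appearance.
                    patch = video[:, t:t + kt, h:h + kh, w:w + kw]
                    out[k, t, h, w] = np.sum(patch * kernels[k])
    return out

# Hypothetical input: 3 fused streams, 4 frames of 8x8 images
video = np.random.rand(3, 4, 8, 8)
kernels = np.random.rand(2, 3, 2, 3, 3)   # 2 filters spanning 2 frames
features = conv3d(video, kernels)          # shape (2, 3, 6, 6)
```

In a real system the nested loops would be replaced by an optimized library layer (e.g. a framework's `Conv3d`), but the tensor shapes and the sliding spatio-temporal window are the same idea the 3D CNN in this work exploits.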
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.