Sign language is a vital communication method for individuals with hearing impairments, using hand movements and gestures to convey meaning, and has for centuries enabled people with hearing and speech disabilities to interact. Despite this long history, many members of society cannot interpret these signs, creating a communication barrier with the deaf and mute community. This paper proposes a deep learning-based system designed to recognize Vietnamese Sign Language (VSL) gestures. The dataset developed for this work includes 23 alphabet signs and 2 accent marks unique to VSL; 22 of the alphabet signs resemble their English counterparts. The proposed system achieves an accuracy exceeding 91% on the raw dataset and 95% on the processed dataset.
Keywords
Vietnamese Sign Language, MediaPipe, keypoint, image processing, deep learning.
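The abstract does not give implementation details, but the keywords suggest a pipeline in which hand keypoints (e.g. the 21 landmarks produced by MediaPipe Hands) are processed before classification. A minimal sketch of one plausible processing step is shown below; the normalization scheme (wrist-origin translation plus scale normalization) is purely an illustrative assumption about how the "processed dataset" might differ from the raw one, not the authors' actual method.

```python
# Illustrative sketch (not the paper's code): normalizing hand keypoints,
# such as the 21 landmarks from MediaPipe Hands, before feeding them to a
# deep learning classifier. The specific normalization is an assumption.

def normalize_keypoints(points):
    """points: list of (x, y) tuples for hand landmarks,
    with the wrist conventionally at index 0.
    Returns translation- and scale-invariant coordinates."""
    # Translate so the wrist landmark becomes the origin.
    wx, wy = points[0]
    shifted = [(x - wx, y - wy) for x, y in points]
    # Scale by the largest absolute coordinate so values lie in [-1, 1],
    # making the features independent of hand size and camera distance.
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]
```

Such invariance to hand position and size is a common reason keypoint-based features outperform raw images for gesture recognition.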