Sign Language Recognition


Sign language recognition is a computer vision and natural language processing task: automatically recognizing sign language gestures and translating them into written or spoken language. The goal is to develop algorithms that understand and interpret signing, making it easier for people who use sign language as their primary mode of communication to communicate with non-signers.
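As a concrete illustration of the recognition side of the task, here is a minimal sketch of one classical approach to isolated sign recognition: matching pose-keypoint trajectories against labeled templates with dynamic time warping (DTW) and a nearest-neighbor rule. The gestures and labels below are toy data for illustration, not from any of the papers listed here.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two keypoint
    sequences of shape (frames, features)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(query, templates):
    """1-nearest-neighbor over (label, sequence) templates."""
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]

# Toy "signs": an oscillating wrist trajectory vs. a circular one
t = np.linspace(0, 2 * np.pi, 30)
wave = np.stack([t, np.sin(t)], axis=1)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
templates = [("WAVE", wave), ("CIRCLE", circle)]

# A slower performance of the same wave gesture still matches,
# because DTW aligns sequences of different lengths
s = np.linspace(0, 2 * np.pi, 45)
query = np.stack([s, np.sin(s)], axis=1)
print(classify(query, templates))  # WAVE
```

Real systems typically replace the toy trajectories with hand and body landmarks from a pose estimator (as in the MediaPipe Holistic paper below) and the nearest-neighbor matcher with a learned sequence model, but the template-matching view captures what "recognition" means here: mapping a motion trajectory to a sign label.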

Cross-domain Few-shot In-context Learning for Enhancing Traffic Sign Recognition

Jul 08, 2024

FSboard: Over 3 million characters of ASL fingerspelling collected via smartphones

Jul 22, 2024

CorrNet+: Sign Language Recognition and Translation via Spatial-Temporal Correlation

Apr 17, 2024

Optimizing Hand Region Detection in MediaPipe Holistic Full-Body Pose Estimation to Improve Accuracy and Avoid Downstream Errors

May 06, 2024

Transfer Learning for Cross-dataset Isolated Sign Language Recognition in Under-Resourced Datasets

Mar 21, 2024

An Advanced Deep Learning Based Three-Stream Hybrid Model for Dynamic Hand Gesture Recognition

Aug 15, 2024

Improving Continuous Sign Language Recognition with Adapted Image Models

Apr 12, 2024

A Hong Kong Sign Language Corpus Collected from Sign-interpreted TV News

May 02, 2024

TCNet: Continuous Sign Language Recognition from Trajectories and Correlated Regions

Mar 18, 2024

Dynamic Spatial-Temporal Aggregation for Skeleton-Aware Sign Language Recognition

Mar 19, 2024