SignMusketeers: An Efficient Multi-Stream Approach for Sign Language Translation at Scale

Shester Gueuwou, Xiaodan Du, Greg Shakhnarovich, Karen Livescu
Toyota Technological Institute at Chicago
Teaser

Overview of our approach to sign language translation.

We parse every frame of the signing video with off-the-shelf face and hand detectors.
(a) In phase 1 (left), we start from pre-trained DINOv2 visual feature extractors and continue training them with a DINO loss on cropped face boxes and hand boxes, producing two separate DINOv2 encoders (DINOv2-F for the face and DINOv2-H for the hands). This stage is purely self-supervised and uses random individual video frames.
(b) In phase 2 (right), keeping the two pre-trained feature extractors frozen, we add a (learned) feature extractor for coarse body pose estimated by an off-the-shelf method, concatenate and project the per-frame features, and fine-tune a T5 model that maps the resulting sequence of frame features to English text. This stage is supervised with video clips paired with English translations.
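
The following is a minimal sketch of the phase-2 multi-stream fusion, assuming frozen DINOv2 face and hand encoders (ViT-S/14, 384-dim features) and a Hugging Face T5 model; the coarse-pose encoder, all dimensions, and module names (MultiStreamFusion, pose_encoder, proj) are illustrative assumptions rather than the released implementation.

import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5Tokenizer

class MultiStreamFusion(nn.Module):
    """Concatenate per-frame face, hand, and coarse-pose features and
    project them into the T5 embedding space."""
    def __init__(self, face_dim=384, hand_dim=384, pose_dim=64, t5_dim=768):
        super().__init__()
        # Hypothetical coarse body pose: 8 upper-body keypoints, (x, y, confidence) each.
        self.pose_encoder = nn.Linear(3 * 8, pose_dim)
        self.proj = nn.Linear(face_dim + 2 * hand_dim + pose_dim, t5_dim)

    def forward(self, face, left_hand, right_hand, pose):
        # face, left_hand, right_hand: (batch, frames, 384) from the frozen DINOv2-F / DINOv2-H encoders
        # pose: (batch, frames, 24) flattened keypoints from an off-the-shelf tracker
        fused = torch.cat([face, left_hand, right_hand, self.pose_encoder(pose)], dim=-1)
        return self.proj(fused)  # (batch, frames, t5_dim), fed to T5 as inputs_embeds

tokenizer = T5Tokenizer.from_pretrained("t5-base")
t5 = T5ForConditionalGeneration.from_pretrained("t5-base")
fusion = MultiStreamFusion()

B, T = 2, 128                                    # two clips of 128 frames (illustrative)
face = torch.randn(B, T, 384)                    # stand-ins for pre-extracted frame features
lhand, rhand = torch.randn(B, T, 384), torch.randn(B, T, 384)
pose = torch.randn(B, T, 24)

inputs_embeds = fusion(face, lhand, rhand, pose)
labels = tokenizer(["an example english translation", "another target sentence"],
                   return_tensors="pt", padding=True).input_ids
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
loss = t5(inputs_embeds=inputs_embeds, labels=labels).loss  # supervised translation loss
loss.backward()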

Abstract

A persistent challenge in sign language video processing, including the task of sign language to written language translation, is how to learn representations of sign language in an effective and efficient way that preserves the important attributes of these languages while remaining invariant to irrelevant visual differences. Informed by the nature and linguistics of signed languages, our proposed method focuses on just the most relevant parts of a signing video: the face, hands, and body posture of the signer. However, instead of using pose estimation coordinates from off-the-shelf pose tracking models, which have inconsistent performance on hands and faces, we propose to learn the complex handshapes and rich facial expressions of sign languages in a self-supervised fashion. Our approach learns from individual frames (rather than video sequences) and is therefore much more efficient than prior work on sign language pre-training. Compared to a recent model that established a new state of the art in sign language translation on the How2Sign dataset, our approach yields similar translation performance while using less than 3% of the compute.
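
To make the per-frame self-supervised stage concrete, here is a rough sketch of continued DINO-style training of one stream (e.g., the hand encoder) on cropped boxes from individual frames. The torch.hub entry point is the public DINOv2 release; the projection-head size, temperatures, teacher momentum, single-view pairing, and the omission of DINOv2's additional loss terms (centering/Sinkhorn, iBOT, KoLeo, multi-crop) are simplifying assumptions, not the paper's exact recipe.

import copy
import torch
import torch.nn.functional as F

# Start from a pre-trained DINOv2 backbone (ViT-S/14); the face stream is trained
# the same way on face crops.
student = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad = False

head = torch.nn.Linear(384, 4096)        # projection onto prototypes (size is illustrative)
teacher_head = copy.deepcopy(head)
optimizer = torch.optim.AdamW(list(student.parameters()) + list(head.parameters()), lr=1e-4)

def dino_loss(student_logits, teacher_logits, t_student=0.1, t_teacher=0.04):
    # Cross-entropy between the sharpened teacher distribution and the student distribution.
    teacher_probs = F.softmax(teacher_logits / t_teacher, dim=-1).detach()
    return -(teacher_probs * F.log_softmax(student_logits / t_student, dim=-1)).sum(-1).mean()

# One illustrative step on a batch of two augmented views of the same hand crops.
view_a = torch.randn(8, 3, 224, 224)     # stand-ins for augmented 224x224 hand crops
view_b = torch.randn(8, 3, 224, 224)

optimizer.zero_grad()
student_out = head(student(view_a))
with torch.no_grad():
    teacher_out = teacher_head(teacher(view_b))
loss = dino_loss(student_out, teacher_out)
loss.backward()
optimizer.step()

# Momentum (EMA) update of the teacher from the student; 0.996 is illustrative.
with torch.no_grad():
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(0.996).add_(p_s, alpha=0.004)
    for p_t, p_s in zip(teacher_head.parameters(), head.parameters()):
        p_t.mul_(0.996).add_(p_s, alpha=0.004)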

Bubble graph

Comparison of data and computation usage between SignMusketeers (Ours) and Rust et al. Horizontal axis: GPU-hours for the entire training schedule, i.e., self-supervised pre-training plus supervised training. Vertical axis: BLEU score. Bubble size: number of frames (in millions) used during the pre-training stage. Labels: the first line gives the pre-training protocol and the second line the supervised training protocol; the number in parentheses is the number of pre-training epochs.

YT: YouTube-ASL; H2S: How2Sign. X\(\to\)Y means train on X, then fine-tune on Y; X+Y means train on the union X\(\cup\)Y.

BibTeX

@article{gueuwou2025signmusketeers,
      title={{SignMusketeers}: An Efficient Multi-Stream Approach for Sign Language Translation at Scale},
      author={Gueuwou, Shester and Du, Xiaodan and Shakhnarovich, Greg and Livescu, Karen},
      journal={Findings of the Association for Computational Linguistics: ACL 2025},
      year={2025}
}