A persistent challenge in sign language video processing, including the task of sign language to written language translation, is how to learn representations of sign language effectively and efficiently, in a way that preserves the important attributes of these languages while remaining invariant to irrelevant visual differences. Informed by the nature and linguistics of signed languages, our proposed method focuses on just the most relevant parts of a signing video: the face, hands, and body posture of the signer. However, instead of using pose estimation coordinates from off-the-shelf pose tracking models, which perform inconsistently on hands and faces, we propose to learn the complex handshapes and rich facial expressions of sign languages in a self-supervised fashion. Our approach is based on learning from individual frames (rather than video sequences) and is therefore much more efficient than prior work on sign language pre-training. Compared to a recent model that established a new state of the art in sign language translation on the How2Sign dataset, our approach yields similar translation performance with less than 3% of the compute.
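As a rough illustration of the multi-stream, frame-level idea described above, the sketch below crops the signer's face and hands from a single frame, encodes each crop with an image encoder (standing in for one pre-trained frame-by-frame in a self-supervised fashion), and fuses the streams with a coarse body-posture embedding into one feature vector per frame. All names, dimensions, and the toy CNN here (`MultiStreamFrameEncoder`, `crop_region`, the hand-picked boxes) are hypothetical placeholders, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def crop_region(frame: torch.Tensor, box: tuple[int, int, int, int],
                size: int = 112) -> torch.Tensor:
    """Crop (x1, y1, x2, y2) from a (C, H, W) frame and resize to size x size."""
    x1, y1, x2, y2 = box
    crop = frame[:, y1:y2, x1:x2].unsqueeze(0)
    return F.interpolate(crop, size=(size, size), mode="bilinear",
                         align_corners=False).squeeze(0)


class MultiStreamFrameEncoder(nn.Module):
    """One feature vector per frame from face, hand, and body-pose streams."""

    def __init__(self, d_model: int = 256, pose_dim: int = 2 * 9):
        super().__init__()
        # Toy stand-in for a frame-level self-supervised image encoder
        # (e.g., one trained with a DINO-style objective on signing frames).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        self.pose_proj = nn.Linear(pose_dim, d_model)  # coarse upper-body posture
        self.fuse = nn.Linear(4 * d_model, d_model)    # face + two hands + pose

    def forward(self, face, left_hand, right_hand, body_pose):
        streams = [self.image_encoder(face),
                   self.image_encoder(left_hand),
                   self.image_encoder(right_hand),
                   self.pose_proj(body_pose)]
        return self.fuse(torch.cat(streams, dim=-1))


# Usage: in practice the crop boxes would come from a detector or pose
# tracker; here they are fabricated for a random 480x640 frame.
frame = torch.rand(3, 480, 640)
face = crop_region(frame, (260, 60, 380, 180)).unsqueeze(0)
lhand = crop_region(frame, (150, 250, 260, 360)).unsqueeze(0)
rhand = crop_region(frame, (380, 250, 490, 360)).unsqueeze(0)
pose = torch.rand(1, 18)  # e.g., 9 upper-body keypoints, (x, y) each

feat = MultiStreamFrameEncoder()(face, lhand, rhand, pose)
print(feat.shape)  # torch.Size([1, 256])
```

The per-frame features produced this way would then be consumed by a downstream translation model; only the frame-level encoding step is sketched here.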
Comparison of data and computation usage between SignMusketeers (Ours) and Rust et al. Horizontal axis: GPU-hours for the entire training schedule, i.e., self-supervised plus supervised training. Vertical axis: BLEU score. Bubble size: number of frames (in millions) used during the pre-training stage. Labels: the first line gives the pre-training protocol and the second line the supervised training protocol; the number in parentheses is the number of pre-training epochs.
YT: YouTube-ASL; H2S: How2Sign. X\(\to\)Y means train on X, then fine-tune on Y; X+Y means train on the union X\(\cup\)Y.
@inproceedings{gueuwou2025signmusketeers,
title={{SignMusketeers}: An Efficient Multi-Stream Approach for Sign Language Translation at Scale},
author={Gueuwou, Shester and Du, Xiaodan and Shakhnarovich, Greg and Livescu, Karen},
booktitle={Findings of the Association for Computational Linguistics: ACL 2025},
year={2025}
}