LEALLA: Learning Lightweight Language-agnostic Sentence Embeddings With Knowledge Distillation

Zhuoyuan Mao, Tetsuji Nakagawa · Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics · 2023

Large-scale language-agnostic sentence embedding models such as LaBSE (Feng et al., 2022) obtain state-of-the-art performance for parallel sentence alignment. However, these large-scale models suffer from slow inference and high computation overhead. This study systematically explores learning language-agnostic sentence embeddings with lightweight models. We demonstrate that a thin-deep encoder can construct robust low-dimensional sentence embeddings for 109 languages. With our proposed distillation methods, we achieve further improvements by incorporating knowledge from a teacher model. Empirical results on Tatoeba, United Nations, and BUCC show the effectiveness of our lightweight models. We release our lightweight language-agnostic sentence embedding models, LEALLA, on TensorFlow Hub.
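The abstract's idea of "incorporating knowledge from a teacher model" can be illustrated with a common embedding-space distillation objective: project the student's low-dimensional sentence embedding up to the teacher's dimension and penalize the mean squared error against the teacher's embedding. This is a minimal NumPy sketch under assumed shapes (128-dim student, 768-dim teacher like LaBSE); the projection matrix and function names are hypothetical, not the paper's exact formulation.

```python
import numpy as np

def feature_distillation_loss(student_emb, teacher_emb, proj):
    """MSE between projected student embeddings and teacher embeddings.

    student_emb: (batch, d_s) low-dimensional student sentence embeddings
    teacher_emb: (batch, d_t) embeddings from a large teacher (e.g. LaBSE)
    proj:        (d_s, d_t) learned projection matrix (hypothetical here)
    """
    projected = student_emb @ proj              # (batch, d_t)
    return float(np.mean((projected - teacher_emb) ** 2))

# Toy example with random vectors standing in for encoder outputs.
rng = np.random.default_rng(0)
student = rng.normal(size=(4, 128))             # 4 sentences, 128-dim student
teacher = rng.normal(size=(4, 768))             # same sentences, 768-dim teacher
W = rng.normal(size=(128, 768)) * 0.01          # projection, normally trained jointly

loss = feature_distillation_loss(student, teacher, W)
print(loss)
```

In training, this term would be minimized alongside the usual translation-ranking objective, pulling the lightweight encoder's representation space toward the teacher's.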
