Cross-lingual sentence embedding using multi-task learning
Date
2021-11-07
Author
Goswami, Koustava
Dutta, Sourav
Assem, Haytham
Fransen, Theodorus
McCrae, John P.
Recommended Citation
Goswami, Koustava, Dutta, Sourav, Assem, Haytham, Fransen, Theodorus, & McCrae, John P. (2021). Cross-lingual sentence embedding using multi-task learning. Paper presented at the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), Online and Punta Cana, Dominican Republic, 7–11 November 2021. https://dx.doi.org/10.18653/v1/2021.emnlp-main.716
Published Version
Abstract
Multilingual sentence embeddings capture rich semantic information not only for measuring similarity between texts but also for catering to a broad range of downstream cross-lingual NLP tasks. State-of-the-art multilingual sentence embedding models require large parallel corpora to learn efficiently, which confines the scope of these models. In this paper, we propose a novel sentence embedding framework based on an unsupervised loss function for generating effective multilingual sentence embeddings, eliminating the need for parallel corpora. We capture semantic similarity and relatedness between sentences using a multi-task loss function for training a dual encoder model mapping different languages onto the same vector space. We demonstrate the efficacy of an unsupervised as well as a weakly supervised variant of our framework on STS, BUCC and Tatoeba benchmark tasks. The proposed unsupervised sentence embedding framework outperforms even supervised state-of-the-art methods for certain under-resourced languages on the Tatoeba dataset and on a monolingual benchmark. Further, we show enhanced zero-shot learning capabilities for more than 30 languages, with the model being trained on only 13 languages. Our model can be extended to a wide range of languages from any language family, as it overcomes the requirement of parallel corpora for training.
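The abstract describes a dual encoder that maps sentences from different languages into a shared vector space, trained with a multi-task loss combining semantic similarity and relatedness. The following is a minimal NumPy sketch of that general structure only; the projection dimensions, the two task heads, the pseudo-targets, and the loss weight `alpha` are all illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one linear projection per language into a shared
# space, plus two task-specific heads (similarity and relatedness).
# All sizes are illustrative, not taken from the paper.
D_IN, D_SHARED = 300, 128
W_en = rng.standard_normal((D_IN, D_SHARED)) * 0.01   # encoder, language A
W_de = rng.standard_normal((D_IN, D_SHARED)) * 0.01   # encoder, language B
H_sim = rng.standard_normal((D_SHARED, D_SHARED)) * 0.01  # similarity head
H_rel = rng.standard_normal((D_SHARED, D_SHARED)) * 0.01  # relatedness head

def encode(x, W):
    """Project a sentence feature vector into the shared space, L2-normalised."""
    z = x @ W
    return z / np.linalg.norm(z)

def cosine(a, b):
    """Cosine similarity of two vectors (not assumed unit-norm)."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def multi_task_loss(z_a, z_b, target_sim, target_rel, alpha=0.5):
    """Weighted sum of a similarity task loss and a relatedness task loss.

    `target_sim` / `target_rel` stand in for pseudo-targets that would be
    derived without parallel data; `alpha` balances the two tasks.
    """
    sim_loss = (cosine(z_a @ H_sim, z_b @ H_sim) - target_sim) ** 2
    rel_loss = (cosine(z_a @ H_rel, z_b @ H_rel) - target_rel) ** 2
    return alpha * sim_loss + (1.0 - alpha) * rel_loss

# Two sentences, one per language, as placeholder feature vectors.
x_en = rng.standard_normal(D_IN)
x_de = rng.standard_normal(D_IN)
z_en, z_de = encode(x_en, W_en), encode(x_de, W_de)
loss = multi_task_loss(z_en, z_de, target_sim=0.8, target_rel=0.6)
```

Because both encoders land in the same normalised space, cross-lingual retrieval (as in the BUCC and Tatoeba tasks) reduces to nearest-neighbour search over these embeddings.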