Analogy Training Multilingual Encoders
Abstract

Language encoders encode words and phrases in ways that capture their local semantic relatedness, but are known to be globally inconsistent. Global inconsistency can seemingly be corrected for, in part, by leveraging signals from knowledge bases, but previous results are partial and limited to monolingual English encoders. We extract a large-scale multilingual, multi-word analogy dataset from Wikidata for diagnosing and correcting for global inconsistencies, and implement a four-way Siamese BERT architecture for grounding mBERT in Wikidata through analogy training. We show that analogy training not only improves the global consistency of mBERT and the isospectrality of its language-specific subspaces, but also leads to overall downstream improvements on XNLI, a multilingual benchmark for natural language inference.
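
To make the training setup concrete, here is a minimal sketch (not the authors' code) of a four-way Siamese arrangement: one shared mBERT encoder embeds all four members of an analogy a : b :: c : d drawn from Wikidata, and a loss pulls the relation offset b - a towards d - c. The checkpoint name, mean pooling, cosine-based offset loss, example quadruple, and helper names are illustrative assumptions; the paper's exact objective may differ.

```python
# Hypothetical sketch of four-way Siamese analogy training with a shared mBERT encoder.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")  # shared across all four inputs

def embed(phrases):
    """Mean-pool mBERT token states into one vector per (multi-word) phrase."""
    batch = tokenizer(phrases, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)      # (B, H)

def analogy_loss(a, b, c, d):
    """Encourage the offsets b - a and d - c to align (cosine distance); an assumed loss."""
    return (1 - F.cosine_similarity(b - a, d - c, dim=-1)).mean()

# One illustrative Wikidata-style quadruple: (France, Paris) :: (Japan, Tokyo).
quad = (["France"], ["Paris"], ["Japan"], ["Tokyo"])
loss = analogy_loss(*(embed(p) for p in quad))
loss.backward()  # gradients flow into the single shared encoder; an optimizer step would follow
```

Because the same encoder processes all four inputs, analogy supervision from Wikidata updates one set of weights, which is what allows the knowledge-base signal to reshape the embedding space globally rather than per input.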

Authors' notes