🤖 AI Summary
To address the weak representational capacity of concept embeddings in cross-lingual and low-resource settings—particularly their inability to capture fine-grained semantic relations—this paper introduces *partial colexification* into concept modeling for the first time, moving beyond conventional whole-word co-occurrence assumptions to enable subword-level semantic association modeling. We construct a concept graph based on partial colexification patterns and jointly optimize the embedding space using graph neural networks and contrastive learning. Evaluation employs a multi-task framework integrating lexical similarity, semantic change detection, and word association prediction. Experiments demonstrate consistent improvements over strong baselines: +12.7% Pearson correlation on cross-lingual concept similarity prediction, +9.3% accuracy on semantic drift identification, and +8.5% F1 score on word association matching. The approach significantly enhances discriminability among diverse semantic relations between concepts and improves cross-lingual transferability of concept representations.
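The pipeline described above — a weighted concept graph whose edges are (partial) colexifications attested across languages, embedded with a contrastive objective — can be illustrated with a minimal sketch. All concept pairs, edge weights, and hyperparameters below are invented for illustration; the paper's actual graph construction and training setup are not reproduced here.

```python
# Hypothetical sketch: build a small concept graph from colexification
# counts and learn embeddings with a simple contrastive (negative-sampling)
# objective. Edges, weights, and hyperparameters are all illustrative.
import math
import random

random.seed(0)

# (concept_a, concept_b, weight): weight stands in for the number of
# languages attesting the (partial) colexification.
edges = [
    ("ARM", "HAND", 40), ("ARM", "BRANCH", 12),
    ("MOON", "MONTH", 35), ("SUN", "DAY", 20),
    ("TREE", "WOOD", 30), ("WOOD", "FOREST", 15),
]
concepts = sorted({c for a, b, _ in edges for c in (a, b)})
idx = {c: i for i, c in enumerate(concepts)}

dim = 8
emb = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in concepts]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def sgd_step(i, j, label, lr=0.05):
    # Logistic loss on the dot product: pull attested pairs together
    # (label 1), push random negative pairs apart (label 0).
    s = 1.0 / (1.0 + math.exp(-dot(emb[i], emb[j])))
    g = (s - label) * lr
    for d in range(dim):
        ei, ej = emb[i][d], emb[j][d]
        emb[i][d] -= g * ej
        emb[j][d] -= g * ei

for epoch in range(200):
    for a, b, w in edges:
        i, j = idx[a], idx[b]
        # Scale the update by the attestation weight of the edge.
        sgd_step(i, j, 1.0, lr=0.05 * w / 40)
        k = random.randrange(len(concepts))
        if k not in (i, j):
            sgd_step(i, k, 0.0)  # negative sample

def sim(a, b):
    u, v = emb[idx[a]], emb[idx[b]]
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

# Frequently colexified concepts should end up closer than unrelated ones.
print(sim("ARM", "HAND"), sim("ARM", "MOON"))
```

After training, `sim("ARM", "HAND")` should exceed `sim("ARM", "MOON")`, since the former pair is connected by a heavily weighted edge while the latter only ever meets as a negative sample.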
📝 Abstract
While the embedding of words has revolutionized the field of Natural Language Processing, the embedding of concepts has received much less attention so far. A dense and meaningful representation of concepts, however, could prove useful for several tasks in computational linguistics, especially those involving cross-linguistic data or sparse data from low-resource languages. The first methods proposed so far embed concepts from automatically constructed colexification networks. While these approaches build on automatically inferred polysemies attested across a large number of languages, they are restricted to the word level, ignoring lexical relations that hold only for parts of the words in a given language. Building on recently introduced methods for the inference of partial colexifications, we show how they can be used to improve concept embeddings in meaningful ways. The learned embeddings are evaluated against lexical similarity ratings, recorded instances of semantic shift, and word association data. We show that in all evaluation tasks, the inclusion of partial colexifications leads to improved concept representations and better results. Our results further show that the learned embeddings are able to capture and represent different semantic relationships between concepts.
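The first evaluation mentioned above — comparing learned concept embeddings against lexical similarity ratings — typically correlates embedding similarities with human judgments. A minimal sketch under assumed toy data (the embeddings, concept pairs, and ratings below are invented, and the rank correlation is implemented by hand without tie handling):

```python
# Hypothetical sketch: evaluate concept embeddings against human similarity
# ratings via Spearman rank correlation. All data here are toy values.
import math

# Toy concept embeddings (in practice, learned from colexification graphs).
emb = {
    "CAR":   [0.9, 0.1, 0.0],
    "TRUCK": [0.8, 0.2, 0.1],
    "BIRD":  [0.1, 0.9, 0.2],
    "PLANE": [0.5, 0.6, 0.1],
}

# Toy human similarity ratings on a 0-10 scale.
ratings = {("CAR", "TRUCK"): 8.5, ("BIRD", "PLANE"): 4.0,
           ("CAR", "BIRD"): 1.0, ("TRUCK", "PLANE"): 3.0}

def cosine(u, v):
    num = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return num / (nu * nv)

def spearman(xs, ys):
    # Rank-based correlation; no tie handling (fine for this toy example).
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

pairs = sorted(ratings)
pred = [cosine(emb[a], emb[b]) for a, b in pairs]
gold = [ratings[p] for p in pairs]
print(round(spearman(pred, gold), 3))  # → 1.0 on this toy data
```

On this hand-built example the predicted and gold rankings agree perfectly, so the correlation is 1.0; on real rating datasets the score measures how faithfully the embedding space reflects graded human similarity judgments.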