🤖 AI Summary
Nawatl (Nahuatl), a national language of Mexico, suffers from a severe scarcity of digital resources and lacks grammatically correct, LLM-trainable corpora. Method: We propose a dual context-free grammar (CFG)-driven syntactic generation framework to automatically construct large-scale, high-quality artificial sentence corpora designed specifically for learning non-contextual word and sentence embeddings. Our approach integrates linguistic constraints with rule-based generation to substantially improve grammatical accuracy and structural coverage. Results: Embedding models trained on our generated corpus achieve substantial gains on semantic similarity tasks over baselines trained on the original corpus alone; even lightweight non-contextual embeddings outperform several mainstream large language models. This work establishes a reproducible, scalable paradigm for embedding learning in low-resource π-languages.
📝 Abstract
The aim of this article is to introduce two Context-Free Grammars (CFGs) for Nawatl corpora expansion. Nawatl is an Amerindian language (a national language of Mexico) of the $π$-language type, i.e. a language with few digital resources. For this reason, the corpora available for training Large Language Models (LLMs) are virtually non-existent, posing a significant challenge. The goal is to produce a substantial number of syntactically valid artificial Nawatl sentences and thereby to expand the corpora available for learning non-contextual embeddings. To this end, we introduce two new Nawatl CFGs and use them in generative mode. Using these grammars, it is possible to expand the Nawatl corpus significantly and subsequently to use it to learn embeddings and to evaluate their relevance on a sentence semantic similarity task. The results show an improvement over the results obtained using only the original corpus without artificial expansion, and also demonstrate that economical (lightweight) embeddings often perform better than some LLMs.
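To make the idea of using a CFG "in generative mode" concrete, here is a minimal sketch in Python. The grammar and lexicon below are illustrative toys, not the paper's actual Nawatl grammars: non-terminals map to alternative right-hand sides, and a sentence is produced by recursively expanding the start symbol with randomly chosen productions, so every output is syntactically valid by construction.

```python
import random

# Toy grammar for illustration only (NOT the paper's Nawatl CFGs).
# Each non-terminal maps to a list of alternative right-hand sides.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["in"]],                     # hypothetical lexicon entries
    "N":   [["kali"], ["siwatl"]],
    "V":   [["kochi"], ["kita"]],
}

def generate(symbol="S", rng=random):
    """Expand a symbol by recursively choosing productions at random.

    Symbols absent from GRAMMAR are terminals and are emitted as-is.
    """
    if symbol not in GRAMMAR:
        return [symbol]
    rhs = rng.choice(GRAMMAR[symbol])
    words = []
    for sym in rhs:
        words.extend(generate(sym, rng))
    return words

# Sample a small artificial corpus of distinct sentences.
corpus = {" ".join(generate()) for _ in range(100)}
```

In a real pipeline the grammar would encode the linguistic constraints described above (agreement, valid word order), and the sampled sentences would then feed a standard embedding trainer.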