🤖 AI Summary
This work addresses the lack of basic conversational capabilities in large language models (LLMs) for extremely low-resource languages such as Tulu. The authors propose a tuning-free prompt engineering approach built on structured prompts that combine grammatical documentation, negative constraints, romanization normalization, and synthetic data generated via self-play. Without any Tulu-specific training data, this method elicits latent linguistic capabilities from LLMs. Experiments across three prominent models show that the approach reduces lexical contamination from 80% to 5% while achieving 85% grammatical accuracy. Negative constraints consistently improve performance by 12–18 percentage points, while the efficacy of grammatical prompting varies with model architecture, yielding gains of 8–22 percentage points. This study provides the first empirical validation of purely prompt-based strategies for extremely low-resource languages.
📝 Abstract
Can large language models converse in languages virtually absent from their training data? We investigate this question through a case study on Tulu, a Dravidian language with over 2 million speakers but minimal digital presence. Rather than fine-tuning an LLM, we examine whether structured prompts alone can elicit basic conversational ability. We systematically tackle the challenges posed by the absence of Tulu training data by combining explicit grammar documentation, negative constraints to suppress high-probability tokens from related languages, romanization standardization, and quality-controlled synthetic data generation via self-play. Evaluated on a manually curated held-out set across three LLMs (Gemini 2.0 Flash, GPT-4o, Llama 3.1 70B) and validated by native speakers, our approach reduces vocabulary contamination from 80% to 5% while achieving 85% grammatical accuracy. Cross-model analysis reveals that negative constraints provide consistent improvements (12–18 percentage points), while the effect of grammar documentation varies by model architecture (8–22 points).
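To make the structured-prompt recipe concrete, here is a minimal sketch of how such a prompt might be assembled. Everything specific in it is an assumption for illustration: the grammar notes, the banned-word list (standing in for high-probability intrusions from a related language such as Kannada), and the template are placeholders, not the paper's actual materials.

```python
# Sketch of a structured prompt combining grammar documentation,
# negative constraints, and a romanization instruction.
# All Tulu/Kannada examples below are illustrative placeholders.

GRAMMAR_NOTES = """\
- Tulu is verb-final (SOV); keep the verb at the end of the sentence.
- Use Tulu pronouns, not their Kannada equivalents."""

# Negative constraints: plausible intrusions from a related language
# that the model is told to avoid (hypothetical examples).
BANNED_WORDS = ["naanu", "neenu", "idu"]

def build_prompt(user_message: str) -> str:
    """Assemble one structured prompt from its components."""
    negative = ", ".join(BANNED_WORDS)
    return (
        "You are a Tulu conversation partner.\n"
        f"Grammar notes:\n{GRAMMAR_NOTES}\n"
        f"Never use these non-Tulu words: {negative}.\n"
        "Write Tulu in one consistent romanization scheme throughout.\n"
        f"User: {user_message}\n"
        "Assistant:"
    )

prompt = build_prompt("How are you today?")
```

The point of the structure is that each failure mode named in the abstract (vocabulary contamination, inconsistent romanization, ungrammatical output) gets its own explicit section of the prompt, so ablating one component is a single-line change.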