🤖 AI Summary
Existing large language models (LLMs) underperform on English–Spanish code-switching (CS) text generation, primarily due to the scarcity of authentic, large-scale CS training data.
Method: We propose a data construction pipeline that uses naturally occurring English–Spanish CS sentences as seeds, back-translates them into monolingual English to build a high-quality parallel corpus, and fine-tunes LLMs on these pairs for controllable monolingual-to-CS generation. This departs from traditional grammar-driven paradigms by explicitly modeling the real-world sociolinguistic distribution of CS.
Contribution/Results: A human preference study shows that the method generates markedly more fluent and natural CS text. Notably, popular automatic metrics (BLEU, chrF, COMET) do not correlate with human judgements of CS quality. The generated English–Spanish CS dataset and accompanying code are publicly released, supporting future work on CS modeling and evaluation for multilingual interactive systems.
📝 Abstract
Code-switching (CS) remains a critical challenge in Natural Language Processing (NLP). Current Large Language Models (LLMs) struggle to interpret and generate code-switched text, primarily due to the scarcity of large-scale CS datasets for training. This paper presents a novel methodology to generate CS data using LLMs, and evaluates it on the English-Spanish language pair. We propose back-translating natural CS sentences into monolingual English, and using the resulting parallel corpus to fine-tune LLMs to turn monolingual sentences into CS text. Unlike previous approaches to CS generation, our methodology uses natural CS data as a starting point, allowing models to learn its natural distribution beyond grammatical patterns. We thoroughly analyse the models' performance through a study on human preferences, a qualitative error analysis, and an evaluation with popular automatic metrics. Results show that our methodology generates fluent code-switched text, expanding research opportunities in CS communication, and that traditional metrics do not correlate with human judgement when assessing the quality of the generated CS data. We release our code and generated dataset under a CC-BY-NC-SA license.
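The data-construction idea above can be sketched in a few lines. This is a hypothetical illustration, not the authors' released code: the toy lexicon stands in for a real back-translation (MT) system, and `to_sft_example` assumes a generic prompt/completion fine-tuning format.

```python
# Sketch: pair natural code-switched (CS) sentences with their back-translated
# monolingual English versions, yielding supervised data that teaches an LLM
# monolingual -> CS generation. All names and data here are illustrative.

def back_translate_to_english(cs_sentence: str, lexicon: dict) -> str:
    """Stand-in for a real MT system: map Spanish tokens to English
    via a toy lexicon so the example stays self-contained."""
    return " ".join(lexicon.get(tok.lower(), tok) for tok in cs_sentence.split())

def build_parallel_corpus(cs_sentences, lexicon):
    """Return (monolingual_english, code_switched) training pairs."""
    return [(back_translate_to_english(s, lexicon), s) for s in cs_sentences]

def to_sft_example(mono: str, cs: str) -> dict:
    """Format one pair as an instruction-tuning record."""
    return {
        "prompt": f"Rewrite as natural English-Spanish code-switched text: {mono}",
        "completion": cs,
    }

# Invented seed sentences (naturally occurring CS text in the real pipeline).
seeds = ["I went to the tienda yesterday", "She is muy cansada today"]
lexicon = {"tienda": "store", "muy": "very", "cansada": "tired"}

pairs = build_parallel_corpus(seeds, lexicon)
sft_data = [to_sft_example(mono, cs) for mono, cs in pairs]
```

Note the direction of supervision: the monolingual English side is the model input and the original natural CS sentence is the target, so the fine-tuned model learns the distribution of authentic CS rather than grammar-template output.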