🤖 AI Summary
Systematic validation of instruction-following and human preference alignment for large language models (LLMs) in low-resource languages such as Finnish remains lacking.
Method: We propose an English–Finnish bilingual fine-tuning approach: a multilingual LLM translates English instruction and preference datasets to construct a high-quality Finnish instruction-following and preference dataset (a few hundred samples), followed by bilingual supervised fine-tuning (SFT) and preference optimization.
Contribution/Results: We show empirically that a very small amount of Finnish data suffices to reach instruction-following performance competitive with high-resource baselines. While English-only preference optimization transfers partially across languages, the best results come from joint training on preference data in both languages. We open-source the Poro-34B-chat model, the bilingual datasets, and the full training recipe, providing a reproducible methodology for alignment research in low-resource languages.
📝 Abstract
As LLMs gain popularity as chatbots and general assistants, methods have been developed to enable LLMs to follow instructions and align with human preferences. These methods have found success in the field, but their effectiveness has not been demonstrated outside of high-resource languages. In this work, we discuss our experiences in post-training an LLM for instruction-following in English and Finnish. We use a multilingual LLM to translate instruction and preference datasets from English to Finnish. We perform instruction tuning and preference optimization in English and Finnish and evaluate the instruction-following capabilities of the model in both languages. Our results show that with a few hundred Finnish instruction samples we can obtain competitive performance in Finnish instruction-following. We also found that although preference optimization in English offers some cross-lingual benefits, we obtain our best results by using preference data from both languages. We release our model, datasets, and recipes under open licenses at https://huggingface.co/LumiOpen/Poro-34B-chat-OpenAssistant.
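The preference-optimization step above pairs a preferred and a rejected response for each prompt and trains the model to widen the gap between them relative to a frozen reference model. The abstract does not spell out the exact objective used, so the following is a minimal sketch of the standard per-example Direct Preference Optimization (DPO) loss on toy log-probabilities, not the authors' implementation; all function and variable names here are illustrative.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is the summed log-probability of the chosen or rejected
    response under the trainable policy or the frozen reference model.
    """
    logits = ((policy_chosen_logp - ref_chosen_logp)
              - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(x)) == log(1 + exp(-x)); log1p keeps small values accurate.
    return math.log1p(math.exp(-beta * logits))

# When the policy matches the reference, the margin is zero and the
# loss is -log(0.5) = ln 2, the starting point of training.
print(dpo_loss(-1.0, -2.0, -1.0, -2.0))  # ≈ 0.6931
```

A larger policy margin over the reference (the chosen response gaining probability relative to the rejected one) drives the loss below ln 2; in the bilingual setting described here, each batch would simply mix English and Finnish preference pairs through the same objective.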