Steering Large Language Models for Machine Translation Personalization

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge that large language models (LLMs) struggle to adhere to implicit stylistic constraints in low-resource literary translation, this paper proposes a personalized translation method that combines inference-time intervention with multi-example prompting. Methodologically, the authors introduce a contrastive framework built on sparse autoencoders, whose analysis suggests that multi-example prompting and explicit model steering act on personalization through a similar mechanism at the representation level. They further identify style-sensitive transformer layers, enabling interpretable and transferable stylistic control. Experiments show that the approach substantially improves stylistic consistency while preserving translation quality, offering a practical route to implicit style modeling and controllable generation in low-resource settings.
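The contrastive steering idea can be illustrated with a minimal toy sketch: take hidden activations from personalized versus generic translations, form a mean-difference direction, and add it (scaled) to a hidden state at inference time. All names here (`steering_vector`, `steer`, `alpha`) and the plain mean-difference construction are illustrative assumptions; the paper's actual method works with latent concepts extracted from sparse autoencoders, not raw activations.

```python
# Toy sketch of contrastive activation steering (hypothetical names;
# the paper uses sparse-autoencoder latents rather than raw activations).

def mean_vector(rows):
    """Element-wise mean of a list of equal-length activation vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def steering_vector(personalized_acts, generic_acts):
    """Contrastive direction: mean(personalized) - mean(generic)."""
    p = mean_vector(personalized_acts)
    g = mean_vector(generic_acts)
    return [a - b for a, b in zip(p, g)]

def steer(hidden, vec, alpha=1.0):
    """Add the scaled steering direction to a hidden state at inference time."""
    return [h + alpha * v for h, v in zip(hidden, vec)]

# Toy activations in a 3-dimensional hidden space.
pers = [[1.0, 0.0, 2.0], [3.0, 0.0, 2.0]]
gen = [[0.0, 0.0, 1.0], [2.0, 0.0, 1.0]]
v = steering_vector(pers, gen)            # → [1.0, 0.0, 1.0]
h = steer([0.5, 0.5, 0.5], v, alpha=2.0)  # → [2.5, 0.5, 2.5]
```

In a real setup, `steer` would run inside a forward hook on a chosen transformer layer, and `alpha` would trade off personalization strength against translation quality.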

📝 Abstract
High-quality machine translation systems based on large language models (LLMs) have simplified the production of personalized translations reflecting specific stylistic constraints. However, these systems still struggle in settings where stylistic requirements are less explicit and might be harder to convey via prompting. We explore various strategies for personalizing LLM-generated translations in low-resource settings, focusing on the challenging literary translation domain. We explore prompting strategies and inference-time interventions for steering model generations towards a personalized style, and propose a contrastive framework exploiting latent concepts extracted from sparse autoencoders to identify salient personalization properties. Our results show that steering achieves strong personalization while preserving translation quality. We further examine the impact of steering on LLM representations, finding that model layers relevant for personalization are affected similarly by multi-shot prompting and our steering method, suggesting a similar mechanism at play.
Problem

Research questions and friction points this paper is trying to address.

Personalizing LLM translations with implicit stylistic constraints
Enhancing literary translation in low-resource settings
Steering model outputs using latent concepts for style preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompting strategies for personalized LLM translations
Contrastive framework using sparse autoencoder latent concepts
Inference-time steering of model generations that preserves translation quality
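One way to picture how style-sensitive layers might be identified: score each layer by the magnitude of the contrast between personalized and generic activations, and pick the layer where the contrast is largest. This is a hedged sketch under assumed names (`layer_contrast`, `most_style_sensitive_layer`); the paper's layer analysis is based on sparse-autoencoder concepts, not this raw-activation heuristic.

```python
# Hypothetical layer-selection heuristic: rank layers by the L2 norm of
# the mean activation difference between personalized and generic examples.
import math

def l2(vec):
    return math.sqrt(sum(x * x for x in vec))

def layer_contrast(pers_acts, gen_acts):
    """Norm of the mean activation difference at one layer."""
    mp = [sum(c) / len(pers_acts) for c in zip(*pers_acts)]
    mg = [sum(c) / len(gen_acts) for c in zip(*gen_acts)]
    return l2([a - b for a, b in zip(mp, mg)])

def most_style_sensitive_layer(pers_by_layer, gen_by_layer):
    """Index of the layer with the largest personalized/generic contrast."""
    scores = [layer_contrast(p, g) for p, g in zip(pers_by_layer, gen_by_layer)]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy model: 3 layers, 2 examples per condition, hidden dimension 2.
pers_by_layer = [
    [[1.0, 0.0], [1.0, 0.0]],  # layer 0
    [[3.0, 0.0], [3.0, 0.0]],  # layer 1: largest contrast
    [[1.5, 0.0], [1.5, 0.0]],  # layer 2
]
gen_by_layer = [
    [[0.5, 0.0], [0.5, 0.0]],
    [[0.0, 0.0], [0.0, 0.0]],
    [[1.0, 0.0], [1.0, 0.0]],
]
best = most_style_sensitive_layer(pers_by_layer, gen_by_layer)  # → 1
```

Steering only at such a layer is what would make the intervention interpretable and cheap relative to full fine-tuning.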