🤖 AI Summary
This work addresses the legal paraphrasing of financial consumer complaint texts—transforming informal, colloquial inputs into clear, formal, and domain-precise legal argumentation. We propose the Multi-Scale Model Interaction (MSMI) framework, which synergistically integrates a lightweight discriminator with a large language model (LLM), enabling dynamic, feedback-driven iterative refinement of generated text. An adversarial robustness enhancement mechanism is further incorporated to improve resilience against input perturbations. MSMI supports efficient prompt engineering and achieves significant gains over single-prompt baselines on our curated Chinese financial dispute dataset, FinDR. It also demonstrates strong cross-domain generalization and improved robustness on multiple short-text benchmarks. Key contributions include a fine-grained discriminative–generative feedback loop tailored for legal text generation and a lightweight architectural design optimized for domain-specific legal paraphrasing.
📝 Abstract
Legal writing demands clarity, formality, and domain-specific precision: qualities often lacking in documents authored by individuals without legal training. To bridge this gap, this paper explores the task of legal text refinement, which transforms informal, conversational inputs into persuasive legal arguments. We introduce FinDR, a Chinese dataset of financial dispute records annotated with official judgments on claim reasonableness. Our proposed method, Multi-Scale Model Interaction (MSMI), leverages a lightweight classifier to evaluate outputs and guide iterative refinement by Large Language Models (LLMs). Experimental results demonstrate that MSMI significantly outperforms single-pass prompting strategies. Additionally, we validate the generalizability of MSMI on several short-text benchmarks, showing improved adversarial robustness. Our findings reveal the potential of multi-model collaboration for enhancing legal document generation and broader text refinement tasks.
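The discriminator-guided refinement loop at the core of MSMI can be illustrated with a minimal sketch. Everything here is hypothetical: `score` is a placeholder heuristic standing in for the lightweight classifier, and `refine_with_llm` is a stub standing in for a real LLM call; neither reflects the paper's actual models, prompts, or thresholds.

```python
def score(text: str) -> float:
    """Stub discriminator: a placeholder heuristic that rewards formal
    legal wording. The real MSMI classifier is a trained model."""
    formal_markers = ("pursuant to", "the claimant", "hereby")
    return sum(m in text.lower() for m in formal_markers) / len(formal_markers)

def refine_with_llm(text: str, feedback: str) -> str:
    """Stub LLM call: in practice this would re-prompt a real LLM,
    passing the discriminator's feedback in the prompt."""
    return text + " The claimant hereby requests relief pursuant to the agreement."

def msmi_refine(draft: str, threshold: float = 0.6, max_rounds: int = 5) -> str:
    """Iteratively refine a draft until the discriminator score clears
    the threshold or the round budget is exhausted."""
    current = draft
    for _ in range(max_rounds):
        s = score(current)
        if s >= threshold:
            break  # discriminator accepts the draft
        feedback = f"formality score {s:.2f} is below {threshold}; rewrite formally"
        current = refine_with_llm(current, feedback)
    return current

print(msmi_refine("the bank charged me fees i never agreed to"))
```

The key design point, per the abstract, is that a small, cheap classifier decides *when* to stop and *what feedback* to feed back, so the expensive LLM is only invoked while the output still falls short.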