Chinese-Vicuna: A Chinese Instruction-following Llama-based Model

๐Ÿ“… 2025-04-17
๐Ÿ“ˆ Citations: 9
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the weak instruction-following capability and poor domain adaptability of Chinese large language models (LLMs) in low-resource settings, this paper proposes a lightweight, efficient, and domain-extensible Chinese instruction-tuning framework. Built upon the LLaMA architecture, it constructs a high-quality hybrid Chinese instruction dataset by integrating BELLE and Guanaco. A modular LoRA/QLoRA fine-tuning strategy is designed, supporting 4-bit quantization and cross-platform inference on CPU/GPU. The framework introduces a novel multi-turn dialogue state management mechanism and an one-click model conversion tool. Experimental results demonstrate significant performance gains across medical Q&A, legal consultation, code generation, and translation tasksโ€”achieving state-of-the-art results among open-source Chinese 7B models. The optimized model achieves sub-800ms single-turn latency on an RTX-2080Ti, enabling deployment on consumer-grade hardware and rapid adaptation to vertical domains.

Technology Category

Application Category

๐Ÿ“ Abstract
Chinese-Vicuna is an open-source, resource-efficient language model designed to bridge the gap in Chinese instruction-following capabilities by fine-tuning Meta's LLaMA architecture using Low-Rank Adaptation (LoRA). Targeting low-resource environments, it enables cost-effective deployment on consumer GPUs (e.g., RTX-2080Ti for 7B models) and supports domain-specific adaptation in fields like healthcare and law. By integrating hybrid datasets (BELLE and Guanaco) and 4-bit quantization (QLoRA), the model achieves competitive performance in tasks such as translation, code generation, and domain-specific Q&A. The project provides a comprehensive toolkit for model conversion, CPU inference, and multi-turn dialogue interfaces, emphasizing accessibility for researchers and developers. Evaluations indicate competitive performance across medical tasks, multi-turn dialogue coherence, and real-time legal updates. Chinese-Vicuna's modular design, open-source ecosystem, and community-driven enhancements position it as a versatile foundation for Chinese LLM applications.
Problem

Research questions and friction points this paper is trying to address.

Bridging the gap in Chinese instruction-following capabilities
Enabling cost-effective deployment on consumer GPUs
Supporting domain-specific adaptation in healthcare and law
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes LLaMA with LoRA for Chinese instructions
Uses 4-bit QLoRA for efficient resource deployment
Integrates hybrid datasets for domain-specific adaptation
๐Ÿ”Ž Similar Papers
No similar papers found.