🤖 AI Summary
To address the weak instruction-following capability and poor domain adaptability of Chinese large language models (LLMs) in low-resource settings, this paper proposes a lightweight, efficient, and domain-extensible Chinese instruction-tuning framework. Built on the LLaMA architecture, it constructs a high-quality hybrid Chinese instruction dataset by merging BELLE and Guanaco. A modular LoRA/QLoRA fine-tuning strategy supports 4-bit quantization and cross-platform inference on both CPU and GPU. The framework also introduces a multi-turn dialogue state management mechanism and a one-click model conversion tool. Experiments show significant performance gains across medical Q&A, legal consultation, code generation, and translation tasks, achieving state-of-the-art results among open-source Chinese 7B models. The optimized model reaches sub-800 ms single-turn latency on an RTX-2080Ti, enabling deployment on consumer-grade hardware and rapid adaptation to vertical domains.
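The efficiency claim rests on LoRA's core idea: the pretrained weight stays frozen, and only a low-rank update is trained. The following is a minimal NumPy sketch of that update (hypothetical dimensions, not the project's actual training code); the `alpha`/`r` scaling and zero-initialized `B` mirror the standard LoRA formulation.

```python
import numpy as np

# Sketch of the low-rank update behind LoRA: y = W x + (alpha/r) * B (A x).
# W is the frozen pretrained weight; only A and B are trained.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    # With B = 0 at initialization, the output equals the base model's.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # identical before training

# Only r*(d_in + d_out) parameters are trained instead of d_in*d_out.
trainable = A.size + B.size
print(trainable, W.size)  # → 1024 4096
```

Because `B` starts at zero, fine-tuning departs smoothly from the pretrained model, and the trained `(alpha/r) * B @ A` product can later be merged back into `W` for zero-overhead inference.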
📝 Abstract
Chinese-Vicuna is an open-source, resource-efficient language model designed to bridge the gap in Chinese instruction-following capabilities by fine-tuning Meta's LLaMA architecture using Low-Rank Adaptation (LoRA). Targeting low-resource environments, it enables cost-effective deployment on consumer GPUs (e.g., an RTX-2080Ti for 7B models) and supports domain-specific adaptation in fields such as healthcare and law. By integrating hybrid datasets (BELLE and Guanaco) and 4-bit quantization (QLoRA), the model achieves competitive performance in tasks such as translation, code generation, and domain-specific Q&A. The project provides a comprehensive toolkit for model conversion, CPU inference, and multi-turn dialogue interfaces, emphasizing accessibility for researchers and developers. Evaluations indicate competitive performance on medical tasks, coherent multi-turn dialogue, and timely incorporation of legal updates. Chinese-Vicuna's modular design, open-source ecosystem, and community-driven enhancements position it as a versatile foundation for Chinese LLM applications.
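The 4-bit quantization that makes consumer-GPU deployment feasible can be illustrated with a simplified block-wise absmax scheme. Note this is an illustrative sketch only: QLoRA actually uses the NF4 data type with double quantization, not the plain symmetric integer mapping shown here.

```python
import numpy as np

# Simplified block-wise 4-bit absmax quantization. Each block of weights is
# scaled so its largest magnitude maps to the signed 4-bit range [-7, 7].
# QLoRA's real NF4 format is more sophisticated; this only shows the
# memory/precision trade-off.
def quantize_4bit(w, block=64):
    flat = w.reshape(-1, block)
    scale = np.abs(flat).max(axis=1, keepdims=True) / 7.0  # one scale per block
    q = np.clip(np.round(flat / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale, shape):
    # Reconstruct approximate float weights from 4-bit codes and block scales.
    return (q * scale).reshape(shape).astype(np.float32)

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 64)).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s, w.shape)
err = np.abs(w - w_hat).max()  # worst-case round-off, bounded by scale / 2
```

Storing `q` in 4 bits (plus one scale per 64-weight block) cuts weight memory by roughly 8x versus float32, at the cost of a small, bounded reconstruction error per block.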