🤖 AI Summary
This work addresses the NeurIPS 2023 Large Language Model Efficiency Challenge by efficiently fine-tuning the 70B-parameter LLaMA-2 model within the constraints of a single A100 40GB GPU and a 24-hour time limit. By integrating QLoRA with FlashAttention-2, iteratively curating an instruction dataset from open-source resources, and systematically tuning LoRA hyperparameters, the approach substantially reduces both memory footprint and computational demand. Despite these stringent efficiency constraints, the method maintains high accuracy across multiple question-answering benchmarks, achieving a strong balance between model performance and computational efficiency.
📝 Abstract
The rapid evolution of Large Language Models (LLMs) has significantly impacted the field of natural language processing, but their growing complexity raises concerns about resource usage and transparency. Addressing these challenges, we participated in the NeurIPS LLM Efficiency Challenge, aiming to fine-tune a foundation model within stringent constraints. Our focus was the LLaMA-2 70-billion-parameter model, optimized on a single A100 40GB GPU within a 24-hour limit. Our methodology hinged on a custom dataset, carefully assembled from diverse open-source resources and benchmark tests, aligned with the challenge's open-source ethos. Our approach leveraged Quantized Low-Rank Adaptation (QLoRA) fine-tuning, integrated with advanced attention mechanisms such as FlashAttention-2. We experimented with various configurations of the LoRA technique, optimizing the balance between computational efficiency and model accuracy. Our fine-tuning strategy was underpinned by the creation and iterative testing of multiple dataset compositions, leading to the selection of a version that demonstrated robust performance across diverse tasks and benchmarks. The culmination of our efforts was an efficiently fine-tuned LLaMA-2 70B model that operated within the constraints of a single GPU, showing not only a significant reduction in resource utilization but also high accuracy across a range of QA benchmarks. Our study serves as a testament to the feasibility of optimizing large-scale models in resource-constrained environments, emphasizing the potential of LLMs in real-world applications.
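To give a sense of why LoRA-style adaptation makes 70B-scale fine-tuning tractable on one GPU, the sketch below (our own illustration, not the authors' code) counts trainable parameters for a single low-rank adapter pair. The hidden size 8192 matches LLaMA-2 70B; the rank of 16 is a hypothetical choice for illustration.

```python
# Illustrative sketch: LoRA replaces a full weight update dW (d_out x d_in)
# with two small matrices B (d_out x r) and A (r x d_in), so the effective
# weight is W + (alpha / r) * B @ A and only A and B are trained.

def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters in one LoRA adapter pair A (r x d_in), B (d_out x r)."""
    return rank * d_in + d_out * rank

def full_params(d_in: int, d_out: int) -> int:
    """Parameters in the frozen full weight matrix."""
    return d_in * d_out

d = 8192  # hidden size of LLaMA-2 70B
r = 16    # hypothetical LoRA rank

full = full_params(d, d)                    # 67,108,864
lora = lora_trainable_params(d, d, r)       # 262,144
print(f"full: {full:,}  lora: {lora:,}  trained fraction: {lora / full:.4%}")
```

For a single square projection at this size, the adapter trains under 0.4% of the layer's parameters; combined with 4-bit quantization of the frozen base weights (the "Q" in QLoRA), this is what fits the 70B model into a 40GB memory budget.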