The nextAI Solution to the NeurIPS 2023 LLM Efficiency Challenge

📅 2026-04-10
🤖 AI Summary
This work addresses the NeurIPS 2023 Large Language Model Efficiency Challenge by fine-tuning the 70B-parameter LLaMA-2 model within the challenge's constraints of a single A100 40GB GPU and a 24-hour time limit. By integrating QLoRA with FlashAttention-2, curating a multi-version open-source instruction dataset, and systematically tuning LoRA hyperparameters, the approach substantially reduces both memory footprint and compute requirements. Despite these stringent efficiency constraints, the method maintains high accuracy across multiple question-answering benchmarks, striking a strong balance between model performance and computational efficiency.

📝 Abstract
The rapid evolution of Large Language Models (LLMs) has significantly impacted the field of natural language processing, but their growing complexity raises concerns about resource usage and transparency. Addressing these challenges, we participated in the NeurIPS LLM Efficiency Challenge, aiming to fine-tune a foundation model within stringent constraints. Our focus was the LLaMA-2 70B model, optimized on a single A100 40GB GPU within a 24-hour limit. Our methodology hinged on a custom dataset, carefully assembled from diverse open-source resources and benchmark tests, in keeping with the challenge's open-source ethos. Our approach leveraged Quantized Low-Rank Adaptation (QLoRA) fine-tuning, integrated with advanced attention mechanisms such as FlashAttention-2. We experimented with various LoRA configurations, optimizing the balance between computational efficiency and model accuracy. Our fine-tuning strategy was underpinned by the creation and iterative testing of multiple dataset compositions, leading to the selection of a version that demonstrated robust performance across diverse tasks and benchmarks. The culmination of our efforts was an efficiently fine-tuned LLaMA-2 70B model that operated within the constraints of a single GPU, demonstrating not only a significant reduction in resource utilization but also high accuracy across a range of QA benchmarks. Our study serves as a testament to the feasibility of optimizing large-scale models in resource-constrained environments, emphasizing the potential of LLMs in real-world applications.
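As a rough illustration of the recipe the abstract describes (not the authors' released code), a QLoRA setup with 4-bit NF4 quantization and FlashAttention-2 in the Hugging Face ecosystem typically looks like the sketch below. The LoRA rank, alpha, dropout, and target modules shown are assumptions for illustration only; the paper states that several LoRA configurations were tried but the summary here does not give the final values.

```python
# Hypothetical QLoRA fine-tuning setup; hyperparameters are illustrative,
# not the values reported in the paper. Requires transformers, peft,
# bitsandbytes, flash-attn, and a GPU.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization keeps the frozen 70B base weights small enough
# to fit alongside trainable LoRA adapters on a single A100 40GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # FlashAttention-2 kernels
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters: only these low-rank matrices are updated during training.
lora_config = LoraConfig(
    r=16,                      # rank (assumed; the paper sweeps this)
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # tiny fraction of the 70B total
```

The key trade-off this configuration captures is that the quantized base model is frozen, so the 24-hour budget is spent only on the low-rank adapter weights.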
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Efficiency
Fine-tuning
Resource Constraints
LLM Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

QLoRA
FlashAttention-2
LLM Efficiency
Resource-Constrained Fine-Tuning
LLaMA-2
👥 Authors
Gyuwon Park, Department of Computer Science and Engineering, UNIST
DongIl Shin, CJ Corporation
SolGil Oh, CJ Corporation
SangGi Ryu, CJ Corporation
Byung-Hak Kim, Hyundai Card (previously CJ Group, AKASA, Udacity, Capio, Marvell; A&M PhD)
AI · Machine Learning · Information Theory