🤖 AI Summary
To address inter-flow unfairness, inflexible congestion control, and flow starvation in Wi-Fi networks, this paper proposes TCP-LLM—a novel framework that, for the first time, integrates large language models (e.g., Llama) directly into the TCP protocol layer. Leveraging instruction tuning and prompt engineering, TCP-LLM incorporates network state embeddings and a TCP action decoding module, enabling lightweight adaptation without hand-crafted DNNs. Crucially, it supports multiple tasks—including fairness regulation, congestion control, and starvation mitigation—without retraining the base model. Evaluated in NS-3, TCP-LLM improves Jain's fairness index by 37%, reduces training overhead by 90% compared to state-of-the-art AI-based TCP schemes, and demonstrates strong robustness under dynamic channel conditions. Its core innovation lies in transferring the LLM's generalization capability to low-level network protocol optimization, thereby substantially lowering modeling costs and enhancing adaptability to dynamic network environments.
📝 Abstract
Emerging transmission control protocol (TCP) designs rely on deep learning (DL) for prediction and optimization, but they require significant manual effort to design deep neural networks (DNNs) and struggle to generalize in dynamic environments. Inspired by the success of large language models (LLMs), this study proposes TCP-LLM, a novel framework leveraging LLMs for TCP applications. TCP-LLM utilizes pre-trained knowledge to reduce engineering effort, enhance generalization, and deliver superior performance across diverse TCP tasks. Applied to reducing flow unfairness, adapting congestion control, and preventing starvation, TCP-LLM demonstrates significant improvements over TCP with minimal fine-tuning.
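As a rough illustration of the pipeline the summary describes (network state embedding → frozen LLM → TCP action decoding), the sketch below mocks each stage in Python. All names, the prompt format, and the action set are illustrative assumptions, not the paper's actual API; a trivial stub stands in for the fine-tuned Llama model.

```python
# Hypothetical sketch of a TCP-LLM-style control loop.
# Everything here (NetState, encode_state, decode_action, the prompt
# wording) is an assumption for illustration, not the paper's design.
from dataclasses import dataclass

@dataclass
class NetState:
    rtt_ms: float
    throughput_mbps: float
    loss_rate: float
    cwnd: int  # congestion window, in segments

def encode_state(s: NetState) -> str:
    """Serialize the network state into a text prompt for a (frozen) LLM."""
    return (f"RTT={s.rtt_ms:.1f}ms throughput={s.throughput_mbps:.1f}Mbps "
            f"loss={s.loss_rate:.3f} cwnd={s.cwnd}. "
            f"Choose one action: increase | hold | decrease.")

def decode_action(llm_output: str, cwnd: int) -> int:
    """Map the LLM's textual action back to a congestion-window update."""
    token = llm_output.strip().lower()
    if token == "increase":
        return cwnd + 1           # additive increase
    if token == "decrease":
        return max(1, cwnd // 2)  # multiplicative decrease
    return cwnd                   # hold

def llm_stub(prompt: str) -> str:
    """Toy stand-in for the fine-tuned LLM: back off whenever loss is seen."""
    return "increase" if "loss=0.000" in prompt else "decrease"

state = NetState(rtt_ms=42.0, throughput_mbps=95.3, loss_rate=0.02, cwnd=64)
new_cwnd = decode_action(llm_stub(encode_state(state)), state.cwnd)
print(new_cwnd)  # loss > 0, so the stub halves the window: 32
```

In a real deployment the stub would be replaced by an instruction-tuned model, and the decoder would constrain generation to the valid action vocabulary so that malformed outputs cannot corrupt the congestion-control state.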