🤖 AI Summary
Deploying high-security, low-latency AI models for military applications on resource-constrained edge devices poses significant challenges in balancing performance, sovereignty, and real-time inference.
Method: We propose a military-task-oriented lightweight large language model (LLM) optimization framework built on the open-source 20B-parameter gpt-oss-20b model. The framework fine-tunes the base model on 1.6 million high-quality military-domain samples and integrates efficient inference optimizations, including quantization, kernel fusion, and memory-efficient attention.
Contribution/Results: The resulting 20B-parameter model, EdgeRunner 20B, matches or exceeds GPT-5 performance with 95%+ statistical significance on four military test sets—combat arms, combat medic, cyber operations, and general military knowledge (mil-bench-5k)—while showing no statistically significant regression on mainstream general benchmarks (e.g., MMLU Pro, GSM8k). The model supports fully offline, air-gapped deployment and runs on typical edge hardware (<16 GB RAM), preserving data sovereignty, low-latency inference, and robust generalization under operational constraints.
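The <16 GB RAM figure can be sanity-checked with back-of-envelope arithmetic. The sketch below is not from the paper: the bit-widths and the 16 GB budget are assumptions, and it counts weight memory only (KV cache and activations add overhead), but it illustrates why quantization is what makes a 20B-parameter model fit on edge hardware.

```python
# Back-of-envelope check (assumption, not from the paper): weight memory
# for a 20B-parameter model at common quantization bit-widths, against a
# hypothetical 16 GB edge-device RAM budget. Weight memory only; the KV
# cache and activations would add further overhead.

PARAMS = 20e9          # 20B parameters
RAM_BUDGET_GB = 16.0   # assumed edge-device RAM budget

for bits in (16, 8, 4):
    gb = PARAMS * bits / 8 / 1e9  # params * bits -> bytes -> decimal GB
    verdict = "fits" if gb < RAM_BUDGET_GB else "does not fit"
    print(f"{bits}-bit weights: {gb:.1f} GB -> {verdict} in {RAM_BUDGET_GB:.0f} GB")
```

At 16-bit precision the weights alone need 40 GB; only at 4-bit (10 GB) do they fall under a 16 GB budget, which is consistent with the summary's claim of quantized, memory-efficient edge inference.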
📝 Abstract
We present EdgeRunner 20B, a fine-tuned version of gpt-oss-20b optimized for military tasks. EdgeRunner 20B was trained on 1.6M high-quality records curated from military documentation and websites. We also present four new test sets: (a) combat arms, (b) combat medic, (c) cyber operations, and (d) mil-bench-5k (general military knowledge). On these military test sets, EdgeRunner 20B matches or exceeds GPT-5 task performance with 95%+ statistical significance, except for the high-reasoning setting on the combat medic test set and the low-reasoning setting on the mil-bench-5k test set. Versus gpt-oss-20b, there is no statistically significant regression on general-purpose benchmarks such as ARC-C, GPQA Diamond, GSM8k, IFEval, MMLU Pro, and TruthfulQA, except for GSM8k in the low-reasoning setting. We also present analyses of hyperparameter settings, cost, and throughput. These findings show that small, locally hosted models are ideal for data-sensitive operations such as those in the military domain, enabling deployment on air-gapped edge devices.
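The abstract reports "95%+ statistical significance" without naming the test. One common choice for comparing two models' per-question accuracy on a shared test set is a paired bootstrap; the sketch below illustrates that idea on synthetic data (the accuracy rates, test-set size, and iteration count are all placeholder assumptions, not the paper's numbers).

```python
# Hedged sketch of a paired bootstrap significance comparison between two
# models scored on the same test set. Synthetic data only: the 0.82/0.75
# accuracy rates and n=500 questions are illustrative assumptions.
import random

random.seed(0)

n = 500  # assumed test-set size
# Per-question correctness (1 = correct) for two hypothetical models.
model_a = [1 if random.random() < 0.82 else 0 for _ in range(n)]
model_b = [1 if random.random() < 0.75 else 0 for _ in range(n)]

def paired_bootstrap(a, b, iters=2000):
    """Fraction of paired resamples in which model a's accuracy >= model b's."""
    wins = 0
    for _ in range(iters):
        idx = [random.randrange(n) for _ in range(n)]  # resample questions
        if sum(a[i] for i in idx) >= sum(b[i] for i in idx):
            wins += 1
    return wins / iters

conf = paired_bootstrap(model_a, model_b)
print(f"P(model A >= model B under resampling): {conf:.3f}")
```

A win fraction at or above 0.95 would correspond to the "95%+ statistical significance" threshold the abstract describes; the paper's actual test procedure may differ.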