🤖 AI Summary
Goal: Advance LLM democratization and enable efficient edge deployment in cloud-edge collaborative scenarios. Method: We propose Fox-1, an open-source small language model (1.6B parameters), featuring a novel three-stage data-curriculum pretraining strategy; a deeper architecture with an expanded vocabulary and Grouped Query Attention (GQA) for joint performance-efficiency optimization at scale; and training on 3 trillion tokens (pretraining) plus 5 billion tokens (instruction tuning), supporting variable-length sequences from 2K to 8K. Contribution/Results: Released under the Apache 2.0 license, Fox-1 achieves better or on-par performance relative to comparable models, including StableLM-2-1.6B and Gemma-2B, across multiple benchmarks. It delivers competitive inference throughput and latency, empirically demonstrating that lightweight architectures can balance openness, efficiency, and practical utility.
📝 Abstract
We present Fox-1, a series of small language models (SLMs) consisting of Fox-1-1.6B and Fox-1-1.6B-Instruct-v0.1. These models are pre-trained on 3 trillion tokens of web-scraped document data and fine-tuned with 5 billion tokens of instruction-following and multi-turn conversation data. To improve pre-training efficiency, Fox-1-1.6B introduces a novel 3-stage data curriculum over all of the training data, with sequence lengths ranging from 2K to 8K. In its architecture design, Fox-1 features a deeper layer structure, an expanded vocabulary, and Grouped Query Attention (GQA), offering a performant and efficient architecture compared to other SLMs. Fox-1 achieves better or on-par performance on various benchmarks compared to StableLM-2-1.6B, Gemma-2B, Qwen1.5-1.8B, and OpenELM-1.1B, with competitive inference speed and throughput. The model weights have been released under the Apache 2.0 license; with this release, we aim to promote the democratization of LLMs and make them fully accessible to the whole open-source community.
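The Grouped Query Attention mentioned above reduces the key/value cache by letting groups of query heads share a smaller set of key/value heads, which is one reason GQA models achieve high inference throughput. A minimal NumPy sketch of the mechanism (illustrative only; the head counts and dimensions here are hypothetical and do not reflect Fox-1's actual configuration):

```python
import numpy as np

def gqa_attention(q, k, v, n_kv_heads):
    """Grouped Query Attention: groups of query heads share one KV head.

    q: (n_q_heads, seq, d)    k, v: (n_kv_heads, seq, d)
    Returns: (n_q_heads, seq, d)
    """
    n_q_heads, seq, d = q.shape
    group = n_q_heads // n_kv_heads  # query heads per shared KV head
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group  # map each query head to its shared KV head
        scores = q[h] @ k[kv].T / np.sqrt(d)
        # numerically stable softmax over the key dimension
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[h] = weights @ v[kv]
    return out

# Example: 8 query heads sharing 2 KV heads -> 4x smaller KV cache
rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4, 16))
k = rng.standard_normal((2, 4, 16))
v = rng.standard_normal((2, 4, 16))
print(gqa_attention(q, k, v, n_kv_heads=2).shape)  # (8, 4, 16)
```

With `n_kv_heads` equal to the number of query heads this reduces to standard multi-head attention; shrinking `n_kv_heads` trades a small quality cost for a proportionally smaller KV cache at inference time.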