🤖 AI Summary
This work proposes a specialized model training paradigm tailored to real-world software development scenarios, addressing the limitations of current agents in planning and coding for complex, long-horizon software engineering tasks. The approach employs a two-stage training strategy: first, continual pretraining enhances domain knowledge and foundational coding capabilities; second, large-scale reinforcement learning optimizes end-to-end multi-step reasoning, precise execution, and task coherence. Training is conducted within the Cursor harness, the same environment used at deployment, and evaluated on CursorBench, a new benchmark constructed from real software engineering problems in large codebases. Experimental results show that the model achieves 61.3% accuracy on CursorBench, substantially outperforming previous Composer models, and attains state-of-the-art-comparable performance with scores of 61.7 on Terminal-Bench and 73.7 on SWE-bench Multilingual.
📝 Abstract
Composer 2 is a specialized model designed for agentic software engineering. The model demonstrates strong long-term planning and coding intelligence while remaining efficient enough for interactive use. The model is trained in two phases: first, continued pretraining to improve the model's knowledge and latent coding ability, followed by large-scale reinforcement learning to improve end-to-end coding performance through stronger reasoning, accurate multi-step execution, and coherence on long-horizon, realistic coding problems. We develop infrastructure to support training in the same Cursor harness used by the deployed model, with equivalent tools and structure, and use environments that closely match real problems. To measure the model's ability on increasingly difficult tasks, we introduce a benchmark derived from real software engineering problems in large codebases, including our own. Composer 2 is a frontier-level coding model and demonstrates a process for training strong domain-specialized models. On our CursorBench evaluations the model achieves a major improvement in accuracy compared to previous Composer models (61.3%). On public benchmarks the model scores 61.7 on Terminal-Bench and 73.7 on SWE-bench Multilingual in our harness, comparable to state-of-the-art systems.