🤖 AI Summary
This work investigates how behavior cloning scales in complex 3D video games while meeting real-time inference constraints. Leveraging over 8,300 hours of human gameplay data, the authors train neural networks with up to 1.2 billion parameters that achieve performance competitive with human players at interactive speeds on a consumer GPU. In a controlled toy setting, they show that increasing training data and network depth leads the model to learn a more causal policy, and they confirm that these causal improvements persist at scale as model size and training steps grow. To support further research in open-world agent development, the full gameplay dataset, training and inference code, and pretrained checkpoints are released under an open license.
📝 Abstract
Behavior cloning has seen a resurgence as scaling up models and data has demonstrated strong performance. In this work, we introduce an open recipe for training a video-game-playing foundation model designed for real-time inference on a consumer GPU. We release all data (8,300+ hours of high-quality human gameplay), training and inference code, and pretrained checkpoints under an open license. Empirically, we show that our best model achieves performance competitive with human players across a variety of 3D games. We use this recipe to investigate the scaling laws of behavior cloning, with a focus on causal reasoning. In a controlled toy setting, we first demonstrate that increasing training data and network depth leads the model to learn a more causal policy. We then validate these findings at scale, analyzing models with up to 1.2 billion parameters, and observe that the causal improvements seen in the toy domain hold as model size and training steps increase.
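At its core, behavior cloning is supervised learning on logged (observation, action) pairs from human demonstrations. The minimal sketch below illustrates that idea with a hypothetical linear softmax policy fit to synthetic demonstrations by gradient descent; it is a toy stand-in, not the paper's billion-parameter network or training pipeline:

```python
import numpy as np

# Behavior cloning sketch: fit a softmax policy pi(a|s) to recorded
# (state, action) pairs by minimizing cross-entropy — i.e. plain
# supervised learning on demonstrations. All names here are illustrative.

rng = np.random.default_rng(0)
n_states, n_actions, n_demos = 8, 4, 512

# Synthetic "human gameplay": the expert picks the argmax of a fixed linear map.
W_expert = rng.normal(size=(n_states, n_actions))
states = rng.normal(size=(n_demos, n_states))
actions = (states @ W_expert).argmax(axis=1)

def loss_and_grad(W, s, a):
    logits = s @ W
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    nll = -np.log(probs[np.arange(len(a)), a]).mean()  # cross-entropy loss
    probs[np.arange(len(a)), a] -= 1.0                 # d(nll)/d(logits)
    return nll, s.T @ probs / len(a)

W = np.zeros((n_states, n_actions))
losses = []
for _ in range(200):                                   # plain gradient descent
    nll, grad = loss_and_grad(W, states, actions)
    losses.append(nll)
    W -= 0.5 * grad

# Fraction of demonstration actions the cloned policy reproduces.
accuracy = ((states @ W).argmax(axis=1) == actions).mean()
```

In this toy case the expert's decision boundaries are linear, so the cloned policy can match them closely; the paper's contribution is showing how the analogous supervised objective behaves when the policy is a deep network and the demonstrations are real gameplay.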