AI Summary
Existing world models lack both strong controllability and flexible prompting capabilities for structured scene understanding. Method: We propose a three-stage iterative learning framework of probabilistic prediction, structure extraction, and integration. First, zero-shot causal inference disentangles implicit intermediate representations (e.g., optical flow, depth, semantic segmentation) from raw video data; these are then encoded as new learnable tokens integrated into a unified, LLM-inspired prompting architecture. Technically, the framework combines probabilistic graphical models, random-access autoregressive modeling, causal inference, and self-supervised learning. Contribution/Results: Trained on 1.4 trillion tokens of internet video, our model achieves state-of-the-art performance across multiple vision tasks, including optical flow estimation, monocular depth prediction, and object segmentation, while enabling cross-task prompt-based control and continual performance improvement. To our knowledge, this is the first work to unify structured world modeling with a general-purpose, instruction-tunable prompting mechanism.
Abstract
We present Probabilistic Structure Integration (PSI), a system for learning richly controllable and flexibly promptable world models from data. PSI consists of a three-step cycle. The first step, Probabilistic prediction, involves building a probabilistic graphical model Psi of the data, in the form of a random-access autoregressive sequence model. Psi supports a complete set of learned conditional distributions describing the dependence of any variables in the data on any other set of variables. In step 2, Structure extraction, we show how to extract underlying low-dimensional properties in the data, corresponding to a diverse set of meaningful "intermediate structures", in a zero-shot fashion via causal inference on Psi. Step 3, Integration, completes the cycle by converting these structures into new token types that are then continually mixed back into the training diet as conditioning signals and prediction targets. Each such cycle augments the capabilities of Psi, both allowing it to model the underlying data better and creating new control handles -- akin to an LLM-like universal prompting language. We train an instance of Psi on 1.4 trillion tokens of internet video data; we use it to perform a variety of useful video prediction and understanding inferences; we extract state-of-the-art optical flow, self-supervised depth, and object segmentation; and we use these structures to support a full cycle of predictive improvements.
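To make the "random-access" idea in step 1 concrete: a minimal sketch, assuming an illustrative toy setup (the function names and the random-permutation scheme here are not the paper's API). Visiting the variables of a sequence in a random order, and predicting each one conditioned on the subset already revealed, means training touches conditionals of the form p(any variable | any other subset), rather than only the fixed left-to-right factorization.

```python
import random

def random_access_factorization(variables):
    """One random visit order over the variables.

    Returns (target, conditioning_set) pairs: each variable is predicted
    once, conditioned on whichever variables were revealed before it.
    Over many random orders, this covers arbitrary conditioning subsets.
    """
    order = list(variables)
    random.shuffle(order)
    revealed = []
    pairs = []
    for v in order:
        pairs.append((v, tuple(revealed)))  # predict v given revealed vars
        revealed.append(v)
    return pairs

pairs = random_access_factorization(["x1", "x2", "x3"])
# Every variable appears exactly once as a target; the first target has an
# empty context and the last is conditioned on the other two.
```

A full model would replace the bookkeeping above with a learned sequence model whose loss is summed over these (target, context) pairs.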
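Step 2's zero-shot extraction via causal inference can be illustrated with a hedged toy example (everything here is an assumption for illustration: a stand-in 1-D "predictor" that just shifts its input, not the actual Psi model). Perturbing one input location, rerunning the predictor, and observing where the output changes recovers a displacement, i.e. an optical-flow-like structure, without any flow supervision.

```python
def toy_model(scene, shift=2):
    """Stand-in predictor: the next 'frame' is the scene shifted right."""
    n = len(scene)
    return [scene[(i - shift) % n] for i in range(n)]

def extract_flow_at(scene, position):
    """Counterfactual probe: where does a change at `position` reappear?"""
    baseline = toy_model(scene)
    perturbed = list(scene)
    perturbed[position] += 100.0  # inject a counterfactual change
    probed = toy_model(perturbed)
    moved_to = next(
        i for i, (a, b) in enumerate(zip(baseline, probed)) if a != b
    )
    return moved_to - position  # displacement = recovered "flow"

flow = extract_flow_at([0.0] * 8, position=3)  # -> 2 (the model's shift)
```

The same probe-and-compare pattern, applied to a real video predictor, is one way such intermediate structures can be read out of a model that was never trained to produce them explicitly.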