Discovering and using Spelke segments

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of conventional image segmentation: its heavy reliance on semantic categories, which hinders applicability to physical interaction tasks. We propose a category-agnostic segmentation paradigm grounded in Spelke objects, perceptual units defined by causal motion constraints and the capacity for coordinated movement. To this end, we introduce SpelkeBench, the first benchmark dataset explicitly designed for Spelke-object segmentation. We further present SpelkeNet, a method that leverages a visual world model to predict distributions over future motion, integrates motion-affordance and expected-displacement maps, and performs statistical counterfactual probing via virtual pokes to identify segments whose parts exhibit statistically correlated motion. Experiments demonstrate that SpelkeNet outperforms supervised baselines, including SAM, on SpelkeBench, and consistently improves performance across diverse models on the 3DEditBench physical-manipulation benchmark. Our approach establishes a new perception–action coupling paradigm for embodied intelligence.

📝 Abstract
Segments in computer vision are often defined by semantic considerations and are highly dependent on category-specific conventions. In contrast, developmental psychology suggests that humans perceive the world in terms of Spelke objects--groupings of physical things that reliably move together when acted on by physical forces. Spelke objects thus operate on category-agnostic causal motion relationships which potentially better support tasks like manipulation and planning. In this paper, we first benchmark the Spelke object concept, introducing the SpelkeBench dataset that contains a wide variety of well-defined Spelke segments in natural images. Next, to extract Spelke segments from images algorithmically, we build SpelkeNet, a class of visual world models trained to predict distributions over future motions. SpelkeNet supports estimation of two key concepts for Spelke object discovery: (1) the motion affordance map, identifying regions likely to move under a poke, and (2) the expected-displacement map, capturing how the rest of the scene will move. These concepts are used for "statistical counterfactual probing", where diverse "virtual pokes" are applied to regions of high motion affordance, and the resultant expected-displacement maps are used to define Spelke segments as statistical aggregates of correlated motion statistics. We find that SpelkeNet outperforms supervised baselines like SegmentAnything (SAM) on SpelkeBench. Finally, we show that the Spelke concept is practically useful for downstream applications, yielding superior performance on the 3DEditBench benchmark for physical object manipulation when used in a variety of off-the-shelf object manipulation models.
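As a rough illustration, the "statistical counterfactual probing" loop described in the abstract can be sketched in a few lines of NumPy. Everything below is a toy stand-in, not the authors' implementation: `expected_displacement` is a stub replacing SpelkeNet's world-model rollout (here, pixels in the same image quadrant as the poke simply move together), and all names and thresholds are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 32, 32  # toy image resolution

def expected_displacement(image, poke):
    """Stub for the world model's expected-displacement map.

    In this toy scene, pixels in the same quadrant as the poke move
    together by one pixel; everything else stays (nearly) still.
    """
    ys, xs = np.mgrid[0:H, 0:W]
    same_object = (ys // 16 == poke[0] // 16) & (xs // 16 == poke[1] // 16)
    flow = np.zeros((H, W, 2))
    flow[same_object] = [1.0, 0.0]
    return flow + rng.normal(0.0, 0.05, flow.shape)  # observation noise

def spelke_segment(image, affordance, n_pokes=32,
                   move_thresh=0.5, agree_thresh=0.9):
    """Poke where motion is afforded, then group pixels whose
    moved/static pattern is correlated with the seed pixel's."""
    # Sample virtual pokes proportional to the motion-affordance map.
    p = affordance.ravel() / affordance.sum()
    pokes = rng.choice(H * W, size=n_pokes, p=p)
    moved = np.zeros((n_pokes, H, W), dtype=bool)
    for k, flat in enumerate(pokes):
        poke = np.unravel_index(flat, (H, W))
        flow = expected_displacement(image, poke)
        moved[k] = np.linalg.norm(flow, axis=-1) > move_thresh
    # A pixel joins the seed's segment if its motion response agrees
    # with the seed pixel's across (almost) all counterfactual pokes.
    sy, sx = np.unravel_index(pokes[0], (H, W))
    agree = (moved == moved[:, sy, sx][:, None, None]).mean(axis=0)
    return agree > agree_thresh
```

With a uniform affordance map, the recovered segment is the seed pixel's quadrant, i.e. the one toy "object" that reliably co-moves with it; the real method replaces the stub with learned world-model predictions over natural images.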
Problem

Research questions and friction points this paper is trying to address.

Benchmarking Spelke object concept with SpelkeBench dataset
Developing SpelkeNet to predict motion for Spelke segmentation
Applying Spelke segments to improve physical object manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

SpelkeNet predicts future motion distributions
Statistical counterfactual probing defines Spelke segments
Motion affordance maps identify movable regions
Rahul Venkatesh
Stanford University
Klemen Kotar
PhD Candidate, Stanford University
Artificial Intelligence
Lilian Naing Chen
Stanford University
Seungwoo Kim
Stanford University
Luca Thomas Wheeler
Stanford University
Jared Watrous
Stanford University
Ashley Xu
Stanford University
Gia Ancone
Stanford University
Wanhee Lee
Stanford University
Honglin Chen
OpenAI
Daniel Bear
Stanford University
Sensory Systems, Perception, Evolution, Artificial Intelligence
Stefan Stojanov
Postdoc at Stanford Vision Lab and Neuro AI Lab
Computer Vision, Machine Learning
Daniel Yamins
Associate Professor of Computer Science and of Psychology, Stanford University
Computational Neuroscience, AI, Computational Cognitive Science, Computer Vision, Self-supervised