🤖 AI Summary
This work addresses the cross-domain adaptation challenge in generating robot control code from video demonstrations, where discrepancies in perceptual and physical environments often render task programs ineffective. To tackle this issue, the authors propose NeSyCR, a neuro-symbolic counterfactual reasoning framework that integrates vision-language models, symbolic abstraction, and counterfactual inference. NeSyCR extracts symbolic trajectories from demonstration videos and leverages observations of the target environment to infer counterfactual states, thereby producing verifiable and interpretable program repairs. Evaluated on both simulated and real-world manipulation tasks, NeSyCR improves task success rates by 31.14% over the strongest baseline, Statler, substantially advancing beyond current limitations in causal understanding and program repair for robotic execution.
📝 Abstract
Recent advances in Vision-Language Models (VLMs) have enabled video-instructed robotic programming, allowing agents to interpret video demonstrations and generate executable control code. We formulate video-instructed robotic programming as a cross-domain adaptation problem, where perceptual and physical differences between demonstration and deployment induce procedural mismatches. However, current VLMs lack the procedural understanding needed to reformulate causal dependencies and achieve task-compatible behavior under such domain shifts. We introduce NeSyCR, a neuro-symbolic counterfactual reasoning framework that enables verifiable adaptation of task procedures, supporting reliable synthesis of code policies. NeSyCR abstracts video demonstrations into symbolic trajectories that capture the underlying task procedure. Given deployment observations, it derives counterfactual states that reveal cross-domain incompatibilities. By exploring the symbolic state space with verifiable checks, NeSyCR proposes procedural revisions that restore compatibility with the demonstrated procedure. NeSyCR achieves a 31.14% improvement in task success over the strongest baseline, Statler, showing robust cross-domain adaptation across both simulated and real-world manipulation tasks.
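To make the pipeline concrete, here is a minimal toy sketch (not the authors' implementation; all action names, predicates, and the search strategy are illustrative assumptions) of the idea the abstract describes: a demonstration is abstracted into a symbolic trajectory of actions with preconditions and effects, a deployment observation reveals an incompatibility (a violated precondition), and a verifiable search over short repair sequences restores executability.

```python
# Toy illustration of symbolic-trajectory repair, assuming a simple
# STRIPS-like state model (sets of ground predicates). Everything here
# (predicates, actions, repair vocabulary) is hypothetical.
from itertools import permutations

# Hypothetical symbolic trajectory extracted from a demonstration video:
# each step is (action, preconditions, effects).
TRAJECTORY = [
    ("pick(cup)", {"clear(cup)"}, {"holding(cup)"}),
    ("place(cup, shelf)", {"holding(cup)", "clear(shelf)"}, {"on(cup, shelf)"}),
]

# Hypothetical repair actions available in the deployment environment.
REPAIRS = {
    "remove(lid)": ({"on(lid, cup)"}, {"clear(cup)"}),
    "clear(shelf)": (set(), {"clear(shelf)"}),
}

def execute(state, steps):
    """Symbolically run steps; return final state, or None on a failed precondition."""
    state = set(state)
    for _, pre, eff in steps:
        if not pre <= state:
            return None  # incompatibility detected: precondition violated
        state |= eff
    return state

def counterfactual_repair(observed, max_len=2):
    """Search short repair prefixes (counterfactual interventions) that
    make the demonstrated trajectory executable from the observed state."""
    names = list(REPAIRS)
    for k in range(max_len + 1):  # prefer the shortest repair
        for combo in permutations(names, k):
            prefix = [(n, *REPAIRS[n]) for n in combo]
            if execute(observed, prefix + TRAJECTORY) is not None:
                return list(combo)  # verified: trajectory now executes
    return None  # no repair found within the budget
```

For example, if deployment observation shows a lid on the cup (`{"on(lid, cup)", "clear(shelf)"}`), the search returns `["remove(lid)"]` as a verified revision, while an already-compatible state needs no repair (empty list). The full framework replaces this brute-force search with counterfactual inference over states derived from VLM observations, but the verify-then-revise loop is the same shape.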