🤖 AI Summary
Enabling assistive robots to understand object functionality in unstructured environments requires robust alignment between linguistic action queries and physically feasible object interactions. Method: We propose CRAFT-E, a modular neuro-symbolic framework that integrates a verb-property-object knowledge graph and explicitly incorporates grasp feasibility into functional reasoning. The framework jointly leverages vision-language alignment, energy-based grasp inference, and functional compatibility modeling, while symbolic reasoning generates interpretable, stepwise inference paths for fine-grained diagnostics and customizable decision-making. Contribution/Results: Our approach achieves competitive performance across static-scene understanding, ImageNet-based functional retrieval, and real-world robotic manipulation tasks covering 20 verbs and 39 objects, and it demonstrates robustness to perceptual noise, component-level interpretability, and transparent, reliable operation.
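The summary does not specify how the three evidence sources are combined. As a minimal sketch only, assuming a log-linear composition and a Boltzmann-style mapping from grasp energy to feasibility (the function names, weights, and numbers below are illustrative assumptions, not the paper's implementation), object selection for a verb query could look like this:

```python
# Hypothetical composition of CRAFT-E-style component scores; all names,
# weights, and the energy-to-feasibility mapping are assumptions for
# exposition, not the authors' implementation.
import math
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    vl_similarity: float     # vision-language alignment score in [0, 1]
    grasp_energy: float      # lower energy = more feasible grasp
    kg_compatibility: float  # verb-property-object path score in [0, 1]

def grasp_feasibility(energy: float, temperature: float = 1.0) -> float:
    """Map a grasp energy to a (0, 1] feasibility score (assumed Boltzmann-style)."""
    return math.exp(-energy / temperature)

def grounding_score(c: Candidate, w_vl=1.0, w_grasp=1.0, w_kg=1.0) -> float:
    """Assumed log-linear combination of the three evidence sources."""
    return (w_vl * math.log(c.vl_similarity + 1e-9)
            + w_grasp * math.log(grasp_feasibility(c.grasp_energy) + 1e-9)
            + w_kg * math.log(c.kg_compatibility + 1e-9))

# Query "pour": prefer an object that affords the verb AND can be grasped.
candidates = [
    Candidate("mug",   vl_similarity=0.82, grasp_energy=0.4, kg_compatibility=0.95),
    Candidate("plate", vl_similarity=0.71, grasp_energy=1.8, kg_compatibility=0.30),
]
best = max(candidates, key=grounding_score)
print(best.name)  # -> "mug"
```

In this toy example the mug wins because it scores well on all three components, while the plate's high grasp energy and low verb compatibility pull its combined score down, which is the kind of factored evidence the framework's stepwise inference paths are meant to expose.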
📝 Abstract
Assistive robots operating in unstructured environments must understand not only what objects are, but what they can be used for. This requires grounding language-based action queries to objects that both afford the requested function and can be physically retrieved. Existing approaches often rely on black-box models or fixed affordance labels, limiting transparency, controllability, and reliability for human-facing applications. We introduce CRAFT-E, a modular neuro-symbolic framework that composes a structured verb-property-object knowledge graph with vision-language alignment and energy-based grasp reasoning. The system generates interpretable grounding paths that expose the factors influencing object selection and incorporates grasp feasibility as an integral part of affordance inference. We further construct a benchmark dataset with unified annotations for verb-object compatibility, segmentation, and grasp candidates, and deploy the full pipeline on a physical robot. CRAFT-E achieves competitive performance in static scenes, ImageNet-based functional retrieval, and real-world trials involving 20 verbs and 39 objects. The framework remains robust under perceptual noise and provides transparent, component-level diagnostics. By coupling symbolic reasoning with embodied perception, CRAFT-E offers an interpretable and customizable alternative to end-to-end models for affordance-grounded object selection, supporting trustworthy decision-making in assistive robotic systems.
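The abstract highlights interpretable grounding paths that expose the factors behind each object choice. As a rough illustration only (the step names and trace format below are assumptions, not CRAFT-E's actual data structures), such a path could be represented as an ordered list of per-component evidence:

```python
# Illustrative sketch of an interpretable grounding path as an ordered list
# of named reasoning steps; the structure and field names are assumptions
# made for exposition, not the paper's trace format.
from dataclasses import dataclass, field

@dataclass
class Step:
    component: str   # which module produced this evidence
    evidence: str    # human-readable justification
    score: float

@dataclass
class GroundingPath:
    query_verb: str
    object_name: str
    steps: list[Step] = field(default_factory=list)

    def explain(self) -> str:
        lines = [f"Query '{self.query_verb}' -> candidate '{self.object_name}':"]
        for s in self.steps:
            lines.append(f"  [{s.component}] {s.evidence} (score={s.score:.2f})")
        return "\n".join(lines)

path = GroundingPath("cut", "kitchen knife", steps=[
    Step("knowledge graph", "cut -> requires property 'sharp blade' -> knife", 0.92),
    Step("vision-language", "detected region matches 'knife'", 0.88),
    Step("grasp reasoning", "handle yields a low-energy grasp candidate", 0.81),
])
print(path.explain())
```

Keeping each module's evidence as a separate step is what would make the component-level diagnostics described above possible: a failure can be attributed to the knowledge graph, the perception module, or the grasp reasoner individually rather than to an opaque end-to-end score.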