ASMR: Augmenting Life Scenario using Large Generative Models for Robotic Action Reflection

📅 2025-06-16
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address inaccurate multimodal user intent understanding by domestic service robots under few-shot conditions, this paper proposes a semantics-controllable dialogue–scene image co-generation framework for data augmentation. To overcome the bottlenecks of scarce real-world multimodal data and high annotation costs, our method integrates large language models (LLMs) for contextualized dialogue modeling and reasoning, and leverages Stable Diffusion to synthesize high-fidelity, semantically aligned environment images. This establishes an end-to-end synthetic data generation and fine-tuning pipeline. To the best of our knowledge, this is the first framework enabling joint, controllable generation of linguistic intent and visual context. Experimental results demonstrate substantial improvements in action selection accuracy on real-world benchmark datasets, achieving state-of-the-art (SOTA) performance. The results validate that synthetically generated multimodal data effectively enhances downstream models’ generalization capability across modalities.

📝 Abstract
When designing robots to assist in everyday human activities, it is crucial to enrich user requests with visual cues from the surroundings for improved intent understanding. This process is framed as a multimodal classification task. However, gathering a large-scale dataset encompassing both visual and linguistic elements for model training is challenging and time-consuming. To address this issue, our paper introduces a novel framework for data augmentation in robotic assistance scenarios, encompassing both dialogues and related environmental imagery. The approach leverages a large language model to simulate potential conversations and environmental contexts, followed by a Stable Diffusion model to create images depicting these environments. The generated data is then used to fine-tune recent multimodal models, enabling them to more accurately determine appropriate actions when combined with the limited target data. Our experimental results, based on a dataset collected from real-world scenarios, demonstrate that our methodology significantly enhances the robot's action selection capabilities, achieving state-of-the-art performance.
Problem

Research questions and friction points this paper is trying to address.

Enhancing robotic intent understanding with visual and linguistic cues
Addressing data scarcity in multimodal robotic training scenarios
Improving robot action selection accuracy through generative data augmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging a large language model to simulate user-robot conversations
Using a Stable Diffusion model to generate matching environment imagery
Fine-tuning multimodal models on the generated data
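The co-generation idea above can be sketched in a few lines: for each seed scenario, build one prompt that an LLM would use to simulate a dialogue and one semantically aligned prompt that a text-to-image model (Stable Diffusion in the paper) would use to render the scene, then pair them as a training record. This is an illustrative sketch only; the scenario fields, prompt wording, and function names are assumptions, not the paper's actual implementation.

```python
import json

# Hypothetical seed scenarios; the paper's real seeds come from its
# few-shot target data, not from a hand-written list like this.
SEED_SCENARIOS = [
    {"room": "kitchen", "objects": ["mug", "kettle"], "request": "make me some tea"},
    {"room": "living room", "objects": ["remote", "sofa"], "request": "turn on the TV"},
]

def make_dialogue_prompt(scenario):
    """Prompt that would be sent to an LLM to simulate a user-robot dialogue."""
    return (
        f"Simulate a short dialogue in a {scenario['room']} where a user asks a "
        f"service robot: '{scenario['request']}'. Visible objects: "
        f"{', '.join(scenario['objects'])}."
    )

def make_scene_prompt(scenario):
    """Matching text-to-image prompt, kept semantically aligned with the dialogue."""
    return (
        f"A photo of a {scenario['room']} containing "
        f"{' and '.join(scenario['objects'])}, indoor lighting, realistic"
    )

def build_augmented_records(scenarios):
    """Pair each simulated dialogue with its scene prompt and an action label."""
    records = []
    for s in scenarios:
        records.append({
            "dialogue_prompt": make_dialogue_prompt(s),
            "scene_prompt": make_scene_prompt(s),
            "action_label": s["request"],  # placeholder label for fine-tuning
        })
    return records

records = build_augmented_records(SEED_SCENARIOS)
print(json.dumps(records[0], indent=2))
```

In the actual pipeline the `dialogue_prompt` would go to an LLM and the `scene_prompt` to Stable Diffusion, and the resulting (dialogue, image, action) triples would augment the few-shot target data for fine-tuning.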