🤖 AI Summary
To address inaccurate multimodal user intent understanding by domestic service robots under few-shot conditions, this paper proposes a semantics-controllable dialogue–scene image co-generation framework for data augmentation. To overcome the bottlenecks of scarce real-world multimodal data and high annotation costs, our method integrates large language models (LLMs) for contextualized dialogue modeling and reasoning, and leverages Stable Diffusion to synthesize high-fidelity, semantically aligned environment images. This establishes an end-to-end synthetic data generation and fine-tuning pipeline. To the best of our knowledge, this is the first framework enabling joint, controllable generation of linguistic intent and visual context. Experimental results demonstrate substantial improvements in action selection accuracy on real-world benchmark datasets, achieving state-of-the-art (SOTA) performance. The results validate that synthetically generated multimodal data effectively enhances downstream models’ generalization capability across modalities.
📝 Abstract
When designing robots to assist in everyday human activities, it is crucial to enhance user requests with visual cues from their surroundings for improved intent understanding. This process is defined as a multimodal classification task. However, gathering a large-scale dataset encompassing both visual and linguistic elements for model training is challenging and time-consuming. To address this issue, our paper introduces a novel framework focusing on data augmentation in robotic assistance scenarios, encompassing both dialogues and related environmental imagery. This approach involves leveraging a sophisticated large language model to simulate potential conversations and environmental contexts, followed by the use of a stable diffusion model to create images depicting these environments. The additionally generated data serves to refine the latest multimodal models, enabling them to more accurately determine appropriate actions in response to user interactions with the limited target data. Our experimental results, based on a dataset collected from real-world scenarios, demonstrate that our methodology significantly enhances the robot's action selection capabilities, achieving the state-of-the-art performance.