🤖 AI Summary
Existing Transformer-based methods for human-human interaction motion generation suffer from low efficiency in long-sequence modeling, excessive parameter counts, and poor real-time responsiveness. To address these limitations, this paper proposes the first adaptive spatiotemporal Mamba framework specifically designed for interactive motion generation. Our method introduces a dual-branch state-space model (SSM) architecture and two novel Mamba modules—self-individual and cross-individual—to jointly and efficiently model individual motion dynamics and inter-personal dependencies. We further incorporate parallel spatiotemporal SSMs, adaptive gating, and cross-adaptive feature fusion to enhance expressiveness and efficiency. Evaluated on two standard interaction benchmarks, our approach achieves state-of-the-art performance with only 66M parameters (36% of InterGen’s) and an inference latency of 0.57 seconds per sample—2.2× faster than InterGen—demonstrating a significant improvement in both generation quality and computational efficiency.
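To make the dual-branch idea concrete, here is a minimal, illustrative sketch (not the paper's implementation) of two parallel state-space scans, one over time and one over joints, blended by a sigmoid gate. All function names, the scalar SSM parameters `a`, `b`, `c`, and the `gate_logit` parameter are hypothetical stand-ins; in the actual model these would be learned, high-dimensional Mamba blocks.

```python
import math

def ssm_scan(xs, a=0.9, b=0.5, c=1.0):
    """Minimal diagonal state-space recurrence: h_t = a*h_{t-1} + b*x_t, y_t = c*h_t."""
    h, ys = 0.0, []
    for x in xs:
        h = a * h + b * x
        ys.append(c * h)
    return ys

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def adaptive_fuse(motion, gate_logit=0.0):
    """Run a temporal scan (over frames) and a spatial scan (over joints)
    in parallel, then blend the two branches with a sigmoid gate.
    `motion` is a list of frames, each frame a list of joint values."""
    T, J = len(motion), len(motion[0])
    # Temporal branch: scan each joint's trajectory across time.
    temporal = [ssm_scan([motion[t][j] for t in range(T)]) for j in range(J)]
    # Spatial branch: scan each frame across joints.
    spatial = [ssm_scan(motion[t]) for t in range(T)]
    g = sigmoid(gate_logit)  # adaptive gate (a trained parameter in a real model)
    return [[g * temporal[j][t] + (1 - g) * spatial[t][j] for j in range(J)]
            for t in range(T)]
```

Because each branch is a linear recurrence, both scans run in time linear in sequence length, which is the efficiency argument relative to a Transformer's quadratic attention.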
📝 Abstract
Human-human interaction generation has garnered significant attention in motion synthesis due to its vital role in understanding humans as social beings. However, existing methods typically rely on transformer-based architectures, which often face challenges related to scalability and efficiency. To address these issues, we propose a novel, efficient human-human interaction generation method based on the Mamba framework, designed to capture long-sequence dependencies effectively while providing real-time feedback. Specifically, we introduce an adaptive spatio-temporal Mamba framework that uses two parallel SSM branches with an adaptive mechanism to integrate the spatial and temporal features of motion sequences. To further enhance the model's ability to capture dependencies within individual motion sequences as well as interactions between different individuals' sequences, we develop two key modules: the self-adaptive spatio-temporal Mamba module and the cross-adaptive spatio-temporal Mamba module, enabling efficient feature learning. Extensive experiments demonstrate that our method achieves state-of-the-art results on two interaction datasets with remarkable quality and efficiency. Compared to the baseline method InterGen, our approach not only improves accuracy but also requires just 66M parameters (only 36% of InterGen's), while achieving an average inference time of 0.57 seconds, 46% of InterGen's execution time.
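The self/cross split above can be sketched as follows. This is an illustrative toy, not the paper's architecture: the self branch scans one person's own sequence, while the cross branch lets the partner's motion modulate the state update through an input-dependent gate (a loose stand-in for selective SSM parameters). All names and scalar parameters here are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def self_module(seq, a=0.9, b=0.5):
    """Self branch: a linear scan over one person's own motion sequence."""
    h, out = 0.0, []
    for x in seq:
        h = a * h + b * x
        out.append(h)
    return out

def cross_module(seq, partner, a=0.9, b=0.5):
    """Cross branch: the partner's motion gates this person's state update."""
    h, out = 0.0, []
    for x, p in zip(seq, partner):
        g = sigmoid(p)         # gate driven by the partner's feature
        h = a * h + g * b * x  # partner-conditioned update
        out.append(h)
    return out

def interaction_step(seq_a, seq_b):
    """Each person's features combine a self branch with a cross branch
    conditioned on the other person, mirroring the two-module design."""
    fa = [s + c for s, c in zip(self_module(seq_a), cross_module(seq_a, seq_b))]
    fb = [s + c for s, c in zip(self_module(seq_b), cross_module(seq_b, seq_a))]
    return fa, fb
```

The symmetric call structure (A conditioned on B, and B on A) is what lets a single shared module pair model both directions of the interaction.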