RFM-Pose: Reinforcement-Guided Flow Matching for Fast Category-Level 6D Pose Estimation

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two challenges in category-level 6D object pose estimation: pose ambiguity caused by rotational symmetry, and the high computational cost of diffusion-model sampling. The authors propose a novel approach that integrates flow-matching generative models with reinforcement learning. The method leverages optimal transport to construct an efficient pose-generation trajectory, formulates the sampling process as a Markov decision process, and employs proximal policy optimization (PPO) to jointly optimize pose generation and scoring. The authors present this as the first integration of flow matching with reinforcement learning, treating the flow field as a learnable policy to enable end-to-end joint optimization. Evaluated on the REAL275 benchmark, the approach achieves state-of-the-art performance with significantly reduced computational overhead and demonstrates strong generalization by transferring effectively to pose-tracking tasks.
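To make the MDP framing above concrete, here is a minimal sketch in NumPy. It assumes an illustrative parameterisation in which each Euler integration step of the flow field is treated as a Gaussian-policy action, and shows the standard PPO clipped surrogate that would fine-tune such a policy; the function names and noise model are hypothetical, not the paper's exact formulation:

```python
import numpy as np

def rollout(v_field, x0, n_steps, noise_scale, rng):
    """Treat each integration step as an MDP action: the policy is a
    Gaussian centred on the deterministic Euler update of the flow field.
    Returns the final sample and the (state, action) trajectory."""
    x, dt, traj = x0.copy(), 1.0 / n_steps, []
    for i in range(n_steps):
        t = i * dt
        mean = x + dt * v_field(x, t)                      # Euler step
        x_next = mean + noise_scale * rng.standard_normal(x.shape)
        traj.append((x.copy(), x_next.copy()))
        x = x_next
    return x, traj

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate: mean of min(r * A, clip(r, 1-eps, 1+eps) * A),
    where r is the new/old policy probability ratio and A the advantage
    (here supplied by a value network scoring pose hypotheses)."""
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return float(np.mean(np.minimum(ratio * advantage, clipped * advantage)))
```

With `noise_scale=0` the rollout reduces to ordinary deterministic flow-matching sampling; the stochastic version gives the policy-gradient machinery something to explore, and the clipped objective keeps each PPO update close to the current sampling policy.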

📝 Abstract
Object pose estimation is a fundamental problem in computer vision and plays a critical role in virtual reality and embodied intelligence, where agents must understand and interact with objects in 3D space. Recently, score-based generative models have, to some extent, resolved the rotational-symmetry ambiguity problem in category-level pose estimation, but their efficiency remains limited by the high sampling cost of score-based diffusion. In this work, we propose a new framework, RFM-Pose, that accelerates category-level 6D object pose generation while actively evaluating sampled hypotheses. To improve sampling efficiency, we adopt a flow-matching generative model and generate pose candidates along an optimal transport path from a simple prior to the pose distribution. To further refine these candidates, we cast the flow-matching sampling process as a Markov decision process and apply proximal policy optimization to fine-tune the sampling policy. In particular, we interpret the flow field as a learnable policy and map an estimator to a value network, enabling joint optimization of pose generation and hypothesis scoring within a reinforcement learning framework. Experiments on the REAL275 benchmark demonstrate that RFM-Pose achieves favorable performance while significantly reducing computational cost. Moreover, similar to prior work, our approach can be readily adapted to object pose tracking and attains competitive results in this setting.
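As background for the optimal-transport path mentioned in the abstract, a minimal conditional flow-matching sketch (a toy NumPy illustration with assumed names; the paper's actual model is a learned network over 6D poses, not this scalar-vector version):

```python
import numpy as np

def ot_pair(x0, x1, t):
    """Linear (optimal-transport) interpolation between prior sample x0
    and data sample x1 at time t in [0, 1]."""
    return (1.0 - t) * x0 + t * x1

def target_velocity(x0, x1):
    """Conditional OT regression target: a constant velocity pointing
    straight from the prior sample to the data sample."""
    return x1 - x0

def euler_sample(v_field, x0, n_steps=10):
    """Generate by integrating dx/dt = v(x, t) from t=0 to t=1."""
    x, dt = x0.copy(), 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * v_field(x, t)
    return x
```

Because the OT path is a straight line with constant velocity, a well-trained velocity field can be integrated accurately with very few Euler steps, which is the efficiency advantage over score-based diffusion sampling that the abstract highlights.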
Problem

Research questions and friction points this paper is trying to address.

6D pose estimation
category-level
rotational symmetry ambiguity
sampling efficiency
score-based generative models
Innovation

Methods, ideas, or system contributions that make the work stand out.

flow matching
reinforcement learning
6D pose estimation
optimal transport
proximal policy optimization
Diya He
Department of Automation, University of Science and Technology of China, Hefei 230027, China
Qingchen Liu
Department of Automation, University of Science and Technology of China, Hefei 230027, China
Cong Zhang
Department of Automation, University of Science and Technology of China, Hefei 230027, China
Jiahu Qin
University of Science and Technology of China
Autonomous Intelligent Systems · Cyber-Physical Systems · Human-Robot Interaction