Coarse-Guided Visual Generation via Weighted h-Transform Sampling

📅 2026-03-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of efficiently generating high-quality visual samples from low-quality, coarse references without requiring model training or reliance on forward degradation operators. The authors propose a training-free guidance generation method that introduces the h-transform—previously unexplored in this context—into coarse-guided synthesis. By incorporating a drift function to modify transition probabilities during diffusion sampling and designing a noise-level-adaptive weighting schedule, the approach effectively balances guidance strength and sample fidelity. Extensive experiments demonstrate that the method achieves high-fidelity and highly generalizable coarse-guided synthesis in both image and video generation tasks, significantly outperforming existing training-free guidance strategies.

📝 Abstract
Coarse-guided visual generation, which synthesizes fine visual samples from degraded or low-fidelity coarse references, is essential for various real-world applications. While training-based approaches are effective, they are inherently limited by high training costs and by the restricted generalization that paired data collection imposes. Accordingly, recent training-free works leverage pretrained diffusion models and incorporate guidance during the sampling process. However, these training-free methods either require knowledge of the forward (fine-to-coarse) transformation operator, e.g., bicubic downsampling, or struggle to balance guidance strength against synthesis quality. To address these challenges, we propose a novel guidance method based on the h-transform, a tool for constraining stochastic processes (such as the sampling process) to desired conditions. Specifically, we modify the transition probability at each sampling timestep by adding a drift function to the original differential equation, which approximately steers the generation toward the ideal fine sample. To handle the unavoidable approximation errors, we introduce a noise-level-aware schedule that gradually de-weights the drift term as the error increases, ensuring both guidance adherence and high-quality synthesis. Extensive experiments across diverse image and video generation tasks demonstrate the effectiveness and generalization of our method.
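The abstract's two key ingredients (a guidance drift added to the reverse-diffusion update, and a noise-level-aware weight that fades the drift out when the noise is large) can be illustrated with a toy sketch. Everything below is illustrative, not the paper's actual implementation: the quadratic-linear guidance toward the coarse reference, the `weight_schedule` form, and the Gaussian toy score are all assumptions.

```python
import numpy as np

def weight_schedule(sigma, sigma_max=1.0, power=2.0):
    """Hypothetical noise-level-aware weight: the guidance drift is
    de-weighted as the noise level (and thus the approximation error
    of the h-transform drift) grows."""
    return (1.0 - min(sigma / sigma_max, 1.0)) ** power

def guided_step(x, coarse_ref, score_fn, sigma, dt):
    """One Euler-Maruyama reverse step with an h-transform-style drift.

    The added drift pulls the sample toward the coarse reference; its
    strength is modulated by the noise-aware schedule."""
    base_drift = score_fn(x, sigma)        # pretrained model's score (toy here)
    guide_drift = coarse_ref - x           # steer toward the coarse reference
    drift = base_drift + weight_schedule(sigma) * guide_drift
    noise = np.sqrt(dt) * sigma * np.random.randn(*np.shape(x))
    return x + drift * dt + noise

# Toy demo: scalar sample, score of a standard Gaussian prior (-x),
# noise level annealed from 1 toward 0 over 200 steps.
np.random.seed(0)
x = 3.0
for step in range(200):
    sigma = 1.0 - step / 200
    x = guided_step(x, coarse_ref=0.5,
                    score_fn=lambda v, s: -v, sigma=sigma, dt=0.02)
```

At high noise the weight is near zero, so the sampler follows the prior score unchanged; as the noise anneals, the guidance term dominates and the trajectory settles between the prior mode and the coarse reference.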
Problem

Research questions and friction points this paper is trying to address.

coarse-guided visual generation
training-free
diffusion models
h-transform
sampling guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

coarse-guided generation
h-transform sampling
training-free diffusion
drift function
noise-level-aware weighting
Yanghao Wang
Peking University
neuromorphic computing, memristor, nonlinear dynamics

Ziqi Jiang
The Hong Kong University of Science and Technology

Zhen Wang
School of Mathematics and Computer Science, Yan'an University
Chaos, Dynamical Systems

Long Chen
The Hong Kong University of Science and Technology