CameraMaster: Unified Camera Semantic-Parameter Control for Photography Retouching

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image post-processing methods struggle to achieve physically consistent, precisely parameter-controllable photographic enhancement due to ambiguous text prompts, tightly coupled model architectures, difficulties in multi-parameter coordination, and insensitivity to subtle parameter variations. This paper proposes CameraMaster, a unified camera-aware diffusion framework that decouples camera parameters—exposure, white balance, and zoom—from semantic content. CameraMaster employs parameter-embedding modulation, hierarchical cross-attention, and temporal gating to jointly model semantic intent and physical parameters, enabling fine-grained, linearly responsive, and composable multi-parameter control without requiring a separate module per parameter. Evaluated on a 78K-image dataset, CameraMaster demonstrates high parameter-response linearity and natural composability, significantly outperforming existing controllable image generation approaches in both physical fidelity and parametric controllability.

📝 Abstract
Text-guided diffusion models have greatly advanced image editing and generation. However, achieving physically consistent image retouching with precise parameter control (e.g., exposure, white balance, zoom) remains challenging. Existing methods either rely solely on ambiguous and entangled text prompts, which hinders precise camera control, or train separate heads/weights for parameter adjustment, which compromises scalability, multi-parameter composition, and sensitivity to subtle variations. To address these limitations, we propose CameraMaster, a unified camera-aware framework for image retouching. The key idea is to explicitly decouple the camera directive and then coherently integrate two critical information streams: a directive representation that captures the photographer's intent, and a parameter embedding that encodes precise camera settings. CameraMaster first uses the camera parameter embedding to modulate both the camera directive and the content semantics. The modulated directive is then injected into the content features via cross-attention, yielding a strongly camera-sensitive semantic context. In addition, the directive and camera embeddings are injected as conditioning and gating signals into the time embedding, enabling unified, layer-wise modulation throughout the denoising process and enforcing tight semantic-parameter alignment. To train and evaluate CameraMaster, we construct a large-scale dataset of 78K image-prompt pairs annotated with camera parameters. Extensive experiments show that CameraMaster produces monotonic and near-linear responses to parameter variations, supports seamless multi-parameter composition, and significantly outperforms existing methods.
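The abstract describes the two conditioning steps — a camera-parameter embedding modulating the directive representation, which is then injected into content features via cross-attention — only at a high level. Below is a minimal NumPy sketch of one plausible reading, assuming a FiLM-style scale-and-shift for the modulation and single-head attention for the injection; all dimensions, weight matrices, and helper names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # feature dimension (illustrative)

def film(x, gamma, beta):
    """FiLM-style modulation: scale and shift features by a conditioning signal."""
    return gamma * x + beta

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, w_q, w_k, w_v):
    """Single-head cross-attention: content queries attend to directive tokens."""
    q = queries @ w_q            # (n_content, D)
    k = keys_values @ w_k        # (n_directive, D)
    v = keys_values @ w_v        # (n_directive, D)
    attn = softmax(q @ k.T / np.sqrt(D), axis=-1)
    return attn @ v              # (n_content, D)

# Illustrative inputs: 4 directive tokens, 16 content tokens, 3 camera params.
directive = rng.standard_normal((4, D))
content = rng.standard_normal((16, D))
camera_params = np.array([0.5, -0.2, 1.0])  # e.g. exposure, white balance, zoom

# Project raw parameters to per-channel scale (gamma) and shift (beta).
w_gamma = rng.standard_normal((3, D)) * 0.1
w_beta = rng.standard_normal((3, D)) * 0.1
gamma = 1.0 + camera_params @ w_gamma       # near-identity initialization
beta = camera_params @ w_beta

# 1) Camera parameter embedding modulates the directive representation.
directive_mod = film(directive, gamma, beta)

# 2) Modulated directive is injected into content features via cross-attention,
#    yielding a camera-sensitive semantic context.
w_q, w_k, w_v = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))
context = content + cross_attention(content, directive_mod, w_q, w_k, w_v)
print(context.shape)  # (16, 64)
```

Because the camera parameters enter through a near-identity scale-and-shift, small parameter changes perturb the context smoothly, which is consistent with the monotonic, near-linear responses the abstract reports.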
Problem

Research questions and friction points this paper is trying to address.

Achieving physically consistent image retouching with precise camera parameter control
Overcoming limitations of text-only methods and separate parameter training approaches
Enabling unified semantic-parameter alignment for photography enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified camera-aware framework for image retouching
Decouples camera directive and parameter embedding streams
Modulates denoising process with semantic-parameter alignment
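The last bullet — injecting the directive and camera embeddings into the time embedding as conditioning and gating signals — is only named, not specified. A hedged sketch of one common way such gating works (additive conditioning plus a sigmoid gate on a sinusoidal timestep embedding; the combination rule and all names here are assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 64  # embedding dimension (illustrative)

def sinusoidal_time_embedding(t, dim):
    """Standard sinusoidal embedding of the diffusion timestep."""
    freqs = np.exp(-np.log(10000.0) * np.arange(dim // 2) / (dim // 2))
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative conditioning vectors (not the paper's learned embeddings).
directive_emb = rng.standard_normal(D)
camera_emb = rng.standard_normal(D)
w_gate = rng.standard_normal((D, D)) / np.sqrt(D)

t_emb = sinusoidal_time_embedding(t=250, dim=D)

# Directive conditions additively; the camera embedding acts as a per-channel
# gate, so every denoising layer that consumes the time embedding sees both
# signals — one route to layer-wise semantic-parameter alignment.
gate = sigmoid(camera_emb @ w_gate)
conditioned_t = gate * (t_emb + directive_emb)
print(conditioned_t.shape)  # (64,)
```

Routing both signals through the time embedding means no per-parameter heads are needed: the same pathway carries exposure, white balance, and zoom jointly.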
Authors

Qirui Yang — Tianjin University
Yang Yang — vivo Mobile Communication Co., Ltd
Ying Zeng — vivo Mobile Communication Co., Ltd
Xiaobin Hu — Tencent Youtu Lab; Technische Universität München (TUM)
Bo Li — vivo Mobile Communication Co., Ltd
Huanjing Yue — Tianjin University
Jingyu Yang — Tianjin University
Peng-Tao Jiang — vivo