AI Summary
3D face reconstruction under low-light and occlusion conditions, such as during sleep monitoring, remains challenging for conventional optical methods. Method: This paper proposes the first end-to-end differentiable 3D face reconstruction framework based on radar imagery. The authors construct a synthetic radar dataset rendered with a physics-based but non-differentiable imaging model, and design an object-specific learnable radar decoder that, combined with a CNN encoder estimating the parameters of a 3D Morphable Model (3DMM), enables unsupervised parameter optimization in an analysis-by-synthesis paradigm. Contributions/Results: (i) the first model-based 3D face reconstruction framework operating directly in the radar domain; (ii) the first differentiable radar decoder architecture supporting gradient-based optimization; (iii) empirical validation on both synthetic and real radar data (with 3D ground truth from four subjects): the encoder alone achieves high reconstruction fidelity on synthetic data, while joint training significantly improves generalization to real radar images and preserves fine geometric detail.
Abstract
The 3D reconstruction of faces has gained wide attention in computer vision and is used in many fields of application, for example, animation, virtual reality, and even forensics. This work is motivated by monitoring patients in sleep laboratories. Due to their unique characteristics, sensors from the radar domain have advantages compared to optical sensors, namely penetration of electrically non-conductive materials and independence from ambient light. These advantages of radar signals unlock new applications and require adaptation of 3D reconstruction frameworks. We propose a novel model-based method for 3D reconstruction from radar images. We generate a dataset of synthetic radar images with a physics-based but non-differentiable radar renderer. This dataset is used to train a CNN-based encoder to estimate the parameters of a 3D morphable face model. While the encoder alone already produces strong reconstructions of synthetic data, we extend our reconstruction in an analysis-by-synthesis fashion to a model-based autoencoder. This is enabled by learning the rendering process in the decoder, which acts as an object-specific differentiable radar renderer. The combination of both network parts is then trained to minimize both the parameter loss and the reconstruction loss on the resulting radar image. This yields the additional benefit that, at test time, the parameters can be further optimized by fine-tuning the autoencoder on the image loss alone, without supervision. We evaluate our framework on generated synthetic face images as well as on real radar images with 3D ground truth of four individuals.
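To make the training setup concrete, the following is a minimal toy sketch (not the paper's implementation) of the model-based autoencoder idea: an encoder maps a radar image to parameters, a learned decoder acts as a differentiable renderer mapping parameters back to an image, the joint objective combines a parameter loss with an image loss, and at test time the parameter estimate is refined on the image loss alone. All shapes, the linear stand-ins for the CNN encoder and radar decoder, and the function names (`encode`, `decode`, `finetune_params`) are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: a radar image as a flat vector,
# a low-dimensional 3DMM-style parameter vector.
IMG_DIM, PARAM_DIM = 64, 8

# Linear stand-ins for the CNN encoder and the learned
# object-specific differentiable radar decoder.
W_enc = rng.normal(scale=0.1, size=(PARAM_DIM, IMG_DIM))
W_dec = rng.normal(scale=0.1, size=(IMG_DIM, PARAM_DIM))

def encode(img):
    """Encoder: radar image -> parameter estimate."""
    return W_enc @ img

def decode(params):
    """Learned differentiable renderer: parameters -> radar image."""
    return W_dec @ params

def combined_loss(img, true_params, w_param=1.0, w_img=1.0):
    """Joint training objective: parameter loss + image reconstruction loss."""
    p = encode(img)
    loss_param = np.mean((p - true_params) ** 2)
    loss_img = np.mean((decode(p) - img) ** 2)
    return w_param * loss_param + w_img * loss_img

def finetune_params(img, steps=200, lr=0.05):
    """Test-time refinement: no ground-truth parameters are available,
    so optimize the estimate on the image loss alone (unsupervised)."""
    p = encode(img)  # initialize from the encoder's prediction
    for _ in range(steps):
        residual = decode(p) - img
        grad = (2.0 / IMG_DIM) * (W_dec.T @ residual)  # d(image MSE)/dp
        p -= lr * grad
    return p

# Refinement should reduce the image loss relative to the raw encoder output.
img = rng.normal(size=IMG_DIM)
p0, p1 = encode(img), finetune_params(img)
loss0 = np.mean((decode(p0) - img) ** 2)
loss1 = np.mean((decode(p1) - img) ** 2)
assert loss1 < loss0
```

In the paper the decoder is learned precisely because the physics-based radar renderer used to generate the training data is non-differentiable; the learned decoder supplies the gradients that make this test-time optimization possible.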