End-to-End Learning of Probabilistic Constellation Shaping through Importance Sampling

πŸ“… 2025-06-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses a key challenge in probabilistic constellation shaping (PCS) for coded modulation systems: end-to-end learning has traditionally required manual derivation of additional gradient terms, which is error-prone and does not generalize across channel models. The authors propose a mutual-information-driven loss function that combines importance sampling with automatic differentiation, eliminating the need for hand-crafted analytical gradients of the probability mass function while provably yielding exact gradient estimates. The approach supports flexible rate adaptation and narrows the gap to Shannon capacity. The framework integrates a deep autoencoder architecture, a probabilistic shaping model, and differentiable channel simulation. Evaluated on both AWGN and a simplified intensity-modulation direct-detection (IM-DD) channel, it matches state-of-the-art results, validating both gradient accuracy and cross-channel generalization.

πŸ“ Abstract
Probabilistic constellation shaping enables easy rate adaptation and has been proven to reduce the gap to Shannon capacity. Constellation point probabilities are optimized to maximize either the mutual information or the bit-wise mutual information. The optimization problem is, however, challenging even for simple channel models. While autoencoder-based machine learning has been applied successfully to solve this problem [1], it requires manual computation of additional terms for the gradient, which is an error-prone task. In this work, we present novel loss functions for autoencoder-based learning of probabilistic constellation shaping for coded modulation systems using automatic differentiation and importance sampling. We show analytically that our proposed approach also uses exact gradients of the constellation point probabilities for the optimization. In simulations, our results closely match those of [1] for the additive white Gaussian noise channel and a simplified model of the intensity-modulation direct-detection channel.
Problem

Research questions and friction points this paper is trying to address.

Optimizing constellation point probabilities for mutual information
Overcoming gradient computation challenges in autoencoder-based learning
Validating novel loss functions for probabilistic constellation shaping
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoencoder-based learning with novel loss functions
Uses automatic differentiation and importance sampling
Optimizes exact gradients of constellation probabilities
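To see why sampling-based gradient estimates of the constellation probabilities can be exact in expectation, consider a toy sketch (not the paper's actual loss): for an objective J(θ) = E_{p_θ}[f(X)], the score-function identity ∇J = E_{p_θ}[f(X) ∇ log p_θ(X)] lets samples drive the gradient without differentiating through the sampling itself, which is the effect the importance-sampling loss achieves under automatic differentiation. The 4-ASK points, logits, and cost f(x) below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

points = np.array([-3.0, -1.0, 1.0, 3.0])  # hypothetical 4-ASK constellation
theta = np.array([0.2, -0.1, 0.4, 0.0])    # trainable logits (illustrative values)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p = softmax(theta)

# Stand-in per-symbol objective f(x); here: negative transmit power.
f = -points**2

# Exact gradient of J(theta) = sum_j p_j f_j via the softmax Jacobian:
#   dp_j/dtheta_k = p_j * (delta_jk - p_k)
grad_exact = p * f - p * (p @ f)

# Score-function (sampling-based) estimate:
#   grad J ~ mean over samples of f(x_i) * grad log p(x_i)
n = 500_000
idx = rng.choice(len(points), size=n, p=p)
grad_log_p = np.eye(len(points))[idx] - p          # d log p(x_i) / d theta
grad_est = (f[idx][:, None] * grad_log_p).mean(axis=0)

print(np.allclose(grad_exact, grad_est, atol=0.1))
```

The estimator is unbiased, so with enough samples the Monte-Carlo gradient converges to the analytic one; the paper's contribution is obtaining this behavior automatically from a loss function differentiated by the framework, rather than by hand-deriving the score term.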
πŸ”Ž Similar Papers
No similar papers found.