🤖 AI Summary
This paper studies the optimal payoff selection problem for expected utility maximizers under a Bregman–Wasserstein (BW) divergence constraint, designed to control deviation from a reference payoff while allowing asymmetric penalties for upside and downside deviations—better aligning with real-world investment objectives. Methodologically, it provides the first analytical solution to the optimal payoff structure under BW divergence constraints, employing a convex function φ to flexibly encode directional deviation preferences and thereby overcoming the symmetry limitation inherent in classical Wasserstein distance. By integrating convex analysis, optimal transport theory, and stochastic optimization, the authors formulate a utility maximization framework regularized by a Bregman penalty term. Theoretically, they derive a closed-form expression for the optimal payoff. Numerical experiments demonstrate that tuning φ enables precise calibration of risk attitudes and significantly improves alignment between payoff allocation and investor-specific goals.
📝 Abstract
We study optimal payoff choice for an expected utility maximizer under the constraint that their payoff is not allowed to deviate ``too much'' from a given benchmark. We solve this problem when the deviation is assessed via a Bregman-Wasserstein (BW) divergence, generated by a convex function $\phi$. Unlike the Wasserstein distance (i.e., when $\phi(x)=x^2$), the inherent asymmetry of the BW divergence makes it possible to penalize positive deviations differently from negative ones. As a main contribution, we provide the optimal payoff in this setting. Numerical examples illustrate that the choice of $\phi$ allows the payoff choice to be better aligned with the objectives of investors.
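The asymmetry mentioned in the abstract can be illustrated with the pointwise Bregman divergence $D_\phi(x,y) = \phi(x) - \phi(y) - \phi'(y)(x-y)$: with $\phi(x)=x^2$ it collapses to the symmetric squared distance, while other convex generators penalize up- and down-deviations differently. A minimal sketch (the exponential generator below is a hypothetical illustrative choice, not one taken from the paper):

```python
import numpy as np

def bregman_divergence(phi, dphi, x, y):
    """Pointwise Bregman divergence D_phi(x, y) = phi(x) - phi(y) - phi'(y)(x - y)."""
    return phi(x) - phi(y) - dphi(y) * (x - y)

# With phi(x) = x^2 the divergence reduces to the squared distance (symmetric case).
phi_sq = lambda x: x ** 2
dphi_sq = lambda x: 2 * x
assert np.isclose(bregman_divergence(phi_sq, dphi_sq, 3.0, 1.0), (3.0 - 1.0) ** 2)

# A generic asymmetric convex generator, e.g. phi(x) = exp(x) (illustrative only):
# an equal-sized deviation below the benchmark is penalized differently from one above it.
phi_exp = np.exp          # phi(x) = exp(x), so phi'(x) = exp(x) as well
d_down = bregman_divergence(phi_exp, phi_exp, -1.0, 0.0)  # payoff 1 unit below benchmark
d_up = bregman_divergence(phi_exp, phi_exp, 1.0, 0.0)     # payoff 1 unit above benchmark
print(d_down, d_up)  # unequal values: the penalty is direction-dependent
```

Here `d_down` equals $e^{-1}\approx 0.368$ while `d_up` equals $e-2\approx 0.718$, so the same absolute deviation costs roughly twice as much on the upside under this generator; the paper's point is that $\phi$ can be chosen to encode whichever directional preference the investor has.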