AI Summary
Existing methods often rely on soft penalties to approximate sample-level constraints, which struggle to strictly enforce hard requirements. This work proposes the first sample-wise constrained learning framework based on the sequential penalty method, enabling strict satisfaction of per-sample constraints within deep learning while providing convergence guarantees. By systematically integrating sequential penalty mechanisms into end-to-end training, the approach balances theoretical rigor with practical feasibility. Experiments on image processing tasks demonstrate that the proposed framework not only ensures strict adherence to constraints but also maintains competitive model performance.
Abstract
In many learning tasks, certain requirements on the processing of individual data samples should arguably be formalized as strict constraints in the underlying optimization problem, rather than through arbitrary penalties. We show that, in these scenarios, learning can be carried out by exploiting a sequential penalty method that handles the constraints properly. The proposed algorithm is shown to possess convergence guarantees under assumptions that are reasonable in deep learning scenarios. Moreover, experiments on image processing tasks show that the method is viable in practice.
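To illustrate the general idea behind a sequential penalty method, here is a minimal sketch on a one-dimensional toy problem (assumed for illustration; this is not the paper's algorithm or a neural network setting): minimize f(x) = (x - 2)^2 subject to the hard constraint g(x) = x - 1 <= 0, whose solution is x = 1. Each outer iteration approximately solves an unconstrained subproblem with a quadratic penalty on the constraint violation, then increases the penalty coefficient.

```python
# Sequential (quadratic) penalty method on a toy constrained problem.
# Objective: f(x) = (x - 2)^2, constraint: g(x) = x - 1 <= 0.
# The penalized subproblem is f(x) + rho * max(0, g(x))^2.

def f_grad(x):
    return 2.0 * (x - 2.0)

def g(x):
    return x - 1.0  # feasible when g(x) <= 0

def penalty_grad(x, rho):
    # Gradient of f(x) + rho * max(0, g(x))^2.
    v = max(0.0, g(x))
    return f_grad(x) + 2.0 * rho * v

def sequential_penalty(x=0.0, rho=1.0, growth=10.0, outer=6, inner=2000):
    for _ in range(outer):
        # Step size scaled to the subproblem's curvature (at most 2 + 2*rho),
        # so the inner gradient descent stays stable as rho grows.
        lr = 0.4 / (1.0 + rho)
        for _ in range(inner):  # approximately solve the penalized subproblem
            x -= lr * penalty_grad(x, rho)
        rho *= growth  # tighten the penalty before the next outer iteration
    return x

x_star = sequential_penalty()
# x_star approaches the constrained optimum x = 1 as rho grows.
```

As the penalty coefficient rho increases, the minimizer of each subproblem, (2 + rho) / (1 + rho), approaches the constrained solution x = 1; the constraint is satisfied exactly only in the limit, which is why the paper's convergence analysis matters.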