Problem
Research questions and friction points this paper is trying to address.
Addresses the inefficiency of gradient-based training in deep learning
Introduces auxiliary variables to decouple the layers of a neural network
Ensures consistency between the reformulated and original loss functions
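A minimal sketch of the idea, assuming a quadratic-penalty reformulation (the paper's exact formulation may differ): each layer's output gets an auxiliary variable z_l, and coupling terms tie z_l back to the layer map. The two-layer network, the ReLU activation, and the penalty weight rho are illustrative choices, not taken from the paper.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def reformulated_loss(x, y, W1, W2, z1, z2, rho=1.0):
    """Penalty form: data loss on z2 plus quadratic coupling terms
    that penalize z_l deviating from the layer's actual output."""
    data = 0.5 * np.sum((z2 - y) ** 2)
    c1 = 0.5 * rho * np.sum((z1 - relu(W1 @ x)) ** 2)   # couple z1 to layer 1
    c2 = 0.5 * rho * np.sum((z2 - W2 @ z1) ** 2)        # couple z2 to layer 2
    return data + c1 + c2

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=2)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

# Initializing the auxiliaries on the forward pass makes every coupling
# term vanish, so the reformulated loss equals the original loss exactly.
z1 = relu(W1 @ x)
z2 = W2 @ z1
original = 0.5 * np.sum((W2 @ relu(W1 @ x) - y) ** 2)
print(np.isclose(reformulated_loss(x, y, W1, W2, z1, z2), original))
```

Because each coupling term involves only one weight matrix, the weights of different layers can be updated independently given the auxiliaries.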
Innovation
Methods, ideas, or system contributions that make the work stand out.
Introduces auxiliary variables that separate the layers into independent sub-problems
Uses self-adaptive weights to keep the reformulated loss consistent with the original
Reformulates the loss function into a form that is easier to optimize
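One way the self-adaptive weights could work is a multiplier-style ascent rule: the weight on each coupling term grows with that term's residual, so violated constraints are penalized more heavily over time. This is a common scheme, assumed here for illustration; the update rule in the paper may differ.

```python
import numpy as np

def update_weights(weights, residuals, lr=0.1):
    """Self-adaptive update: w_l <- w_l + lr * ||r_l||^2, so the weight
    of a poorly satisfied coupling constraint increases each step."""
    return [w + lr * float(np.sum(r ** 2)) for w, r in zip(weights, residuals)]

weights = [1.0, 1.0]
# Hypothetical residuals: layer 1's constraint is violated, layer 2's holds.
residuals = [np.array([0.5, -0.5]), np.array([0.0, 0.0])]
weights = update_weights(weights, residuals)
print(weights)  # → [1.05, 1.0]: only the violated constraint's weight grows
```

Driving the weights up where residuals persist pushes the auxiliary variables toward the true layer outputs, which is what keeps the reformulated minimizers consistent with those of the original loss.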