AI Summary
This work identifies a temporal inversion phenomenon in Conformer-based attention-based encoder-decoder (AED) models: during training, the encoder self-attention comes to dominate the output over the preceding feed-forward module and only lets time-reversed information pass through, which manifests as monotonically decreasing decoder cross-attention weights. The work analyzes this mechanism, tracing its origin to the initial behavior of the decoder cross-attention, which encourages the encoder self-attention to connect the initial frames to all other informative frames, and localizes the point in training at which the inversion sets in. It further proposes strategies for avoiding the flipping and investigates a novel method to obtain label-frame-position alignments from the gradients of the label log probabilities w.r.t. the encoder input frames, enabling fine-grained temporal localization without manual annotations.
Abstract
We sometimes observe monotonically decreasing cross-attention weights in our Conformer-based global attention-based encoder-decoder (AED) models. Further investigation shows that the Conformer encoder reverses the sequence along the time dimension. We analyze the initial behavior of the decoder cross-attention mechanism and find that it encourages the Conformer encoder self-attention to build a connection between the initial frames and all other informative frames. Furthermore, we show that, at some point in training, the self-attention module of the Conformer starts to dominate the output over the preceding feed-forward module, which then only lets the reversed information pass through. We propose methods and ideas for avoiding this flipping, and we investigate a novel method to obtain label-frame-position alignments by using the gradients of the label log probabilities w.r.t. the encoder input frames.
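The gradient-based alignment idea can be sketched as follows. This is a minimal PyTorch illustration, not the paper's implementation: a toy linear model with mean pooling stands in for the Conformer AED, and the label sequence is hypothetical. Only the gradient mechanics are the point: each label's log probability is differentiated w.r.t. the encoder input frames, and the label is aligned to the frame with the largest gradient norm.

```python
# Minimal sketch of gradient-based label-frame alignment.
# Assumption: a toy linear "model" with mean pooling replaces the actual
# Conformer encoder-decoder; frame count, dims, and labels are made up.
import torch
import torch.nn as nn

torch.manual_seed(0)
T, D, V = 20, 8, 5                               # frames, feature dim, vocab size
frames = torch.randn(T, D, requires_grad=True)   # encoder input frames
model = nn.Linear(D, V)                          # toy stand-in for the AED model

# Label log probabilities from pooled per-frame logits (toy model).
log_probs = torch.log_softmax(model(frames).mean(dim=0), dim=-1)

labels = [1, 3, 2]                               # hypothetical label sequence
alignment = []
for y in labels:
    # Gradient of this label's log probability w.r.t. every input frame;
    # retain_graph allows differentiating the same graph once per label.
    (grad,) = torch.autograd.grad(log_probs[y], frames, retain_graph=True)
    # Align the label to the frame with the largest gradient norm.
    alignment.append(int(grad.norm(dim=-1).argmax()))

print(alignment)                                 # one frame index per label
```

In a real AED model, `log_probs` would come from the decoder at each output step, so the gradients become step-dependent and trace out a monotone (or, under the inversion described above, reversed) alignment path.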