Three-Stream Temporal-Shift Attention Network Based on Self-Knowledge Distillation for Micro-Expression Recognition

📅 2024-06-25
📈 Citations: 1
Influential: 0
🤖 AI Summary
Micro-expression recognition faces two major challenges: extremely subtle facial muscle movements and severe scarcity of labeled training data. To address these, this paper proposes SKD-TSTSAN, a three-stream temporal-shift attention network that is the first to introduce self-knowledge distillation into this task. The network integrates three complementary components: learnable motion magnification, Efficient Channel Attention (ECA), and the parameter-free Temporal Shift Module (TSM). Notably, TSM enables zero-parameter cross-temporal motion fusion, while self-knowledge distillation improves feature discriminability through multi-level auxiliary classifiers and deep supervision. Extensive experiments on four benchmark datasets (CASME II, SAMM, MMEW, and CAS(ME)³) show that SKD-TSTSAN outperforms existing methods and achieves new state-of-the-art performance.
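The self-knowledge-distillation objective described above (auxiliary classifiers supervised both by the ground-truth label and by the deepest classifier's softened prediction) can be sketched roughly as below. The temperature `t`, the mixing weight `alpha`, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = z / t
    e = np.exp(z - z.max())
    return e / e.sum()

def skd_loss(aux_logits, deep_logits, label, t=3.0, alpha=0.5):
    """Self-knowledge-distillation loss sketch (hyperparameters assumed).

    Each auxiliary classifier pays a cross-entropy cost on the hard
    label plus a KL cost pulling it toward the deepest classifier's
    softened prediction, implementing deep supervision.
    """
    teacher = softmax(deep_logits, t)           # softened "teacher" from the deepest block
    total = 0.0
    for logits in aux_logits:
        p = softmax(logits)
        ce = -np.log(p[label] + 1e-12)          # cross-entropy with the ground-truth label
        q = softmax(logits, t)
        kl = np.sum(teacher * (np.log(teacher + 1e-12) - np.log(q + 1e-12)))
        total += (1 - alpha) * ce + alpha * (t ** 2) * kl
    return total / len(aux_logits)
```

When an auxiliary classifier already agrees with the deepest one, the KL term vanishes and only the label loss remains, so the distillation signal acts mainly on the shallower, less-confident blocks.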

📝 Abstract
Micro-expressions are subtle facial movements that occur spontaneously when people try to conceal real emotions. Micro-expression recognition is crucial in many fields, including criminal analysis and psychotherapy. However, micro-expression recognition is challenging since micro-expressions have low intensity and public datasets are small in size. To this end, a three-stream temporal-shift attention network based on self-knowledge distillation called SKD-TSTSAN is proposed in this paper. Firstly, to address the low intensity of muscle movements, we utilize learning-based motion magnification modules to enhance the intensity of muscle movements. Secondly, we employ efficient channel attention modules in the local-spatial stream to make the network focus on facial regions that are highly relevant to micro-expressions. In addition, temporal shift modules are used in the dynamic-temporal stream, which enables temporal modeling with no additional parameters by mixing motion information from two different temporal domains. Furthermore, we introduce self-knowledge distillation into the micro-expression recognition task by introducing auxiliary classifiers and using the deepest section of the network for supervision, encouraging all blocks to fully explore the features of the training set. Finally, extensive experiments are conducted on four public datasets: CASME II, SAMM, MMEW, and CAS(ME)³. The experimental results demonstrate that our SKD-TSTSAN outperforms other existing methods and achieves new state-of-the-art performance. Our code will be available at https://github.com/GuanghaoZhu663/SKD-TSTSAN.
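The parameter-free temporal shift mentioned in the abstract can be illustrated with a minimal sketch: a fraction of the channels is shifted along the time axis so each frame mixes features from its neighbors, at zero parameter cost. The `(T, C, H, W)` shape convention and the `shift_div` value are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def temporal_shift(x, shift_div=4):
    """TSM-style parameter-free temporal shift (illustrative sketch).

    x: array of shape (T, C, H, W), a clip of T frames.
    One channel fold is shifted toward earlier frames, one toward
    later frames, and the remaining channels stay in place, so the
    operation mixes information across adjacent time steps without
    any learnable weights. Boundary frames are zero-padded.
    """
    t, c, h, w = x.shape
    fold = c // shift_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                   # pull features from the next frame
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]   # pull features from the previous frame
    out[:, 2 * fold:] = x[:, 2 * fold:]              # leave the remaining channels untouched
    return out
```

Because the shift is pure memory movement, it adds temporal modeling to a 2-D backbone without increasing the parameter count, which matches the "no additional parameters" claim.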
Problem

Research questions and friction points this paper is trying to address.

Enhancing low-intensity micro-expression muscle movements
Focusing on facial regions relevant to micro-expressions
Addressing small dataset size via self-knowledge distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning-based motion magnification enhances muscle movements
Efficient channel attention focuses on key facial regions
Temporal shift modules enable parameter-free temporal modeling
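The efficient-channel-attention idea in the bullets above can be sketched numerically: global average pooling yields a per-channel descriptor, a 1-D convolution over the channel dimension models local cross-channel interaction, and a sigmoid gate rescales the feature map. The fixed averaging kernel below stands in for the learned weights, and the kernel size is an assumption.

```python
import numpy as np

def eca(x, kernel_size=3):
    """Efficient Channel Attention (ECA) -- illustrative sketch.

    x: feature map of shape (C, H, W).
    """
    c, h, w = x.shape
    desc = x.mean(axis=(1, 2))                        # global average pooling -> (C,)
    pad = kernel_size // 2
    padded = np.pad(desc, pad, mode="edge")           # pad so output length stays C
    kernel = np.full(kernel_size, 1.0 / kernel_size)  # stand-in for the learned 1-D conv
    conv = np.convolve(padded, kernel, mode="valid")  # local cross-channel interaction
    gate = 1.0 / (1.0 + np.exp(-conv))                # sigmoid channel gate
    return x * gate[:, None, None]                    # rescale each channel
```

The appeal of ECA over full channel attention is that the 1-D convolution needs only `kernel_size` weights regardless of channel count, which suits the small micro-expression datasets the paper targets.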
Guanghao Zhu
MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
Lin Liu
MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
Yuhao Hu
MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
Haixin Sun
MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
Fang Liu
MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
Xiaohui Du
MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
Ruqian Hao
University of Electronic Science and Technology of China
Medical image processing · deep learning · active learning
Juanxiu Liu
MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
Yong Liu
School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
Hao Deng
Engineer · recommendation systems
Jing Zhang
MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China