Task-Augmented Cross-View Imputation Network for Partial Multi-View Incomplete Multi-Label Classification

📅 2024-09-12
🏛️ Neural Networks
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address the prevalent challenges of partial view missingness and incomplete label annotations in multi-view multi-label learning, this paper proposes a two-stage task-driven cross-view completion framework. First, a task-enhanced implicit completion mechanism is introduced: view-specific encoder-classifier modules, grounded in the information bottleneck principle, extract discriminative features; second, a semantics-augmented multi-view autoencoder reconstruction network is coupled to jointly optimize missing view recovery and multi-label classification. Extensive experiments on five benchmark datasets demonstrate that the proposed method significantly outperforms existing state-of-the-art approaches, achieving average classification accuracy improvements of 3.2%–5.8%. The framework effectively mitigates performance degradation induced by view incompleteness, establishing a new benchmark for robust multi-view multi-label learning under partial observability.

Technology Category

Application Category

📝 Abstract
In real-world scenarios, multi-view multi-label learning often encounters the challenge of incomplete training data due to limitations in data collection and unreliable annotation processes. The absence of multi-view features impairs the comprehensive understanding of samples, omitting crucial details essential for classification. To address this issue, we present a task-augmented cross-view imputation network (TACVI-Net) for the purpose of handling partial multi-view incomplete multi-label classification. Specifically, we employ a two-stage network to derive highly task-relevant features to recover the missing views. In the first stage, we leverage the information bottleneck theory to obtain a discriminative representation of each view by extracting task-relevant information through a view-specific encoder-classifier architecture. In the second stage, an autoencoder based multi-view reconstruction network is utilized to extract high-level semantic representation of the augmented features and recover the missing data, thereby aiding the final classification task. Extensive experiments on five datasets demonstrate that our TACVI-Net outperforms other state-of-the-art methods.
Problem

Research questions and friction points this paper is trying to address.

Handles partial multi-view incomplete multi-label classification
Recovers missing multi-view features using task-relevant information
Addresses incomplete training data from unreliable annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage network recovers missing views
Information bottleneck extracts task-relevant features
Autoencoder reconstructs multi-view semantic representation
🔎 Similar Papers
No similar papers found.
X
Xiaohuan Lu
College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, China
Lian Zhao
Lian Zhao
Toronto Metropolitan University
Resource managementIoV/IoT NetworksMobile Edge Computing
W
W. Wong
Institute of Textiles and Clothing, The Hong Kong Polytechnic University, Hong Kong, and also with The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen 518055, China
Jie Wen
Jie Wen
Associate Professor, North University of China(NUC)
Quantum ControlPrognostic and Health Management
J
Jiang Long
College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, China
Wulin Xie
Wulin Xie
Institute of Automation, Chinese Academy of Sciences
MLLMMulti-Modal