Self-Supervised Discriminative Feature Learning for Deep Multi-View Clustering

📅 2021-03-28
🏛️ IEEE Transactions on Knowledge and Data Engineering
📈 Citations: 177 (Influential: 6)
🤖 AI Summary
In multi-view clustering, ambiguous structural information in certain views degrades overall performance. Method: This paper proposes a self-supervised discriminative feature learning framework (SDMVC). It employs deep autoencoders to extract view-specific embeddings independently and concatenates them into global features to counteract views with unclear clustering structures. A unified target distribution is constructed under pseudo-label guidance, and multi-view collaborative optimization is driven by KL divergence, jointly improving feature discriminability and clustering consistency during iterative refinement. The framework balances view diversity with clustering coherence. Contribution/Results: Extensive experiments on multiple benchmark datasets demonstrate significant improvements over 14 classical and state-of-the-art methods, and an open-source implementation is provided for reproducibility.
📝 Abstract
Multi-view clustering is an important research topic due to its capability to utilize complementary information from multiple views. However, there are few methods to consider the negative impact caused by certain views with unclear clustering structures, resulting in poor multi-view clustering performance. To address this drawback, we propose self-supervised discriminative feature learning for deep multi-view clustering (SDMVC). Concretely, deep autoencoders are applied to learn embedded features for each view independently. To leverage the multi-view complementary information, we concatenate all views’ embedded features to form the global features, which can overcome the negative impact of some views’ unclear clustering structures. In a self-supervised manner, pseudo-labels are obtained to build a unified target distribution to perform multi-view discriminative feature learning. During this process, global discriminative information can be mined to supervise all views to learn more discriminative features, which in turn are used to update the target distribution. Besides, this unified target distribution can make SDMVC learn consistent cluster assignments, which accomplishes the clustering consistency of multiple views while preserving their features’ diversity. Experiments on various types of multi-view datasets show that SDMVC outperforms 14 competitors including classic and state-of-the-art methods. The code is available at https://github.com/SubmissionsIn/SDMVC.
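The abstract's "unified target distribution" and KL-driven refinement follow the DEC-style self-training recipe: compute soft cluster assignments from embeddings, sharpen them into an auxiliary target, and minimize KL divergence between the two. A minimal numpy sketch of those three ingredients (the function names and the Student's t similarity are the standard DEC formulation, not code from the SDMVC repository):

```python
import numpy as np

def soft_assignments(z, centers, alpha=1.0):
    """Student's t-distribution similarity between embeddings z (n, d)
    and cluster centers (k, d) -- the DEC-style soft assignment Q."""
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, k)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened auxiliary target P: squares Q to emphasize confident
    assignments and normalizes by per-cluster frequency."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q), the clustering loss minimized during refinement."""
    return float((p * np.log((p + eps) / (q + eps))).sum())
```

In SDMVC this target is built once from the global features and shared across all views, which is what pushes every view toward consistent cluster assignments.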
Problem

Research questions and friction points this paper is trying to address.

Certain views with unclear clustering structures degrade overall performance
Complementary information across multiple views is underexploited
Cluster assignments must stay consistent across views while preserving feature diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep autoencoders learn embedded features per view
Self-supervised pseudo-labels create unified target distribution
Global discriminative features supervise multi-view consistency
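The three innovation bullets form a pipeline: encode each view, concatenate the embeddings into global features, and cluster those to obtain pseudo-labels that supervise every view. A minimal sketch of that flow, with the per-view autoencoder replaced by a random linear projection and a tiny k-means standing in for the pseudo-label step (both are illustrative placeholders, not the paper's networks):

```python
import numpy as np

def encode_view(x, dim, rng):
    """Stand-in for a per-view deep autoencoder: a random linear
    projection to `dim` features (illustrative only)."""
    w = rng.standard_normal((x.shape[1], dim)) / np.sqrt(x.shape[1])
    return x @ w

def kmeans(z, k, iters=50, seed=0):
    """Minimal k-means used here to generate pseudo-labels."""
    rng = np.random.default_rng(seed)
    centers = z[rng.choice(len(z), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = z[labels == j].mean(axis=0)
    return labels, centers

def global_pseudo_labels(views, dim, k, seed=0):
    """Concatenate per-view embeddings into global features, then
    cluster them so no single ambiguous view dominates the labels."""
    rng = np.random.default_rng(seed)
    z_global = np.concatenate([encode_view(x, dim, rng) for x in views], axis=1)
    labels, _ = kmeans(z_global, k, seed=seed)
    return z_global, labels
```

In the actual method these pseudo-labels define the unified target distribution, and each view's encoder is then refined against it, so the discriminative signal mined from the global features flows back into every view.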
Jie Xu
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Yazhou Ren
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Huayi Tang
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Zhimeng Yang
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Lili Pan
Associate Professor, University of Electronic Science and Technology of China
Computer vision, Machine learning
Yang Yang
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
X. Pu
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China