CellCLAT: Preserving Topology and Trimming Redundancy in Self-Supervised Cellular Contrastive Learning

📅 2025-05-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address two key challenges in self-supervised topological representation learning on cell complexes, namely degraded higher-order structural fidelity and interference from semantic redundancy, this paper proposes a novel contrastive learning framework. Methodologically, (i) it introduces a parameter-perturbation-based, topology-preserving augmentation that explicitly respects the combinatorial constraints of cell complexes, and (ii) it incorporates a bi-level, meta-learning-driven cell pruning mechanism that adaptively removes redundant topological units via gradient masking. Theoretically, the framework guarantees the robustness of its augmentations and the convergence of its optimization. Empirically, it significantly outperforms existing self-supervised graph learning baselines across multiple topology-aware tasks. Notably, it is the first robust approach to unsupervised representation learning of cell-complex-level higher-order interactions. This work establishes a new paradigm for higher-order structural modeling: provably sound, controllably optimized, and scalable.
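The augmentation idea in (i) can be illustrated with a toy sketch: augmented views are produced by injecting Gaussian noise into copies of the encoder's parameters, while the complex's incidence (boundary) structure is never edited. All names, shapes, and the one-layer encoder below are invented for illustration and are not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: node features X and a fixed incidence matrix B (3 edges x 4 nodes).
# B encodes the combinatorial structure of the complex and is never modified.
X = rng.standard_normal((4, 3))
B = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])

params = [rng.standard_normal((3, 5)), rng.standard_normal((5, 2))]

def perturb(params, sigma=0.1):
    """Augment by perturbing encoder parameters, not the complex itself."""
    return [w + sigma * rng.standard_normal(w.shape) for w in params]

def encode(params, X, B):
    """One toy message-passing layer over the incidence matrix."""
    W1, W2 = params
    return np.tanh(B @ X @ W1) @ W2

B_before = B.copy()
view1 = encode(perturb(params), X, B)  # first contrastive view
view2 = encode(perturb(params), X, B)  # second contrastive view
```

The two views differ only because the parameters were perturbed independently; the topology seen by both encoders is byte-for-byte identical, which is the constraint a structural augmentation (edge dropping, cell masking) would violate.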

📝 Abstract
Self-supervised topological deep learning (TDL) represents a nascent but underexplored area with significant potential for modeling higher-order interactions in simplicial complexes and cellular complexes to derive representations of unlabeled graphs. Compared to simplicial complexes, cellular complexes exhibit greater expressive power. However, the advancement of self-supervised learning for cellular TDL is largely hindered by two core challenges: extrinsic structural constraints inherent to cellular complexes, and intrinsic semantic redundancy in cellular representations. The first challenge highlights that traditional graph augmentation techniques may compromise the integrity of higher-order cellular interactions, while the second underscores that topological redundancy in cellular complexes potentially diminishes task-relevant information. To address these issues, we introduce Cellular Complex Contrastive Learning with Adaptive Trimming (CellCLAT), a twofold framework designed to adhere to the combinatorial constraints of cellular complexes while mitigating informational redundancy. Specifically, we propose a parameter perturbation-based augmentation method that injects controlled noise into cellular interactions without altering the underlying cellular structures, thereby preserving cellular topology during contrastive learning. Additionally, a cellular trimming scheduler is employed to mask gradient contributions from task-irrelevant cells through a bi-level meta-learning approach, effectively removing redundant topological elements while maintaining critical higher-order semantics. We provide theoretical justification and empirical validation to demonstrate that CellCLAT achieves substantial improvements over existing self-supervised graph learning methods, marking a significant step forward in this domain.
Problem

Research questions and friction points this paper is trying to address.

Preserving cellular topology in self-supervised contrastive learning
Reducing semantic redundancy in cellular complex representations
Adapting graph augmentation for higher-order cellular interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter perturbation preserves cellular topology
Cellular trimming scheduler removes redundant elements
Bi-level meta-learning maintains critical semantics
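The trimming scheduler summarized above can be sketched as a toy bi-level loop: an inner step updates the encoder using per-cell gradient contributions gated by a soft mask, and an outer (meta) step adjusts the mask from a held-out gradient signal. This is a minimal, hypothetical illustration under invented names and shapes, not the paper's actual optimization.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_cells, dim = 6, 4
w = rng.standard_normal(dim)                           # toy encoder weights
per_cell_grads = rng.standard_normal((n_cells, dim))   # each cell's gradient contribution
mask_logits = np.zeros(n_cells)                        # trimming scheduler state

# Inner step: descend the training loss using mask-gated cell gradients,
# so cells with a low mask value contribute little to the update.
m = sigmoid(mask_logits)
w_inner = w - 0.1 * (m[:, None] * per_cell_grads).mean(axis=0)

# Outer (meta) step: score each cell by how its contribution aligns with
# a held-out meta-gradient; conflicting cells have their logits lowered
# and are thus gradually trimmed out of future inner updates.
meta_grad = rng.standard_normal(dim)
alignment = per_cell_grads @ meta_grad / dim
mask_logits += 0.5 * alignment
m_new = sigmoid(mask_logits)                           # updated soft mask
```

Because the mask acts on gradients rather than on the complex itself, redundant cells are silenced during learning without deleting any topological element, which is consistent with the topology-preservation constraint stated above.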
Bin Qin
Institute of Software Chinese Academy of Sciences
Machine Learning · Causal Inference
Qirui Ji
Institute of Software, Chinese Academy of Sciences
Graph representation learning · Causal learning
Jiangmeng Li
Institute of Software, Chinese Academy of Sciences
Multi-modal learning · Self-supervised learning · Domain generalization · Causal learning
Yu-Peng Wang
Institute of Software, Chinese Academy of Sciences
Xuesong Wu
Institute of Software, Chinese Academy of Sciences
Jianwen Cao
Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Fanjiang Xu
Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences