🤖 AI Summary
Existing protein sequence representation learning methods can lack discriminative capacity. Method: This paper presents a latent-space diffusion framework aimed at discriminative representation learning. It combines a protein sequence autoencoder with a denoising diffusion model trained on the autoencoder's latent space, proposes two autoencoder variants (a homogeneous model and an inhomogeneous model using noise-based masking), and yields a one-parameter family of noise-intensity-controllable representations that enables explicit modulation of representation discriminability. Results: Diffusion models trained on both proposed latent spaces show higher discriminative power across multiple protein property prediction tasks than a diffusion model trained on a masked language modelling latent space. Although none of the diffusion representations surpass the raw masked language model embeddings themselves, the work demonstrates how diffusion modelling can be integrated into discriminative protein representation learning, pointing toward controllable and interpretable sequence modelling.
📝 Abstract
We explore a framework for protein sequence representation learning that decomposes the task into manifold learning and distributional modelling. Specifically, we present a Latent Space Diffusion architecture which combines a protein sequence autoencoder with a denoising diffusion model operating on its latent space. We obtain a one-parameter family of learned representations from the diffusion model, along with the autoencoder's latent representation. We propose and evaluate two autoencoder architectures: a homogeneous model forcing amino acids of the same type to be identically distributed in the latent space, and an inhomogeneous model employing a noise-based variant of masking. As a baseline we take a latent space learned by masked language modelling, and evaluate discriminative capability on a range of protein property prediction tasks. Our finding is twofold: the diffusion models trained on both our proposed variants display higher discriminative power than the one trained on the masked language model baseline, yet none of the diffusion representations achieve the performance of the masked language model embeddings themselves.
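The one-parameter family of representations described above can be sketched in a few lines: encode a sequence into a latent vector, apply a forward-diffusion noising step at a chosen intensity `t`, and read out features from the denoiser. This is a minimal toy illustration, not the paper's architecture; the dimensions, the random-weight encoder and denoiser, and the variance-preserving mixing schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (placeholders, not taken from the paper).
LATENT_DIM = 16
HIDDEN_DIM = 32

# Stand-in "encoder": index amino acids into a random embedding table
# and mean-pool over the sequence to get a single latent vector.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
W_enc = rng.normal(size=(len(AMINO_ACIDS), LATENT_DIM))

def encode(seq: str) -> np.ndarray:
    idx = [AMINO_ACIDS.index(a) for a in seq]
    return W_enc[idx].mean(axis=0)

# Stand-in "denoiser": a single random-weight layer conditioned on the
# noise level t; its hidden activations serve as the learned features.
W1 = rng.normal(size=(LATENT_DIM + 1, HIDDEN_DIM))

def denoiser_features(z_t: np.ndarray, t: float) -> np.ndarray:
    return np.tanh(np.concatenate([z_t, [t]]) @ W1)

def representation(seq: str, t: float) -> np.ndarray:
    """One-parameter family of representations indexed by noise intensity t.

    t = 0 feeds the clean autoencoder latent to the denoiser; larger t mixes
    in more Gaussian noise first (a variance-preserving forward step).
    """
    z0 = encode(seq)
    eps = rng.normal(size=z0.shape)
    z_t = np.sqrt(1.0 - t) * z0 + np.sqrt(t) * eps  # forward diffusion step
    return denoiser_features(z_t, t)

# Sweeping t traces out the family of representations for one sequence.
reps = {t: representation("MKTAYIAKQR", t) for t in (0.0, 0.25, 0.5)}
```

Dialing `t` toward zero recovers features of the autoencoder's latent itself, while larger `t` probes what the denoiser has learned about the latent distribution, which is the knob the abstract refers to when modulating discriminability.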