🤖 AI Summary
This work addresses the modality mismatch between graph-structured molecules and their linear string representations, which arises when structurally equivalent molecules yield divergent generation trajectories due to arbitrary linearization. To resolve this, the authors propose a structure-invariant molecular alignment mechanism: an autoregressive, token-level contrastive objective that aligns the hidden states of prefixes sharing a common suffix, enabling the model to recognize molecular geometric symmetries while preserving efficient sequential generation. During inference, they introduce Isomorphic Beam Search (IsoBeam) to prune isomorphic decoding paths. Without altering the linear representation format, the method significantly enhances both structural fidelity and generation diversity, outperforming strong baselines across standard benchmarks and multi-objective optimization tasks with improved sample efficiency.
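The contrastive alignment described above can be sketched as an InfoNCE-style loss: the hidden state of one prefix (the anchor) is pulled toward the hidden state of a structurally equivalent prefix (the positive) and pushed away from unrelated states (the negatives). This is a minimal illustration assuming cosine similarity and a temperature hyperparameter; the function names and the plain-list vector representation are illustrative, not from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as plain lists."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE contrastive loss (illustrative sketch).

    anchor, positive: hidden states of two prefixes that linearize the
    same partial structure; negatives: hidden states of unrelated prefixes.
    Lower loss means the anchor is closer to its positive than to negatives.
    """
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    # Numerically stable log-sum-exp over all candidates.
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(x - m) for x in logits))
    return -(logits[0] - log_denom)
```

In practice such a loss would be computed over batched transformer hidden states, but the scalar sketch shows the alignment pressure: matched prefixes converge in latent space regardless of linearization history.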
📝 Abstract
Linearized string representations serve as the foundation of scalable autoregressive molecular generation; however, they introduce a fundamental modality mismatch: a single molecular graph maps to multiple distinct sequences. This ambiguity leads to *trajectory divergence*, where the latent representations of structurally equivalent partial graphs drift apart due to differences in linearization history. To resolve this without abandoning the efficient string formulation, we propose Structure-Invariant Generative Molecular Alignment (SIGMA). Rather than altering the linear representation, SIGMA trains the model to recognize geometric symmetries via a token-level contrastive objective that explicitly aligns the latent states of prefixes sharing identical suffixes. Furthermore, we introduce Isomorphic Beam Search (IsoBeam) to eliminate isomorphic redundancy during inference by dynamically pruning equivalent paths. Empirical evaluations on standard benchmarks demonstrate that SIGMA bridges the gap between sequence scalability and graph fidelity, yielding superior sample efficiency and structural diversity in multi-parameter optimization compared to strong baselines.
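The pruning step of IsoBeam can be pictured as deduplicating beam hypotheses by a structure-invariant key, keeping only the best-scoring path per equivalence class. The sketch below is an assumption-laden illustration: `isobeam_prune` and `canonical_key` are hypothetical names, and the caller supplies the canonicalization (for real molecules this would be graph canonicalization, e.g. canonical SMILES; the test below uses a toy character-sort as a stand-in).

```python
def isobeam_prune(beam, canonical_key):
    """Collapse isomorphic beam hypotheses (illustrative sketch).

    beam: list of (sequence, log_prob) pairs for partial decodings.
    canonical_key: function mapping a sequence to a structure-invariant
    key, so that two linearizations of the same partial graph collide.
    Returns the surviving hypotheses sorted by descending log-probability.
    """
    best = {}
    for seq, logp in beam:
        key = canonical_key(seq)
        # Keep only the highest-scoring representative per structure.
        if key not in best or logp > best[key][1]:
            best[key] = (seq, logp)
    return sorted(best.values(), key=lambda item: -item[1])
```

For example, two token orders that denote the same partial molecule share a key and only the more probable one survives, freeing beam slots for genuinely distinct structures.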