🤖 AI Summary
Accurate prediction of antibody–antigen (Ab–Ag) interfaces is critical for vaccine design and therapeutic antibody development, yet high-precision epitope and paratope identification from sequence alone remains challenging. This paper introduces ABConformer, a structure-free, sequence-driven model that incorporates a physics-inspired sliding attention mechanism to explicitly capture distance-dependent residue interactions, enabling pan-epitope prediction without requiring prior antibody knowledge. Built on the Conformer architecture, ABConformer jointly integrates local convolutional features with global sliding attention, replacing conventional cross-attention, to enhance interface contact modeling. On a SARS-CoV-2 Ab–Ag benchmark, ABConformer achieves state-of-the-art performance, substantially outperforming existing sequence-only methods. Ablation studies confirm the essential contributions of both the sliding attention mechanism and the physics-based constraint module.
📝 Abstract
Accurate prediction of antibody-antigen (Ab-Ag) interfaces is critical for vaccine design, immunodiagnostics, and therapeutic antibody development. However, achieving reliable predictions from sequence alone remains a challenge. In this paper, we present ABConformer, a model based on the Conformer backbone that captures both local and global features of a biosequence. To accurately capture Ab-Ag interactions, we introduce physics-inspired sliding attention, enabling residue-level contact recovery without relying on three-dimensional structural data. ABConformer accurately predicts paratopes and epitopes given the antibody and antigen sequences, and predicts pan-epitopes on the antigen without antibody information. In comparison experiments, ABConformer achieves state-of-the-art performance on a recent SARS-CoV-2 Ab-Ag dataset and surpasses widely used sequence-based methods for antibody-agnostic epitope prediction. Ablation studies further quantify the contribution of each component, demonstrating that sliding attention significantly improves epitope-prediction precision compared to conventional cross-attention. To facilitate reproducibility, we will release the code under an open-source license upon acceptance.
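The abstract does not spell out how sliding attention is computed. One plausible reading, a windowed (local) self-attention in which each residue attends only to positions within a fixed sequence distance, can be sketched as follows; the function name, the windowing scheme, and the NumPy formulation are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def sliding_attention(Q, K, V, window):
    """Toy sliding (local) attention: each query position attends only to
    key positions within `window` residues of it, mimicking a
    distance-dependent interaction cutoff. Illustrative sketch only --
    not ABConformer's published mechanism."""
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                 # (L, L) scaled similarities
    idx = np.arange(L)
    mask = np.abs(idx[:, None] - idx[None, :]) <= window
    scores = np.where(mask, scores, -np.inf)      # zero weight outside window
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # (L, d) attended features
```

Compared with full cross-attention, such a mask restricts each residue's receptive field to a local neighborhood, which is one way a physics-inspired distance constraint could be imposed at the sequence level.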