Enhancing DNA Foundation Models to Address Masking Inefficiencies

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In DNA sequence modeling, masked language modeling (MLM) pretraining induces a distribution shift between pretraining and downstream deployment: MLM trains encoders to predict [MASK] tokens, but real-world inference operates on unmasked sequences, so the encoder underinvests in representations of non-masked tokens and spends parameters and compute on a prediction task that is irrelevant at deployment time. Method: The work integrates the masked autoencoder (MAE) paradigm into DNA foundation models, proposing an encoder-decoder Transformer architecture that decouples the pretraining objective from downstream requirements, together with genomics-oriented positional encodings and attention masking to improve token-level representations of raw DNA sequences. Contribution/Results: On BIOSCAN-5M, the model outperforms both causal and bidirectional MLM baselines on closed-world and open-world DNA barcode classification, with markedly stronger feature extraction, entirely without fine-tuning.

📝 Abstract
Masked language modelling (MLM) as a pretraining objective has been widely adopted in genomic sequence modelling. While pretrained models can successfully serve as encoders for various downstream tasks, the distribution shift between pretraining and inference detrimentally impacts performance, as the pretraining task is to map [MASK] tokens to predictions, yet the [MASK] is absent during downstream applications. This means the encoder does not prioritize its encodings of non-[MASK] tokens, and expends parameters and compute on work only relevant to the MLM task, despite this being irrelevant at deployment time. In this work, we propose a modified encoder-decoder architecture based on the masked autoencoder framework, designed to address this inefficiency within a BERT-based transformer. We empirically show that the resulting mismatch is particularly detrimental in genomic pipelines where models are often used for feature extraction without fine-tuning. We evaluate our approach on the BIOSCAN-5M dataset, comprising over 2 million unique DNA barcodes. We achieve substantial performance gains in both closed-world and open-world classification tasks when compared against causal models and bidirectional architectures pretrained with MLM tasks.
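The MAE-style pretraining described in the abstract can be sketched as follows: the encoder sees only the visible (unmasked) tokens, and a lightweight decoder re-expands to full length to reconstruct the masked positions. This is a minimal illustration; `mae_split` and `decoder_input` are hypothetical helper names, not the paper's implementation.

```python
import numpy as np

def mae_split(tokens, mask_ratio=0.5, rng=None):
    # Randomly partition the sequence into visible tokens (fed to the
    # encoder) and masked positions (reconstruction targets for the decoder).
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(tokens)
    n_masked = int(n * mask_ratio)
    perm = rng.permutation(n)
    masked_idx = np.sort(perm[:n_masked])
    visible_idx = np.sort(perm[n_masked:])
    return tokens[visible_idx], visible_idx, masked_idx

def decoder_input(visible_enc, visible_idx, seq_len, mask_token=-1):
    # Re-expand to full length for the decoder: encoder outputs at visible
    # positions, a shared [MASK] placeholder (here a sentinel id) elsewhere.
    full = np.full(seq_len, mask_token, dtype=visible_enc.dtype)
    full[visible_idx] = visible_enc
    return full
```

The point of this arrangement is that the encoder never processes [MASK] tokens; at inference the decoder is discarded and the encoder runs on raw, unmasked DNA sequences, matching the distribution it was pretrained on.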
Problem

Research questions and friction points this paper is trying to address.

MLM pretraining creates a [MASK]-induced distribution shift between pretraining and inference
Encoders underinvest in representations of non-masked tokens, wasting parameters and compute
Genomic pipelines often use pretrained encoders as frozen feature extractors, where this mismatch is most damaging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encoder-decoder architecture based on the masked autoencoder (MAE) framework
Decoder absorbs the reconstruction task, freeing the BERT-based encoder to represent unmasked tokens
Substantial gains on BIOSCAN-5M closed-world and open-world classification without fine-tuning
Monireh Safari
University of Waterloo
Pablo Millán Arias
University of Waterloo
Scott C. Lowe
Postdoctoral Research Fellow, Vector Institute
Lila Kari
University of Waterloo
Angel X. Chang
Simon Fraser University, Alberta Machine Intelligence Institute (Amii)
Graham W. Taylor
University of Guelph, Vector Institute