MetagenBERT: A Transformer-Based Architecture Using Foundational Genomic Large Language Models for Novel Metagenome Representation

📅 2026-01-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limitations of conventional metagenomic disease prediction, which relies on incomplete reference catalogs and consequently suffers from restricted resolution and loss of raw read information. The authors propose MetagenBERT, the first framework integrating a genomic large language model with a Transformer architecture to generate metagenomic embeddings directly from raw DNA sequences in an end-to-end manner, bypassing the need for taxonomic or functional annotation. Read-level embeddings are aggregated via FAISS-accelerated K-means clustering to construct cluster abundance vectors as a unified representation. A cross-cohort transferable variant, MetagenBERT-Glob Mcardis, is also introduced. Evaluated across five gut microbiome datasets, the method matches or exceeds species-abundance baseline AUCs, maintains clustering robustness with only 10% of reads, and yields embeddings that complement abundance signals—jointly enhancing predictive performance.

📝 Abstract
Metagenomic disease prediction commonly relies on species abundance tables derived from large, incomplete reference catalogs, constraining resolution and discarding valuable information contained in DNA reads. To overcome these limitations, we introduce MetagenBERT, a Transformer-based framework that produces end-to-end metagenome embeddings directly from raw DNA sequences, without taxonomic or functional annotations. Reads are embedded using foundational genomic language models (DNABERT2 and the microbiome-specialized DNABERTMS), then aggregated through a scalable clustering strategy based on FAISS-accelerated K-means. Each metagenome is represented as a cluster abundance vector summarizing the distribution of its embedded reads. We evaluate this approach on five benchmark gut microbiome datasets (Cirrhosis, T2D, Obesity, IBD, CRC). MetagenBERT achieves competitive or superior AUC performance relative to species-abundance baselines across most tasks. Concatenating both representations further improves prediction, demonstrating complementarity between taxonomic and embedding-derived signals. Clustering remains robust when applied to as little as 10% of reads, highlighting substantial redundancy in metagenomes and enabling major computational gains. We additionally introduce MetagenBERT-Glob Mcardis, a cross-cohort variant trained on the large, phenotypically diverse MetaCardis cohort and transferred to other datasets, retaining predictive signal including for unseen phenotypes, indicating the feasibility of a foundation model for metagenome representation. Robustness analyses (PERMANOVA, PERMDISP, entropy) show consistent separation of different states across subsamples. Overall, MetagenBERT provides a scalable, annotation-free representation of metagenomes, pointing toward future phenotype-aware generalization across heterogeneous cohorts and sequencing technologies.
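The abstract's aggregation step — clustering read-level embeddings and representing each metagenome as a cluster abundance vector — can be sketched in a few lines. This is a minimal NumPy stand-in, not the authors' implementation: the embeddings below are synthetic random vectors (the paper uses DNABERT2/DNABERTMS read embeddings), and a plain Lloyd's K-means loop replaces the FAISS-accelerated K-means named in the paper; cluster count and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for read-level embeddings from a genomic
# language model; shapes and values here are illustrative only.
n_reads, dim, k = 1000, 16, 8
reads = rng.normal(size=(n_reads, dim)).astype(np.float32)

# Plain Lloyd's K-means (the paper uses FAISS-accelerated K-means;
# this loop is a minimal stand-in with the same output contract).
centroids = reads[rng.choice(n_reads, size=k, replace=False)]
for _ in range(20):
    # Assign each read embedding to its nearest centroid.
    dists = np.linalg.norm(reads[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Recompute each centroid; keep the old one if a cluster empties.
    for j in range(k):
        members = reads[labels == j]
        if len(members):
            centroids[j] = members.mean(axis=0)

# Cluster abundance vector: the fraction of the metagenome's reads
# falling into each cluster -- the unified representation that the
# paper feeds to downstream disease-prediction models.
abundance = np.bincount(labels, minlength=k) / n_reads
print(abundance.round(3))
```

One such vector is computed per metagenome, so a cohort becomes a samples-by-clusters matrix, analogous to a species abundance table but annotation-free.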
Problem

Research questions and friction points this paper is trying to address.

metagenomics
disease prediction
species abundance
DNA reads
reference catalogs
Innovation

Methods, ideas, or system contributions that make the work stand out.

MetagenBERT
foundation model
DNA language model
annotation-free representation
cross-cohort generalization
Gaspar Roy
IRD, Sorbonne University, UMMISCO, 32 avenue Henry Varagnat, Bondy Cedex, France
Eugeni Belda
Ummisco - IRD
systems biology, metagenomics, evolutionary genomics, metabolic modelling
Baptiste Hennecart
IRD, Sorbonne University, UMMISCO, 32 avenue Henry Varagnat, Bondy Cedex, France
Y. Chevaleyre
LAMSADE, Dauphine University, PSL Research University, Place du Maréchal de Lattre de Tassigny, Paris, France
E. Prifti
IRD, Sorbonne University, UMMISCO, 32 avenue Henry Varagnat, Bondy Cedex, France; Sorbonne University, INSERM, Nutriomics, 91 bvd de l’hopital 75013 Paris, France
Jean-Daniel Zucker
Senior Researcher, UMMISCO, IRD/Sorbonne University, France
Machine Learning, Data Science, Abstraction, Metagenomics, NLP