A Comparative Study of Semantic Log Representations for Software Log-based Anomaly Detection

๐Ÿ“… 2026-04-09
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
This work addresses the lack of systematic evaluation of the trade-off between effectiveness and efficiency across different semantic log representations in log anomaly detection, particularly under CPU deployment constraints where embedding generation is a common performance bottleneck. To bridge this gap, we propose QTyBERT, which integrates a lightweight, system-tailored quantized BERT (SysBE) with an unsupervised cross-system semantic enhancement module (CroSysEh). This design achieves high detection accuracy while substantially improving embedding generation efficiency. Experimental results on the BGL, Thunderbird, and Spirit datasets demonstrate that QTyBERT matches or even surpasses the detection performance of standard BERT, while its embedding generation speed approaches that of static word embedding methods, effectively balancing accuracy and computational cost.
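To make the efficiency side of the comparison concrete, the sketch below (hypothetical, not from the paper; all names and the toy vocabulary are invented) shows why static word-embedding methods such as Word2Vec, GloVe, or FastText generate log embeddings so quickly: each log event reduces to a table lookup plus a mean over its tokens' vectors, with no model forward pass.

```python
import numpy as np

# Hypothetical illustration: a static word-embedding representation of a
# log event as the average of its tokens' precomputed vectors. The vocab
# and random table stand in for a trained Word2Vec/GloVe/FastText model.
rng = np.random.default_rng(0)
vocab = {"failed": 0, "to": 1, "allocate": 2, "memory": 3, "node": 4}
dim = 8
embedding_table = rng.standard_normal((len(vocab), dim))

def embed_log_event(tokens):
    """Embed a tokenized log event by averaging its in-vocabulary vectors."""
    ids = [vocab[t] for t in tokens if t in vocab]
    if not ids:  # entirely out-of-vocabulary event: fall back to zeros
        return np.zeros(dim)
    return embedding_table[ids].mean(axis=0)

vec = embed_log_event(["failed", "to", "allocate", "memory"])
print(vec.shape)  # (8,)
```

Because the cost is just indexing and averaging, this approach stays fast on CPUs; the accuracy gap the paper reports comes from these vectors being context-independent.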
๐Ÿ“ Abstract
Recent deep learning (DL) methods for log anomaly detection increasingly rely on semantic log representation methods that convert the textual content of log events into vector embeddings as input to DL models. However, these DL methods are typically evaluated as end-to-end pipelines, while the impact of different semantic representation methods is not well understood. In this paper, we benchmark widely used semantic log representation methods, including static word embedding methods (Word2Vec, GloVe, and FastText) and the BERT-based contextual embedding method, across diverse DL models for log-event-level anomaly detection on three publicly available log datasets: BGL, Thunderbird, and Spirit. We identify an effectiveness-efficiency trade-off under CPU deployment settings: the BERT-based method is more effective but incurs substantially longer log embedding generation time, limiting its practicality; static word embedding methods are efficient but are generally less effective and may yield insufficient detection performance. Motivated by this finding, we propose QTyBERT, a novel semantic log representation method that better balances this trade-off. QTyBERT uses SysBE, a lightweight BERT variant with system-specific quantization, to efficiently encode log events into vector embeddings on CPUs, and leverages CroSysEh to enhance the semantic expressiveness of these log embeddings. CroSysEh is trained in an unsupervised manner on unlabeled logs from multiple systems to capture the underlying semantic structure of the BERT model's embedding space. We evaluate QTyBERT against existing semantic log representation methods. Our results show that, for the DL models, using QTyBERT-generated log embeddings achieves detection effectiveness comparable to or better than BERT-generated log embeddings, while bringing log embedding generation time closer to that of static word embedding methods.
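The abstract does not specify SysBE's exact quantization scheme, but the core idea behind quantizing an encoder for CPU inference can be sketched with symmetric int8 post-training quantization of a weight matrix: store int8 weights plus one float scale, then compare the dequantized matrix-vector product against the full-precision result. Everything below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

# Illustrative only: symmetric per-tensor int8 quantization of one dense
# layer's weights, the kind of step a quantized BERT variant applies to
# shrink weights 4x (float32 -> int8) and speed up CPU inference.
rng = np.random.default_rng(1)
W = rng.standard_normal((16, 8)).astype(np.float32)  # a layer's weights
x = rng.standard_normal(8).astype(np.float32)        # one activation vector

scale = np.abs(W).max() / 127.0                      # map max |weight| to int8 range
W_int8 = np.clip(np.round(W / scale), -127, 127).astype(np.int8)

y_full = W @ x                                       # float32 reference output
y_quant = (W_int8.astype(np.float32) * scale) @ x    # dequantize, then multiply

rel_err = np.linalg.norm(y_full - y_quant) / np.linalg.norm(y_full)
print(f"relative error: {rel_err:.4f}")
```

The relative error stays small because the per-weight rounding error is bounded by half the quantization step; this is what lets a quantized encoder approach full-precision detection accuracy while running faster on CPUs.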
Problem

Research questions and friction points this paper is trying to address.

log anomaly detection
semantic log representation
effectiveness-efficiency trade-off
deep learning
log embedding
Innovation

Methods, ideas, or system contributions that make the work stand out.

QTyBERT
semantic log representation
log anomaly detection
BERT quantization
effectiveness-efficiency trade-off