SKA-Bench: A Fine-Grained Benchmark for Evaluating Structured Knowledge Understanding of LLMs

📅 2025-07-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluations of large language models' (LLMs) structured knowledge understanding are inadequate: they lack fine-grained capability decomposition and cover only a single knowledge form (e.g., knowledge graphs or tables alone). To address this, the authors propose SKA-Bench, a fine-grained benchmark spanning four widely used structured knowledge forms: KG, Table, KG+Text, and Table+Text. Each instance, built via a three-stage pipeline, pairs a question and an answer with positive knowledge units and noisy knowledge units. The instances are then expanded into four fundamental ability testbeds: Noise Robustness, Order Insensitivity, Information Integration, and Negative Rejection. Experiments on eight representative LLMs, including the advanced DeepSeek-R1, reveal pervasive weaknesses: sensitivity to the amount of noise and the order of knowledge units, hallucination tendencies, and significant performance degradation under low-quality, poorly ordered, or redundant knowledge.
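To make the instance format concrete, here is a minimal sketch of what an SKA-Bench-style QA instance might look like. The class and field names (SKAInstance, positive_units, noisy_units, modality) are assumptions inferred from the summary, not the schema used in the official repository.

```python
from dataclasses import dataclass


@dataclass
class SKAInstance:
    """Hypothetical SKA-Bench-style QA instance.

    Field names are assumptions based on the summary above, not the
    schema used in the official repository.
    """
    question: str              # natural-language question
    answer: str                # gold answer
    positive_units: list[str]  # knowledge units needed to derive the answer
    noisy_units: list[str]     # distractor units mixed into the context
    modality: str = "KG"       # one of: "KG", "Table", "KG+Text", "Table+Text"


# Toy example with KG triples as knowledge units.
example = SKAInstance(
    question="Which company runs the ZJU-Ant Group Joint Lab of "
             "Knowledge Graph with Zhejiang University?",
    answer="Ant Group",
    positive_units=[
        "(ZJU-Ant Group Joint Lab of Knowledge Graph, operated_by, Ant Group)",
        "(ZJU-Ant Group Joint Lab of Knowledge Graph, hosted_at, Zhejiang University)",
    ],
    noisy_units=["(Ant Group, headquartered_in, Hangzhou)"],
)
```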

📝 Abstract
Although large language models (LLMs) have made significant progress in understanding Structured Knowledge (SK) such as KGs and tables, existing evaluations of SK understanding are non-rigorous (i.e., lacking evaluations of specific capabilities) and focus on a single type of SK. Therefore, we aim to propose a more comprehensive and rigorous structured knowledge understanding benchmark to diagnose the shortcomings of LLMs. In this paper, we introduce SKA-Bench, a Structured Knowledge Augmented QA Benchmark that encompasses four widely used structured knowledge forms: KG, Table, KG+Text, and Table+Text. We utilize a three-stage pipeline to construct SKA-Bench instances, each of which includes a question, an answer, positive knowledge units, and noisy knowledge units. To evaluate the SK understanding capabilities of LLMs in a fine-grained manner, we expand the instances into four fundamental ability testbeds: Noise Robustness, Order Insensitivity, Information Integration, and Negative Rejection. Empirical evaluations on 8 representative LLMs, including the advanced DeepSeek-R1, indicate that existing LLMs still face significant challenges in understanding structured knowledge, and their performance is influenced by factors such as the amount of noise, the order of knowledge units, and the hallucination phenomenon. Our dataset and code are available at https://github.com/Lza12a/SKA-Bench.
Problem

Research questions and friction points this paper is trying to address.

How to evaluate LLMs' structured knowledge understanding rigorously
How to cover multiple knowledge forms (KG, Table, KG+Text, Table+Text) rather than a single type
How to test fine-grained abilities such as noise robustness and order insensitivity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-stage pipeline for constructing benchmark instances
Four fundamental ability testbeds: Noise Robustness, Order Insensitivity, Information Integration, and Negative Rejection (illustrated in the sketch below)
Covers four structured knowledge forms: KG, Table, KG+Text, Table+Text
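As a rough illustration of how the four testbeds could be derived from a single base instance, the sketch below builds four context variants: extra distractors for Noise Robustness, a permuted unit order for Order Insensitivity, all required units for Information Integration, and positives removed for Negative Rejection. The function name, the padding count, and the derivation rules are my own reconstruction from the abstract, not the paper's actual expansion procedure.

```python
import random


def build_testbed_variants(
    positive_units: list[str],
    noisy_units: list[str],
    seed: int = 0,
) -> dict[str, list[str]]:
    """Derive four evaluation contexts from one QA instance.

    Illustrative reconstruction only; the actual SKA-Bench expansion
    procedure may differ from this sketch.
    """
    rng = random.Random(seed)
    base = positive_units + noisy_units

    # Noise Robustness: pad the context with extra distractor units.
    noise_robustness = base + [f"distractor unit {i}" for i in range(5)]

    # Order Insensitivity: same units, randomly permuted.
    order_insensitivity = base.copy()
    rng.shuffle(order_insensitivity)

    # Information Integration: answering requires combining all positive units.
    information_integration = list(positive_units)

    # Negative Rejection: positives removed; the model should abstain.
    negative_rejection = list(noisy_units)

    return {
        "noise_robustness": noise_robustness,
        "order_insensitivity": order_insensitivity,
        "information_integration": information_integration,
        "negative_rejection": negative_rejection,
    }


# Example usage with toy knowledge units.
variants = build_testbed_variants(
    positive_units=["(SKA-Bench, evaluates, structured knowledge understanding)"],
    noisy_units=["(LLMs, trained_on, web text)"],
)
for name, units in variants.items():
    print(name, len(units))
```

A model that answers correctly on the integration variant but also answers (rather than abstaining) on the rejection variant would exhibit exactly the hallucination behavior the benchmark is designed to surface.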
Authors
Zhiqiang Liu, School of Software Technology, Zhejiang University
Enpei Niu, School of Software Technology, Zhejiang University
Yin Hua, School of Software Technology, Zhejiang University
Mengshu Sun, Beijing University of Technology
Lei Liang, Ant Group
Huajun Chen, College of Computer Science and Technology, Zhejiang University; ZJU-Ant Group Joint Lab of Knowledge Graph
Wen Zhang, School of Software Technology, Zhejiang University; ZJU-Ant Group Joint Lab of Knowledge Graph