Negation: A Pink Elephant in the Large Language Models' Room?

📅 2025-03-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit significant deficiencies in negation understanding and logical reasoning, and this issue remains underexplored in multilingual settings. Method: The authors construct multilingual natural language inference (NLI) datasets of paired examples that differ only in negation and propose a robustness analysis framework for negation sensitivity. The evaluation spans model scales (70M–72B parameters), languages (Chinese, English, French, Spanish), and premise characteristics (length and explicitness of negation). Contribution/Results: They find that (1) contrary to previous work, negation reasoning accuracy consistently improves with model scale; (2) both reasoning accuracy and robustness to negation are language-dependent, with premise length and negation explicitness exerting greater influence on robustness than language; and (3) the paired multilingual NLI datasets establish a reusable evaluation paradigm. The work delivers a reproducible benchmark dataset and open-source analysis tools for assessing multilingual logical reasoning in LLMs.

📝 Abstract
Negations are key to determining sentence meaning, making them essential for logical reasoning. Despite their importance, negations pose a substantial challenge for large language models (LLMs) and remain underexplored. We construct two multilingual natural language inference (NLI) datasets with *paired* examples differing in negation. We investigate how model size and language affect a model's ability to handle negation correctly by evaluating popular LLMs. Contrary to previous work, we show that increasing the model size consistently improves the models' ability to handle negations. Furthermore, we find that both the models' reasoning accuracy and robustness to negation are language-dependent and that the length and explicitness of the premise have a greater impact on robustness than language. Our datasets can facilitate further research and improvements of language model reasoning in multilingual settings.
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle to interpret negation in sentences
Unclear impact of model size and language on negation handling
Lack of multilingual datasets for studying negation reasoning in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Construct multilingual NLI datasets with paired negation examples
Evaluate LLMs on negation handling across model sizes and languages
Show that larger models handle negation better, while premise length and explicitness affect robustness more than language
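The paired-example evaluation described above can be sketched in a few lines. The record fields, the `predict` interface, and the robustness metric (the fraction of pairs where the model labels both the original and the negated example correctly) are illustrative assumptions for this sketch, not the authors' released tooling.

```python
# Hypothetical sketch of a paired-NLI robustness metric.
# Each record pairs a premise-hypothesis example with its
# negation-flipped counterpart and the gold label for each.

def robustness(pairs, predict):
    """Fraction of pairs where the model labels BOTH members
    correctly, i.e. its prediction survives the negation flip."""
    consistent = 0
    for p in pairs:
        ok_orig = predict(p["premise"], p["hypothesis"]) == p["label"]
        ok_neg = predict(p["neg_premise"], p["hypothesis"]) == p["neg_label"]
        if ok_orig and ok_neg:
            consistent += 1
    return consistent / len(pairs)

# Toy usage with a trivial keyword-based "model":
pairs = [
    {"premise": "The cat is on the mat.",
     "hypothesis": "The cat is on the mat.",
     "label": "entailment",
     "neg_premise": "The cat is not on the mat.",
     "neg_label": "contradiction"},
]

def toy_predict(premise, hypothesis):
    return "contradiction" if " not " in premise else "entailment"

print(robustness(pairs, toy_predict))  # 1.0
```

Scoring pairs jointly, rather than pooling all examples into one accuracy figure, is what isolates sensitivity to negation from overall task difficulty.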