🤖 AI Summary
Large language models (LLMs) exhibit significant deficiencies in negation understanding and logical reasoning, yet these issues remain largely unexplored in multilingual settings.
Method: We introduce two multilingual natural language inference (NLI) datasets with aligned pairs of examples differing only in negation, and propose a framework for analyzing robustness to negation. Our evaluation spans model scales (70M–72B parameters), languages (Chinese, English, French, Spanish), and premise characteristics (length and explicitness of negation).
Contribution/Results: We find that (1) negation reasoning accuracy consistently improves with model scale, and (2) premise structural features, particularly length and explicitness of negation, influence robustness more than the language itself. We also establish the first multilingual paired-NLI evaluation paradigm, delivering a reproducible benchmark dataset and open-source quantitative analysis tools for assessing multilingual logical reasoning capabilities in LLMs.
📝 Abstract
Negations are key to determining sentence meaning, making them essential for logical reasoning. Despite their importance, negations pose a substantial challenge for large language models (LLMs) and remain underexplored. We construct two multilingual natural language inference (NLI) datasets with *paired* examples differing only in negation. We investigate how model size and language affect a model's ability to handle negation correctly by evaluating popular LLMs. Contrary to previous work, we show that increasing model size consistently improves the models' ability to handle negations. Furthermore, we find that both reasoning accuracy and robustness to negation are language-dependent, and that the length and explicitness of the premise have a greater impact on robustness than language. Our datasets can facilitate further research on, and improvement of, language model reasoning in multilingual settings.
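The paired-evaluation idea described in the abstract can be sketched in a few lines: each item pairs a premise with a minimally edited negated variant (with the entailment label flipped accordingly), and robustness measures how often a model classifies *both* members of a pair correctly. The examples and the keyword-based `toy_model` below are purely illustrative assumptions, not the paper's actual dataset or models, which are not reproduced here.

```python
# Minimal sketch of paired-NLI negation evaluation (hypothetical data).
# Each pair holds an original and a negated premise with their NLI labels.
PAIRS = [
    {
        "premise": "The cat is sleeping on the sofa.",
        "negated_premise": "The cat is not sleeping on the sofa.",
        "hypothesis": "The cat is asleep.",
        "label": "entailment",
        "negated_label": "contradiction",
    },
    {
        "premise": "All doors are locked.",
        "negated_premise": "Not all doors are locked.",
        "hypothesis": "Every door is locked.",
        "label": "entailment",
        "negated_label": "contradiction",
    },
]

def toy_model(premise: str, hypothesis: str) -> str:
    """Stand-in classifier: predicts 'contradiction' when the premise
    contains an explicit negation cue. A real evaluation would query an LLM."""
    return "contradiction" if " not " in f" {premise.lower()} " else "entailment"

def evaluate(pairs, predict):
    """Return (accuracy over all 2*N examples, robustness = share of
    pairs where both the original and the negated example are correct)."""
    correct = 0
    robust_pairs = 0
    for p in pairs:
        ok_orig = predict(p["premise"], p["hypothesis"]) == p["label"]
        ok_neg = predict(p["negated_premise"], p["hypothesis"]) == p["negated_label"]
        correct += ok_orig + ok_neg
        robust_pairs += ok_orig and ok_neg
    return correct / (2 * len(pairs)), robust_pairs / len(pairs)

accuracy, robustness = evaluate(PAIRS, toy_model)
print(accuracy, robustness)  # → 1.0 1.0 (the toy heuristic solves the toy data)
```

Pair-level robustness is stricter than plain accuracy: a model that answers the affirmative and negated variants inconsistently is penalized even if it scores well on each half separately.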