🤖 AI Summary
This study addresses the unclear capabilities of large language models (LLMs) on moderately complex statistical reasoning tasks, as well as their ability to assess the quality of their own reasoning. The authors construct a specialized dataset to fine-tune open-source LLMs and systematically evaluate their performance against a human benchmark derived from statistics students. The work shows that fine-tuned LLMs reach performance on statistical reasoning tasks comparable to that of statistics students, and that their self-evaluation significantly outperforms conventional automatic metrics such as BLEU and BERTScore. The gains are architecture-dependent but hold across diverse models, with the largest improvements on advanced statistical tasks, and the approach has broad applicability in educational technology, automated data analysis, and validation of scientific methodologies.
📝 Abstract
This paper investigates the ability of large language models (LLMs) to solve statistical tasks, as well as their capacity to assess the quality of reasoning. While state-of-the-art LLMs have demonstrated remarkable performance across a range of NLP tasks, their competence on even moderately complex statistical challenges is not well understood. We fine-tuned selected open-source LLMs on a specially developed dataset to enhance their statistical reasoning capabilities, and compared their performance against human scores used as a benchmark. Our results show that the fine-tuned models achieve improved performance on advanced statistical tasks, reaching a level comparable to that of a statistics student. Fine-tuning yields architecture-dependent improvements, with some models showing significant performance gains, indicating clear potential for deployment in educational technology and statistical analysis assistance systems. We also show that LLMs themselves can be far better judges of answer quality (including explanation and reasoning assessment) than traditional metrics such as BLEU or BERTScore. This self-evaluation capability enables scalable automated assessment for statistical education platforms and quality assurance in automated analysis tools. Potential applications also include validation tools for research methodology in academic and industry settings, and quality control mechanisms for data analysis workflows.
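To see why surface-overlap metrics such as BLEU struggle with reasoning quality, consider a minimal sketch. This is not the paper's evaluation code: the example sentences are invented, and a simplified unigram-precision score stands in for BLEU-style n-gram overlap. A paraphrased but correct explanation shares few tokens with the reference, while a near-verbatim copy with the wrong conclusion shares almost all of them.

```python
# Illustrative sketch (assumption: simplified unigram precision as a
# stand-in for BLEU-style overlap metrics; example sentences invented).

def unigram_precision(candidate: str, reference: str) -> float:
    """Fraction of candidate tokens that also appear in the reference."""
    cand = candidate.lower().split()
    ref = set(reference.lower().split())
    if not cand:
        return 0.0
    return sum(tok in ref for tok in cand) / len(cand)

reference = ("Reject the null hypothesis because the p-value 0.03 "
             "is below alpha 0.05.")

# Correct reasoning, but paraphrased: little token overlap with the reference.
paraphrase = "Since 0.03 < 0.05, the test is significant, so we reject H0."
# Wrong conclusion, but copies the reference's wording almost verbatim.
copycat = ("Do not reject the null hypothesis because the p-value 0.03 "
           "is below alpha 0.05.")

print(f"paraphrase (correct): {unigram_precision(paraphrase, reference):.2f}")
print(f"copycat (wrong):      {unigram_precision(copycat, reference):.2f}")
```

The overlap metric ranks the wrong answer above the right one, which is exactly the failure mode an LLM judge reading the explanation can avoid.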