Hey GPT-OSS, Looks Like You Got It - Now Walk Me Through It! An Assessment of the Reasoning Language Models Chain of Thought Mechanism for Digital Forensics

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face critical limitations in digital forensics, particularly low interpretability and poor admissibility as judicial evidence. Method: This paper presents the first systematic investigation into how chain-of-thought (CoT) reasoning in reasoning-oriented LLMs enhances explainability, proposing a hybrid evaluation framework that integrates quantitative metrics (e.g., reasoning step count, accuracy) with qualitative expert assessments of comprehensibility. Using a locally deployed gpt-oss model ensures transparent, verifiable reasoning traces and outputs. Contribution/Results: Experiments reveal that moderate CoT depth significantly improves explanation quality and forensic suitability, whereas excessive reasoning leads to performance saturation or degradation, exposing practical boundaries of current CoT mechanisms. The work establishes a methodological and empirical foundation for developing auditable, verifiable, and legally compliant AI tools for digital forensics.

📝 Abstract
The use of large language models in digital forensics has been widely explored. Beyond identifying potential applications, research has also focused on optimizing model performance for forensic tasks through fine-tuning. However, limited result explainability reduces their operational and legal usability. Recently, a new class of reasoning language models has emerged, designed to handle logic-based tasks through an 'internal reasoning' mechanism. Yet, users typically see only the final answer, not the underlying reasoning. One of these reasoning models is gpt-oss, which can be deployed locally, providing full access to its underlying reasoning process. This article presents the first investigation into the potential of reasoning language models for digital forensics. Four test use cases are examined to assess the usability of the reasoning component in supporting result explainability. The evaluation combines a new quantitative metric with qualitative analysis. Findings show that the reasoning component aids in explaining and validating language model outputs in digital forensics at medium reasoning levels, but this support is often limited, and higher reasoning levels do not enhance response quality.
Problem

Research questions and friction points this paper is trying to address.

Evaluates reasoning language models for digital forensics explainability
Assesses usability of internal reasoning in forensic tasks
Investigates limitations of reasoning levels in validating model outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses reasoning language models for explainability
Evaluates reasoning component with quantitative metrics
Assesses local deployment for forensic transparency
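To make the evaluation idea concrete, the snippet below sketches a minimal quantitative harness in the spirit of the framework described in the summary (reasoning step count plus accuracy over test cases). The line-based step-counting heuristic, the field names, and the aggregation are illustrative assumptions, not the authors' actual metric.

```python
# Hypothetical sketch of a hybrid-evaluation quantitative pass:
# count reasoning steps in a CoT trace and aggregate answer accuracy.
# The step heuristic and case schema are assumptions for illustration.

def count_reasoning_steps(trace: str) -> int:
    """Crude step count: number of non-empty lines in the reasoning trace."""
    return sum(1 for line in trace.splitlines() if line.strip())

def evaluate(cases: list[dict]) -> dict:
    """Aggregate accuracy and mean reasoning depth over test cases.

    Each case is a dict: {"trace": str, "answer": str, "expected": str}.
    """
    correct = sum(1 for c in cases if c["answer"] == c["expected"])
    steps = [count_reasoning_steps(c["trace"]) for c in cases]
    return {
        "accuracy": correct / len(cases),
        "mean_steps": sum(steps) / len(steps),
    }

if __name__ == "__main__":
    demo = [
        {"trace": "Step 1: hash file\nStep 2: compare hashes",
         "answer": "match", "expected": "match"},
        {"trace": "Step 1: inspect log entry",
         "answer": "benign", "expected": "malicious"},
    ]
    print(evaluate(demo))  # accuracy 0.5, mean_steps 1.5
```

In practice such a score would be paired with the qualitative expert assessment the paper describes, since step count alone says nothing about whether a trace is comprehensible or forensically sound.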
Gaëtan Michelet
Chair for Cybersecurity, University of Augsburg, Augsburg, Germany
Janine Schneider
Chair for Cybersecurity, University of Augsburg, Augsburg, Germany
Aruna Withanage
Chair for Cybersecurity, University of Augsburg, Augsburg, Germany
Frank Breitinger
University of Augsburg
Digital forensics, cybersecurity, network analysis, cybersecurity education