AutoDFBench 1.0: A Benchmarking Framework for Digital Forensic Tool Testing and Generated Code Evaluation

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current digital forensics tools lack a unified, automated performance evaluation framework, resulting in inconsistent validation, poor cross-tool comparability, and low reproducibility. This paper introduces AutoDFBench—the first modular, extensible, automated benchmarking framework for digital forensics—covering five core CFTT tasks: string searching, deleted file recovery, file carving, Windows Registry analysis, and SQLite data recovery. It enables standardized evaluation of traditional tools, custom scripts, and AI-generated forensic code. The framework combines human-annotated ground truth (63 test cases across 10,968 scenarios) with a standardized RESTful API interface, and defines the AutoDFBench Score—the mean F1-score across test cases—as its primary metric. Validated on the NIST CFTT dataset, the framework significantly improves evaluation consistency, fairness, and reproducibility. AutoDFBench supports tool vendors, academic researchers, and standardization bodies.

📝 Abstract
The National Institute of Standards and Technology (NIST) Computer Forensic Tool Testing (CFTT) programme has become the de facto standard for digital forensic tool testing and validation. However, to date, no comprehensive framework exists to automate benchmarking across the diverse forensic tasks included in the programme. This gap results in inconsistent validation, challenges in comparing tools, and limited validation reproducibility. This paper introduces AutoDFBench 1.0, a modular benchmarking framework that supports the evaluation of both conventional DF tools and scripts, as well as AI-generated code and agentic approaches. The framework integrates five areas defined by the CFTT programme: string search, deleted file recovery, file carving, Windows registry recovery, and SQLite data recovery. AutoDFBench 1.0 includes ground truth data comprising 63 test cases and 10,968 unique test scenarios, and executes evaluations through a RESTful API that produces structured JSON outputs with standardised metrics, including precision, recall, and F1-score for each test case; the mean of these F1-scores forms the AutoDFBench Score. The benchmarking framework is validated against CFTT's datasets. The framework enables fair and reproducible comparison across tools and forensic scripts, establishing the first unified, automated, and extensible benchmarking framework for digital forensic tool testing and validation. AutoDFBench 1.0 supports tool vendors, researchers, practitioners, and standardisation bodies by facilitating transparent, reproducible, and comparable assessments of DF technologies.
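The abstract defines the AutoDFBench Score as the mean of per-test-case F1-scores. A minimal sketch of that computation, using the standard precision/recall/F1 definitions; the per-case true-positive, false-positive, and false-negative counts below are purely hypothetical and not taken from the paper:

```python
# Sketch of the AutoDFBench Score: per-test-case precision, recall,
# and F1, with the mean F1 across all cases as the overall score.

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Standard metrics from true positives, false positives, false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical results for three test cases: (tp, fp, fn)
cases = [(90, 10, 5), (40, 0, 10), (70, 30, 0)]

f1_scores = [precision_recall_f1(*c)[2] for c in cases]
autodfbench_score = sum(f1_scores) / len(f1_scores)
print(f"AutoDFBench Score: {autodfbench_score:.3f}")
```

In the actual framework these counts would come from comparing a tool's JSON output against the annotated ground truth for each test case; averaging F1 rather than pooling counts weights every test case equally regardless of size.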
Problem

Research questions and friction points this paper is trying to address.

Automates benchmarking for diverse digital forensic tasks
Enables fair and reproducible comparison of forensic tools
Provides unified evaluation for both conventional and AI-generated code
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular framework automates digital forensic tool benchmarking
Integrates five CFTT areas with ground truth test cases
Uses RESTful API for standardized metrics and reproducible comparisons
Akila Wickramasekara
Forensics and Security Research Group, School of Computer Science, University College Dublin, Belfield, Dublin 4, Ireland
Tharusha Mihiranga
Forensics and Security Research Group, Department of Computing & Mathematics, South East Technological University, Ireland
Aruna Withanage
Chair for Cybersecurity, University of Augsburg, Augsburg, Germany
Buddhima Weerasinghe
School of Computer Science, University of Birmingham, Birmingham, United Kingdom
Frank Breitinger
University of Augsburg
Digital forensics, cybersecurity, network analysis, cybersecurity education
John Sheppard
Forensics and Security Research Group, Department of Computing & Mathematics, South East Technological University, Ireland
Mark Scanlon
Associate Professor in Forensic Computing and Cybercrime Investigation, University College Dublin
Digital forensics, cybersecurity, network analytics, digital investigation, cyber forensics