Explainable Artificial Intelligence techniques for interpretation of food datasets: a review

📅 2025-04-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
AI models in food engineering suffer from limited interpretability due to their "black-box" nature, hindering trustworthy deployment in quality control. This paper presents a systematic review of eXplainable AI (XAI) in food engineering, introducing, for the first time, a two-dimensional taxonomy linking data modalities (e.g., hyperspectral, image, and sensor time-series data) with explanation methods (e.g., SHAP, Grad-CAM, LIME, attention visualization, and surrogate models). Synthesizing insights from 37 pivotal studies, we identify three key application trends: real-time quality control, traceability analysis, and regulatory compliance. We further articulate four critical implementation challenges: data heterogeneity, explanation fidelity, domain-specific adaptability, and human-AI collaboration. This work fills a significant gap by providing the first comprehensive, domain-tailored XAI review for food engineering, establishing both a theoretical framework and actionable guidelines for deploying trustworthy AI in food safety regulation.

📝 Abstract
Artificial Intelligence (AI) has become essential for analyzing complex data and solving highly challenging tasks. It is being applied across numerous disciplines beyond computer science, including Food Engineering, where there is a growing demand for accurate and trustworthy predictions to meet stringent food quality standards. However, this requires increasingly complex AI models, raising reliability concerns. In response, eXplainable AI (XAI) has emerged to provide insights into AI decision-making, aiding model interpretation by developers and users. Nevertheless, XAI remains underutilized in Food Engineering, limiting model reliability. For instance, in food quality control, AI models using spectral imaging can detect contaminants or assess freshness levels, but their opaque decision-making process hinders adoption. XAI techniques such as SHAP (SHapley Additive exPlanations) and Grad-CAM (Gradient-weighted Class Activation Mapping) can pinpoint which spectral wavelengths or image regions contribute most to a prediction, enhancing transparency and aiding quality control inspectors in verifying AI-generated assessments. This survey presents a taxonomy for classifying food quality research using XAI techniques, organized by data types and explanation methods, to guide researchers in choosing suitable approaches. We also highlight trends, challenges, and opportunities to encourage the adoption of XAI in Food Engineering.
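To illustrate the kind of wavelength-level attribution the abstract describes, here is a minimal, framework-free sketch of exact Shapley values (the idea underlying SHAP) computed by brute-force subset enumeration over a toy linear "freshness score" model. The band names, weights, and baseline values are illustrative assumptions, not from the paper; real SHAP tooling (e.g., the `shap` library) approximates these values efficiently for large models.

```python
from itertools import combinations
from math import factorial

# Toy "freshness score" model over three spectral bands (illustrative weights).
WEIGHTS = {"nir_970nm": 0.8, "vis_680nm": -0.5, "vis_550nm": 0.2}

def model(x):
    return sum(WEIGHTS[b] * v for b, v in x.items())

def shapley_values(x, baseline):
    """Exact Shapley attributions: the weighted average marginal contribution
    of each band over all subsets, with absent bands set to baseline values."""
    bands = list(x)
    n = len(bands)
    phi = {}
    for b in bands:
        others = [o for o in bands if o != b]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_b = {o: x[o] if (o in subset or o == b) else baseline[o]
                          for o in bands}
                without_b = {o: x[o] if o in subset else baseline[o]
                             for o in bands}
                total += weight * (model(with_b) - model(without_b))
        phi[b] = total
    return phi

sample   = {"nir_970nm": 0.9, "vis_680nm": 0.3, "vis_550nm": 0.6}
baseline = {"nir_970nm": 0.5, "vis_680nm": 0.5, "vis_550nm": 0.5}
phi = shapley_values(sample, baseline)
# Efficiency property: attributions sum to model(sample) - model(baseline),
# so an inspector can see exactly how each band moved the prediction.
```

Ranking bands by `|phi[b]|` gives the "which wavelengths contribute most" view the abstract mentions; for this linear model, each attribution reduces to weight × (value − baseline).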
Problem

Research questions and friction points this paper is trying to address.

Enhancing transparency in AI for food quality analysis
Addressing reliability concerns in complex food engineering models
Promoting XAI adoption to interpret spectral imaging decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses SHAP for transparent food quality predictions
Applies Grad-CAM to interpret spectral imaging data
Classifies XAI techniques by data and explanation methods
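The Grad-CAM interpretation mentioned above can be sketched without any deep-learning framework: given a convolutional layer's activation maps and the gradients of the class score with respect to them, each map is weighted by its global-average-pooled gradient, the weighted maps are summed, and a ReLU keeps only positively contributing regions. The toy activations and gradients below are illustrative stand-ins for values a framework would provide via backpropagation hooks.

```python
def grad_cam(activations, gradients):
    """Grad-CAM heatmap: weight each activation map by the mean gradient of
    the class score over that map, sum the weighted maps, then apply ReLU.
    activations, gradients: lists of equally-shaped HxW maps (nested lists)."""
    h, w = len(activations[0]), len(activations[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for a_map, g_map in zip(activations, gradients):
        # Channel weight: global average pooling of the gradient map.
        alpha = sum(sum(row) for row in g_map) / (h * w)
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * a_map[i][j]
    # ReLU keeps only regions with a positive influence on the class score.
    return [[max(0.0, v) for v in row] for row in cam]

# Illustrative 2x2 activation maps from two channels and their gradients.
acts  = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 3.0], [1.0, 0.0]]]
grads = [[[0.4, 0.4], [0.4, 0.4]], [[-0.2, -0.2], [-0.2, -0.2]]]
heatmap = grad_cam(acts, grads)
```

Upsampled and overlaid on the input image (or spectral map), the heatmap highlights the regions that drove the prediction, which is how Grad-CAM supports visual inspection in quality control.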