Explicit Logic Channel for Validation and Enhancement of MLLMs on Zero-Shot Tasks

📅 2026-03-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited interpretability and lack of verifiable trust mechanisms in current multimodal large language models (MLLMs) when operating in zero-shot settings as black boxes. The authors propose an explicit logical reasoning channel that runs in parallel with the MLLM’s implicit inference, integrating large language models, vision foundation models, and probabilistic reasoning to support fact-based, counterfactual, and relational reasoning grounded in visual evidence. A novel consistency rate (CR) metric—requiring no ground-truth labels—is introduced to enable cross-channel validation and model selection. Experiments across two task types (MC-VQA and HC-REC) and three benchmarks demonstrate that the proposed approach significantly enhances the zero-shot performance, reliability, and interpretability of 11 mainstream MLLMs.

📝 Abstract
Frontier Multimodal Large Language Models (MLLMs) exhibit remarkable capabilities in Visual-Language Comprehension (VLC) tasks. However, they are often deployed as zero-shot solutions to new tasks in a black-box manner, so validating and understanding the behavior of these models becomes important for application to new tasks. We propose an Explicit Logic Channel, running in parallel with the black-box model channel, to perform explicit logical reasoning for model validation, selection, and enhancement. The frontier MLLM, encapsulating latent vision-language knowledge, can be considered an Implicit Logic Channel. The proposed Explicit Logic Channel, mimicking human logical reasoning, incorporates an LLM, a VFM, and logical reasoning with probabilistic inference for factual, counterfactual, and relational reasoning over explicit visual evidence. A Consistency Rate (CR) is proposed for cross-channel validation and model selection, even without ground-truth annotations. Additionally, cross-channel integration further improves zero-shot performance over standalone MLLMs, grounding answers in explicit visual evidence to enhance trustworthiness. Comprehensive experiments are conducted for two representative VLC tasks, i.e., MC-VQA and HC-REC, on three challenging benchmarks, with 11 recent open-source MLLMs from 4 frontier families. Our systematic evaluations demonstrate the effectiveness of the proposed ELC and CR for model validation, selection, and improvement of MLLMs with enhanced explainability and trustworthiness.
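The abstract does not give the CR formula, but a natural reading is agreement rate between the two channels' per-sample answers. A minimal sketch of such a metric, assuming each channel emits one discrete answer per sample (the function and variable names here are illustrative, not from the paper):

```python
def consistency_rate(implicit_answers, explicit_answers):
    """Fraction of samples on which the implicit channel (the MLLM)
    and the explicit logic channel agree.

    Hypothetical reconstruction of the paper's CR: no ground-truth
    labels are needed, only the two channels' predictions.
    """
    assert len(implicit_answers) == len(explicit_answers)
    agree = sum(a == b for a, b in zip(implicit_answers, explicit_answers))
    return agree / len(implicit_answers)

# Label-free model selection: among candidate MLLMs, pick the one
# whose answers are most consistent with the explicit channel.
candidates = {
    "mllm_a": ["B", "C", "A", "D"],
    "mllm_b": ["B", "A", "A", "D"],
}
explicit = ["B", "C", "A", "A"]
best = max(candidates, key=lambda m: consistency_rate(candidates[m], explicit))
```

Under this reading, a higher CR signals that the black-box model's latent reasoning is corroborated by verifiable visual evidence, which is what makes the metric usable for validation without annotations.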
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
Zero-Shot Tasks
Model Validation
Explainability
Trustworthiness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explicit Logic Channel
Zero-Shot Validation
Multimodal Large Language Models
Consistency Rate
Probabilistic Logical Reasoning
Mei Chee Leong
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore
Ying Gu
German Research Center for Artificial Intelligence
Anomaly Detection, Data Mining, Big Data, Artificial Intelligence
Hui Li Tan
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore
Liyuan Li
Senior Scientist of Institute for Infocomm Research, Singapore
computer vision, machine learning, pattern recognition, artificial intelligence, cognitive science
Nancy Chen
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore