TemporalDoRA: Temporal PEFT for Robust Surgical Video Question Answering

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two challenges in surgical video question answering: linguistic variation in how questions are phrased introduces bias, and existing parameter-efficient fine-tuning methods struggle to model sparse temporal evidence across frames. To this end, we propose TemporalDoRA, a video-oriented parameter-efficient fine-tuning approach that, for the first time, integrates a lightweight temporal multi-head attention mechanism into the low-rank adaptation (LoRA) architecture. Combined with a selective weight-decomposition strategy, TemporalDoRA updates only the low-rank branches, enabling temporally aware parameter adaptation while keeping the backbone frozen. This design preserves temporal consistency, robustness, and parameter efficiency at once. Experiments on the REAL-Colon-VQA and EndoVis18-VQA datasets demonstrate significant improvements in answer accuracy for non-template questions, and ablation studies confirm that the proposed temporal mixing mechanism is the primary driver of these gains.

📝 Abstract
Surgical Video Question Answering (VideoQA) requires accurate temporal grounding while remaining robust to natural variation in how clinicians phrase questions, where linguistic bias can arise. Standard Parameter-Efficient Fine-Tuning (PEFT) methods adapt pretrained projections without explicitly modeling frame-to-frame interactions within the adaptation pathway, limiting their ability to exploit sparse temporal evidence. We introduce TemporalDoRA, a video-specific PEFT formulation that extends Weight-Decomposed Low-Rank Adaptation by (i) inserting lightweight temporal Multi-Head Attention (MHA) inside the low-rank bottleneck of the vision encoder and (ii) selectively applying weight decomposition only to the trainable low-rank branch rather than the full adapted weight. This design enables temporally aware updates while preserving a frozen backbone and stable scaling. By mixing information across frames within the adaptation subspace, TemporalDoRA steers updates toward temporally consistent visual cues and improves robustness with minimal parameter overhead. To benchmark this setting, we present REAL-Colon-VQA, a colonoscopy VideoQA dataset with 6,424 clip–question pairs, including paired rephrased Out-of-Template questions to evaluate sensitivity to linguistic variation. TemporalDoRA improves Out-of-Template performance, and ablation studies confirm that temporal mixing inside the low-rank branch is the primary driver of these gains. We also validate on EndoVis18-VQA adapted to short clips and observe consistent improvements on the Out-of-Template split. Code and dataset are available at https://anonymous.4open.science/r/TemporalDoRA-BFC8/.
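The two ideas in the abstract — temporal mixing inside the low-rank bottleneck, and weight decomposition applied only to the low-rank branch — can be sketched in a few lines. This is a minimal single-head NumPy illustration, not the authors' implementation: the function names, the single attention head, and the per-output-row normalization of `B @ A` are all assumptions about how the selective decomposition might be realized.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(z):
    """Self-attention over the T frames of a clip, inside the bottleneck.

    z: (T, r) per-frame low-rank features. The paper uses multi-head
    attention; one head keeps the sketch short.
    """
    r = z.shape[-1]
    attn = softmax(z @ z.T / np.sqrt(r))  # (T, T) frame-to-frame weights
    return attn @ z

def temporal_dora_forward(x, W, A, B, m, eps=1e-8):
    """Frozen linear layer W plus a temporally mixed low-rank update.

    x: (T, d_in) frame features, W: (d_out, d_in) frozen weight,
    A: (r, d_in) down-projection, B: (d_out, r) up-projection,
    m: (d_out,) learned magnitude applied only to the low-rank branch
    (the "selective" decomposition: W itself is left untouched).
    """
    z = temporal_attention(x @ A.T)        # mix frames in the rank-r subspace
    branch = z @ B.T                       # (T, d_out) low-rank update path
    # Normalize the update direction B @ A per output unit (an assumption
    # about the decomposition's exact form), then rescale by magnitude m.
    row_norm = np.linalg.norm(B @ A, axis=1) + eps
    return x @ W.T + branch * (m / row_norm)
```

Setting `m` to zero recovers the frozen backbone's output exactly, which is the property that lets the adapter be trained while `W` stays fixed.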
Problem

Research questions and friction points this paper is trying to address.

Surgical Video Question Answering
Temporal Grounding
Linguistic Bias
Robustness
Natural Language Variation
Innovation

Methods, ideas, or system contributions that make the work stand out.

TemporalDoRA
Parameter-Efficient Fine-Tuning
Temporal Multi-Head Attention
Surgical Video Question Answering
Low-Rank Adaptation
Luca Carlini
Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Italy
Chiara Lena
Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Italy
Cesare Hassan
IRCCS Humanitas Research Hospital, Italy
Danail Stoyanov
Professor of Robot Vision, University College London
Surgical Vision, Surgical AI, Surgical Robotics, Computer Assisted Interventions, Surgical Data Science
Elena De Momi
Politecnico di Milano
medical robotics, computer vision, artificial intelligence, human robot interaction
Sophia Bano
Assistant Professor in Robotics and AI, University College London
Computer Vision, Surgical Data Science, Surgical Robotics, Computer-assisted Intervention, Medical Imaging
Mobarak I. Hoque
UCL Hawkes Institute and Department of Computer Science, University College London, UK; Division of Informatics, Imaging and Data Science, University of Manchester, UK