GRPO++: Enhancing Dermatological Reasoning under Low Resource Settings

📅 2025-09-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Dermatological vision-language models (VLMs) exhibit weak structured reasoning capabilities under low-resource settings, where annotated dermatological data are scarce and advanced training methods incur prohibitive computational costs. Method: We propose GRPO++, a novel framework that extends Group Relative Policy Optimization (GRPO) with a knowledge-graph-driven preference alignment mechanism, synergistically integrating supervised fine-tuning (SFT) and direct preference optimization (DPO) into a multi-stage, dermatology-informed training pipeline. Contribution/Results: GRPO++ significantly reduces dependency on both labeled data and computational resources. Evaluated on a curated dermatological dataset, it substantially outperforms standard fine-tuning in disease classification accuracy and medical dialogue generation quality. The resulting end-to-end system is scalable, interpretable, and resource-efficient—advancing practical deployment of VLMs in clinical dermatology.

Technology Category

Application Category

📝 Abstract
Vision-Language Models (VLMs) show promise in medical image analysis, yet their capacity for structured reasoning in complex domains like dermatology is often limited by data scarcity and the high computational cost of advanced training techniques. To address these challenges, we introduce DermIQ-VLM, a VLM developed through a multi-stage, resource-efficient methodology designed to emulate a dermatologist's diagnostic process. Our primary contribution is a modified version of Grouped Relative Policy Optimization (GRPO), called GRPO++, which stabilizes the powerful but data-intensive GRPO framework. Our proposed training pipeline first employs GRPO++ for reasoning-oriented disease recognition, followed by supervised fine-tuning for conversational ability. To mitigate factual errors introduced during this step, we then align the model using Direct Preference Optimization (DPO), leveraging a Knowledge Graph-based system as a scalable proxy for expert preference. A preliminary evaluation on a curated dermatological dataset demonstrates that our proposed methodology yields notable performance gains over standard fine-tuning approaches. These findings validate the potential of our pipeline as a feasible pathway for developing specialized, reliable VLMs in resource-constrained environments.
Problem

Research questions and friction points this paper is trying to address.

Enhancing dermatological reasoning with limited data resources
Stabilizing data-intensive optimization for medical VLMs
Mitigating factual errors in conversational medical AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

GRPO++ stabilizes reasoning in low-resource settings
Multi-stage pipeline combines reasoning and conversation training
Knowledge Graph alignment reduces factual errors efficiently
🔎 Similar Papers
No similar papers found.
I
Ismam Nur Swapnil
Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka-1205, Bangladesh
A
Aranya Saha
Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka-1205, Bangladesh
Tanvir Ahmed Khan
Tanvir Ahmed Khan
Columbia University
Computer ArchitectureSoftware SystemsProgramming Languages
Mohammad Ariful Haque
Mohammad Ariful Haque
Professor of Bangladesh University of Engineering and Technology
Signal processingdeep learning