It's Not Just Labeling -- A Research on LLM Generated Feedback Interpretability and Image Labeling Sketch Features

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of low accuracy, poor interpretability, and high technical barriers for non-expert users in image annotation, this paper proposes a novel sketch-based, large language model (LLM)-assisted annotation paradigm. Methodologically, we introduce an interpretable correlation-modeling framework linking hand-drawn sketch features with LLM-generated feedback; design a virtual annotation assistant supporting multi-prompt strategies and sketch perturbation analysis; and systematically develop pipelines for synthetic data generation, sketch representation learning, LLM feedback quality assessment, and human-AI interface design. Experimental results demonstrate a strong positive correlation between sketch representation quality and LLM feedback reliability: for non-expert users, annotation efficiency improves by 3.2× and feedback interpretability scores increase by 41%. The approach significantly enhances the accessibility, scalability, and trustworthiness of annotation systems.

📝 Abstract
The quality of training data is critical to the performance of machine learning applications in domains like transportation, healthcare, and robotics. Accurate image labeling, however, often relies on time-consuming, expert-driven methods with limited feedback. This research introduces a sketch-based annotation approach supported by large language models (LLMs) to reduce technical barriers and enhance accessibility. Using a synthetic dataset, we examine how sketch recognition features relate to LLM feedback metrics, aiming to improve the reliability and interpretability of LLM-assisted labeling. We also explore how prompting strategies and sketch variations influence feedback quality. Our main contribution is a sketch-based virtual assistant that simplifies annotation for non-experts and advances LLM-driven labeling tools in terms of scalability, accessibility, and explainability.
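The abstract mentions testing how sketch variations influence feedback quality. One way to realize such a perturbation analysis is to re-submit the same sketch at increasing noise levels and compare the LLM's responses; the representation below (a list of strokes, each a list of normalized points) and the function name are assumptions for illustration:

```python
import random

def perturb_sketch(strokes, sigma=0.02, seed=0):
    """Add Gaussian jitter to each (x, y) point of a sketch.

    A minimal stand-in for sketch perturbation analysis: the same
    sketch is regenerated at increasing noise levels so the stability
    of the LLM's feedback can be measured across variants.
    """
    rng = random.Random(seed)
    return [[(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
             for x, y in stroke]
            for stroke in strokes]

# One triangular stroke in normalized [0, 1] coordinates.
triangle = [[(0.2, 0.2), (0.8, 0.2), (0.5, 0.8), (0.2, 0.2)]]
for sigma in (0.0, 0.02, 0.05):
    jittered = perturb_sketch(triangle, sigma=sigma)
    # Each perturbed variant would be sent to the LLM, and the
    # resulting feedback scored for quality at that noise level.
    print(sigma, jittered[0][0])
```

Seeding the jitter keeps the variants reproducible, which matters when comparing feedback scores across prompting strategies.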
Problem

Research questions and friction points this paper is trying to address.

Improving interpretability of LLM-generated feedback for image labeling
Reducing technical barriers in annotation via sketch-based LLM assistance
Enhancing reliability and scalability of non-expert labeling tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sketch-based annotation with LLM support
Correlation analysis linking sketch recognition features to LLM feedback quality
Virtual assistant for non-expert annotation
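The virtual assistant is described as supporting multi-prompt strategies. A hedged sketch of how such strategies might be organized, where every template is rendered for one sketch so the resulting feedback can be compared across strategies (the strategy names and template wording are invented for illustration):

```python
# Hypothetical prompt templates for a multi-prompt strategy; the paper's
# actual strategies and wording are not specified in this summary.
PROMPT_STRATEGIES = {
    "direct": "Label the object outlined by this sketch: {sketch_desc}",
    "step_by_step": "Describe the sketch {sketch_desc} feature by "
                    "feature, then propose a label.",
    "contrastive": "Given the sketch {sketch_desc}, list plausible "
                   "labels and explain why the best one fits.",
}

def build_prompts(sketch_desc):
    """Render every strategy for one sketch description."""
    return {name: tpl.format(sketch_desc=sketch_desc)
            for name, tpl in PROMPT_STRATEGIES.items()}

prompts = build_prompts("a four-legged animal with a long tail")
for name, text in prompts.items():
    print(name, "->", text)
```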